The Road to Learning Kubernetes (1): Deploying a K8S Cluster with kubeadm
Published: 2019-06-27


How many views will this get in a week? I'm publishing this one post to see whether it's worth writing study notes on 51CTO again. Any support out there, folks?

Deploying a cluster with kubeadm

Node name    IP address      Components installed                Pod CIDR        Service CIDR   OS
k8s-master   192.168.56.11   docker, kubeadm, kubectl, kubelet   10.244.0.0/16   10.96.0.0/12   CentOS 7.4
k8s-node01   192.168.56.12   docker, kubeadm, kubelet            10.244.0.0/16   10.96.0.0/12   CentOS 7.4
k8s-node02   192.168.56.13   docker, kubeadm, kubelet            10.244.0.0/16   10.96.0.0/12   CentOS 7.4

1. Configure the Kubernetes repository

[root@k8s-master ~]# cd /etc/yum.repos.d/
Use the Alibaba Cloud mirrors: https://opsx.alibaba.com/mirror
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # configure the Docker repo
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo   # configure the Kubernetes repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master yum.repos.d]# yum repolist   # check that the repos are available

Copy the repo files to node01 and node02:

[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node1:/etc/yum.repos.d/
kubernetes.repo                         100%  276   276.1KB/s   00:00
docker-ce.repo                          100% 2640     1.7MB/s   00:00
[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node2:/etc/yum.repos.d/
kubernetes.repo                         100%  276   226.9KB/s   00:00
docker-ce.repo                          100% 2640     1.7MB/s   00:00

2. Install docker, kubelet, kubeadm, and the kubectl command-line tool on the master node

[root@k8s-master yum.repos.d]# yum install -y docker-ce kubelet kubeadm kubectl
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
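Not shown above: it doesn't hurt to also enable kubelet to start on boot on the master (kubeadm init activates the service itself, as its output shows below, but enabling it keeps things consistent with what the worker nodes do later):

[root@k8s-master ~]# systemctl enable kubelet   # optional: make kubelet survive a reboot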

Once docker is started, the initialization needs to pull the images the control plane depends on. These downloads often fail because the registries are hosted outside China, so it is best to pre-pull the images; kubeadm can also fetch images from a local private registry.
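If your master can reach k8s.gcr.io directly (for example through a proxy), kubeadm itself can list and pre-pull the required images; the init output in step 4 points at 'kubeadm config images pull'. A minimal sketch:

[root@k8s-master ~]# kubeadm config images list --kubernetes-version=v1.11.1   # show which images this release needs
[root@k8s-master ~]# kubeadm config images pull --kubernetes-version=v1.11.1   # pre-pull them from k8s.gcr.io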

3. Pre-pull the images

On the master node, pull the images with docker pull and then re-tag them:

docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Images to pull on the worker nodes:

docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

4. Deploy the Kubernetes master node

[root@k8s-master ~]# vim /etc/sysconfig/kubelet        # configure kubelet to tolerate swap
KUBELET_EXTRA_ARGS="--fail-swap-on=false"              # without this, kubeadm errors out if swap is enabled
# Changing the kubelet config only suppresses the swap warning; it is still best to turn swap off
[root@k8s-master ~]# swapoff -a                        # disable swap
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap     # initialize the master
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0821 18:14:22.223765   18053 kernel_validator.go:81] Validating kernel version
I0821 18:14:22.223894   18053 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 51.033696 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: dx7mko.j2ug1lqjra5bf6p2
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf

At this point the Kubernetes master deployment is done; the process takes a few minutes. When it finishes, kubeadm prints a join command:

kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf

This kubeadm join command is what adds more worker nodes to this master. You will need it later when deploying the worker nodes, so record it somewhere.

kubeadm also prints the commands you need to run before using the cluster for the first time:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands are needed because access to a Kubernetes cluster is encrypted and authenticated by default. They copy the security configuration file that was just generated into the current user's .kube directory, and kubectl uses the credentials in that directory by default when talking to the cluster. Without this, you would have to tell kubectl where the file lives every time via the KUBECONFIG environment variable.

If you are deploying as the root user, you can simply export the environment variable instead:

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
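Either way, a quick sanity check (assuming the kubeconfig above is in place) is to ask kubectl where the control plane is:

[root@k8s-master ~]# kubectl cluster-info   # should report the Kubernetes master running at https://192.168.56.11:6443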

With that, cluster initialization is complete. Use kubectl get cs to check the health of the cluster components:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    43m       v1.11.2

From the output above, the master components controller-manager, scheduler, and etcd are all healthy. So where did the apiserver go? kubectl talks to the apiserver to read the cluster state out of etcd, so the fact that we can get this status at all means the apiserver is running normally. kubectl get node shows the master in the NotReady state, which is because the Pod network has not been deployed yet.
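If you want to see the apiserver explicitly, it runs as a static Pod in the kube-system namespace:

[root@k8s-master ~]# kubectl get pods -n kube-system
# kube-apiserver-k8s-master should be listed here, alongside etcd, kube-scheduler and
# kube-controller-manager (the full listing appears in step 6 below)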

5. Install the CNI network plugin

Install a Pod network plugin so that Pods can communicate with each other. A cluster can only run one Pod network. Before deploying flannel, the kernel must be configured to pass bridged IPv4 traffic to the iptables chains; this is a prerequisite for CNI plugins to work (if the values checked below are not already 1, they can be set with sysctl, as sketched next).
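On these hosts the two kernel parameters are already 1; if they are not on yours, a minimal way to set them persistently (the file name /etc/sysctl.d/k8s.conf is my own choice) is:

[root@k8s-master ~]# modprobe br_netfilter          # load the bridge netfilter module if it is not loaded yet
[root@k8s-master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@k8s-master ~]# sysctl --system                # reload all sysctl configuration files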

[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@k8s-master ~]# kubectl get node   # check the master again: it is now Ready
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    3h        v1.11.2

With that, the Kubernetes master node is fully deployed. By default, the master does not run user Pods; this is enforced by Kubernetes' taint mechanism, covered in section 8.

6. Deploy the Kubernetes worker nodes

A Kubernetes worker node is almost identical to the master: both run the kubelet. The only difference is that during kubeadm init, once the kubelet starts, the master also brings up kube-apiserver, kube-scheduler, and kube-controller-manager as system Pods.
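On the master, these extra components exist as static Pod manifests that kubeadm wrote during init (the paths appeared in the init output in step 4):

[root@k8s-master ~]# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml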

By comparison, deploying a worker node therefore takes only two steps:

(1) Install kubeadm, kubelet, and docker on every worker node.

(2) Run the kubeadm join command generated when the master node was deployed.

[root@k8s-node01 ~]# yum install -y docker kubeadm kubelet
[root@k8s-node01 ~]# systemctl enable docker kubelet
[root@k8s-node01 ~]# systemctl start docker

Pre-pull the images here first (unless docker is configured with a proxy, which is a different story); following the approach in this post, pre-pull the images the node needs from step 3.

[root@k8s-node01 ~]# kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   Ready      master    7h        v1.11.2
k8s-node01   NotReady   <none>    3h        v1.11.2

After joining the cluster, the node status shows node01 as NotReady. That is because node01 either does not yet have the required images or is still pulling them; once the pulls finish, the corresponding Pods start and the node becomes Ready.

[root@k8s-node01 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
5502c29b43df        f0fad859c909           "/opt/bin/flanneld -…"   3 minutes ago       Up 3 minutes                            k8s_kube-flannel_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_1
db1cc0a6fec4        d5c25579d0ff           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
bc54ad3399e8        k8s.gcr.io/pause:3.1   "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
cbfca066b71d        k8s.gcr.io/pause:3.1   "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_0
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-78fcdf6894-nmcmz             1/1       Running   0          1d        10.244.0.3      k8s-master
coredns-78fcdf6894-p5pfm             1/1       Running   0          1d        10.244.0.2      k8s-master
etcd-k8s-master                      1/1       Running   1          1d        192.168.56.11   k8s-master
kube-apiserver-k8s-master            1/1       Running   8          1d        192.168.56.11   k8s-master
kube-controller-manager-k8s-master   1/1       Running   4          1d        192.168.56.11   k8s-master
kube-flannel-ds-n5c86                1/1       Running   0          1d        192.168.56.11   k8s-master
kube-flannel-ds-pgpr7                1/1       Running   1          1d        192.168.56.12   k8s-node01
kube-proxy-rxlt7                     1/1       Running   1          1d        192.168.56.11   k8s-master
kube-proxy-vxckf                     1/1       Running   0          1d        192.168.56.12   k8s-node01
kube-scheduler-k8s-master            1/1       Running   2          1d        192.168.56.11   k8s-master
[root@k8s-master ~]# kubectl get node   # the node's status has now changed to Ready
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2

7. Add another worker node to the cluster

[root@k8s-node02 ~]# yum install -y docker kubeadm kubelet
[root@k8s-node02 ~]# systemctl enable docker kubelet
[root@k8s-node02 ~]# systemctl start docker

同样预先拉取好node节点所需镜像,在此处犯错的还有,根据官方说明tonken的默认有效时间为24h,由于时间差,导致这里的token失效,可以使用kubeadm token list查看token,发现之前初始化的tonken已经失效了。

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
dx7mko.j2ug1lqjra5bf6p2   <invalid>   2018-08-22T18:15:43-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token

So a new token has to be generated, which is done as follows:

[root@k8s-master ~]# kubeadm token create
1vxhuq.qi11t7yq2wj20cpe

If you no longer have the value for --discovery-token-ca-cert-hash, it can be obtained by running the following command chain on the master node:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
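As an aside, kubeadm can also print a complete join command, token and hash included, in one go (the flag exists in recent releases; I believe it is available in this version as well):

[root@k8s-master ~]# kubeadm token create --print-join-command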

Now run the kubeadm join command again to add node02 to the cluster. The --discovery-token-ca-cert-hash here can still be the one from the initial deployment, since the CA certificate has not changed:

[root@k8s-node02 ~]# kubeadm join 192.168.56.11:6443 --token 1vxhuq.qi11t7yq2wj20cpe --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
[root@k8s-master ~]# kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2
k8s-node02   Ready     <none>    2h        v1.11.2

8. Adjust the master's Pod scheduling policy with Taints/Tolerations

As noted earlier, user Pods are not allowed to run on the master node by default. Kubernetes enforces this with its Taint/Toleration mechanism.

The principle: once a node has a Taint applied, i.e. it has been "tainted", no Pods can run on it, because Kubernetes Pods are, so to speak, squeamish about dirt.

Only if an individual Pod declares that it can tolerate that "taint", i.e. declares a Toleration, may it run on that node.

The command to put a "taint" on a node is:

$ kubectl taint nodes node1 foo=bar:NoSchedule

This gives node1 a key-value Taint, namely foo=bar:NoSchedule. The NoSchedule effect means the Taint only takes effect when scheduling new Pods; it does not affect Pods already running on node1, even if they have no Toleration.

So how does a Pod declare a Toleration?

Simply add a tolerations field to the spec section of the Pod's YAML file:

apiVersion: v1
kind: Pod
...
spec:
  tolerations:
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"

This Toleration means: the Pod can tolerate any Taint with the key-value pair foo=bar (operator: "Equal", the "equals" operation).

We can use kubectl describe to inspect the Taints field on the master node:

[root@k8s-master ~]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
.....
Taints:             node-role.kubernetes.io/master:NoSchedule
......

As you can see, the master node carries the taint node-role.kubernetes.io/master:NoSchedule by default; its key is node-role.kubernetes.io/master, and no value is provided.

In that case you need the "Exists" operator (operator: "Exists", meaning the key only needs to exist, no value required), which declares that the Pod tolerates every Taint with the given key, for the Pod to be allowed onto the master node. Using the foo key as the example again:

apiVersion: v1
kind: Pod
...
spec:
  tolerations:
  - key: "foo"
    operator: "Exists"
    effect: "NoSchedule"

Of course, we can also simply delete the Taint to remove the scheduling restriction:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Appending a hyphen "-" to the key "node-role.kubernetes.io/master" removes every Taint whose key is "node-role.kubernetes.io/master".
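If you later want the default behaviour back, the taint can be re-added; a sketch (the empty value matches how kubeadm sets it):

[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule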

And that wraps up deploying a k8s cluster with kubeadm, for now!

Reposted from: https://blog.51cto.com/jinlong/2311708
