Third-party platforms that wrap Kubernetes:
https://kubesphere.com.cn/
https://microk8s.io/
https://github.com/Qihoo360/wayne
------------
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
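The commands above assume a Kubernetes apt source is already configured; a minimal sketch for Ubuntu/Debian using the Aliyun mirror (consistent with the image repository used below):
apt update && apt install -y apt-transport-https curl
# Add the Aliyun Kubernetes apt key and repository
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt update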
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --pod-network-cidr=10.244.0.0/16
Note: 10.244.0.0/16 is the default Pod CIDR in flannel's kube-flannel.yml and matches the Pod IPs seen in the verification steps below.
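On success, kubeadm init prints a join command for the worker nodes; run it on each node (the token and hash are cluster-specific, shown here as placeholders):
kubeadm join 192.168.10.104:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>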
Copy the kubeconfig file to the .kube directory under your home directory (master only):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check the status of the pods in the kube-system namespace (master only):
kubectl get pod -n kube-system
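Once flannel is up, all kube-system pods (including coredns) should reach Running; node readiness can be confirmed with:
kubectl get nodes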
BT Panel firewall: allow the following port ranges:
[100:9999]
[10000:49999]
[50000:59999]
To allow the master node to schedule pods, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
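To restore the default behaviour later, the taint can be re-added (the node name is a placeholder):
kubectl taint nodes <node-name> node-role.kubernetes.io/master=:NoSchedule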
Install Helm:
# Download the Helm binary
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.16.1-linux-amd64.tar.gz
# Unpack it
tar -zxvf helm-v2.16.1-linux-amd64.tar.gz
# Copy the helm binary into the bin directory
cp linux-amd64/helm /usr/local/bin/
helm init
# If the tiller image cannot be pulled from gcr.io, re-initialize from the Aliyun mirror:
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
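Verify that tiller came up and that the client and server versions match:
kubectl -n kube-system get pods -l app=helm
helm version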
Test:
kubectl create deployment nginx --image=nginx:alpine
kubectl scale deployment nginx --replicas=2
kubectl get pods -l app=nginx -o wide
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m12s
nginx NodePort 10.96.140.175 <none> 80:30285/TCP 4m59s
kubectl get pod --all-namespaces -o wide
Use kubectl describe pod to inspect a Pod in detail, e.g. to confirm which image failed to pull:
kubectl describe pod nginx-5b6fb6dd96-fnc48
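If describe shows a gcr.io image stuck in ImagePullBackOff, a common workaround is to pull the same tag from a mirror and retag it on each node (image name and tag are placeholders):
docker pull registry.aliyuncs.com/google_containers/<image>:<tag>
docker tag registry.aliyuncs.com/google_containers/<image>:<tag> k8s.gcr.io/<image>:<tag>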
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
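The admin-user account referenced below is not created by the dashboard manifest itself; a minimal sketch of creating it with cluster-admin rights (in kube-system, where the v1.10.1 manifest deploys):
kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user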
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
kubectl proxy
The Dashboard is now reachable through the proxy at the URL shown below.
To allow external access (note: this occupies the terminal):
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
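Since the proxy occupies the terminal, it can also be run in the background:
nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' >/dev/null 2>&1 &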
Open port 8001 in the firewall, then access the Dashboard from a browser (192.168.10.104 is the master IP):
http://192.168.10.104:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
The nginx service can also be reached from outside the cluster via any NodeIP:NodePort (the ClusterIP on port 80 only works from inside the cluster or from the nodes):
[centos@k8s-master ~]$ curl 10.96.140.175:80
[centos@k8s-master ~]$ curl 192.168.92.57:30285
[centos@k8s-master ~]$ curl 192.168.92.58:30285
Finally, verify that DNS and the pod network are working.
Run busybox in interactive mode:
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66959f6557-s5qqs:/ ]$
Run nslookup nginx to check that the service resolves to its cluster IP, verifying that DNS works:
[ root@curl-66959f6557-s5qqs:/ ]$ nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.108.17.2 nginx.default.svc.cluster.local
Access the service by name to verify that kube-proxy works:
[ root@curl-66959f6557-q472z:/ ]$ curl http://nginx/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-q472z:/ ]$
Access each of the two Pods' internal IPs to verify that cross-node pod networking works:
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
</body>
</html>
[ root@curl-66959f6557-s5qqs:/ ]$
Removing nodes from the cluster
To remove the k8s-node2 node, for example, run on the master node:
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2
After the two commands above complete, run the following on k8s-node2 to reset kubeadm's installation state:
kubeadm reset
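kubeadm reset does not flush iptables/IPVS rules or remove the CNI configuration; a cleanup sketch (adjust for your environment):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear   # only if kube-proxy ran in IPVS mode
rm -rf /etc/cni/net.d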
Deleting the node on the master does not clean up the containers still running on k8s-node2; the cleanup command must be run manually on the removed node.
To reconfigure the cluster, simply re-run kubeadm init or kubeadm join with the new parameters.
At this point the three-node cluster is complete. From here you can add more nodes, or deploy components such as the dashboard, the Helm package manager, an EFK logging stack, the Prometheus Operator monitoring stack, and rook+ceph storage.
Private Docker registry on Aliyun (Alibaba Cloud):
https://blog.csdn.net/kozazyh/article/details/79427119
===========================
Remove all services in the current namespace (note: this also deletes the built-in kubernetes service in default, which the API server recreates automatically):
kubectl delete service --all
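The nginx and curl test deployments created earlier can be removed the same way:
kubectl delete deployment nginx curl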