Setting Up a Kubernetes 1.8.5 Cluster on CentOS 7.4
Published: 2019-06-22


Environment

Role          OS          IP             Hostname  Docker version
master, node  CentOS 7.4  192.168.0.210  node210   17.11.0-ce
node          CentOS 7.4  192.168.0.211  node211   17.11.0-ce
node          CentOS 7.4  192.168.0.212  node212   17.11.0-ce

1. Basic environment configuration (run on all servers)

a. Disable SELinux

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0
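
A quick way to confirm the change took effect (an optional check, not part of the original steps):

# getenforce should report Permissive until the next reboot; the config file should show SELINUX=disabled
getenforce
grep '^SELINUX=' /etc/sysconfig/selinux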

b. Install Docker

curl -sSL https://get.docker.com/ | sh

c. Configure a domestic Docker registry mirror (accelerator)

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://e2a6d434.m.daocloud.io

d. Enable Docker to start on boot

systemctl enable docker.service
systemctl restart docker
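
To double-check both settings (an optional sanity check), verify the service is enabled and the mirror shows up in docker info:

systemctl is-enabled docker
# Lists the configured registry mirror, if any
docker info 2>/dev/null | grep -A1 'Registry Mirrors'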

2. Prepare Kubernetes certificates (run on the master)

a. To save deployment time when copying files to the node servers, set up passwordless SSH trust first

ssh-keygen -t rsa
ssh-copy-id 192.168.0.211
ssh-copy-id 192.168.0.212
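
A quick test that the passwordless trust works (optional): each command should print the remote hostname without asking for a password.

ssh 192.168.0.211 hostname
ssh 192.168.0.212 hostname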

b. Download the certificate generation tools

yum -y install wget
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

c. Create the CA certificate

#Prepare the working directory

mkdir /root/ssl
cd /root/ssl

#Create the CA certificate config

vim ca-config.json

{  "signing": {    "default": {      "expiry": "87600h"    },    "profiles": {      "kubernetes": {        "usages": [            "signing",            "key encipherment",            "server auth",            "client auth"        ],        "expiry": "87600h"      }    }  }}

#Create the CA certificate signing request

vim ca-csr.json

{  "CN": "kubernetes",  "key": {    "algo": "rsa",    "size": 2048  },  "names": [    {      "C": "CN",      "ST": "JIANGXI",      "L": "NANCHANG",      "O": "k8s",      "OU": "System"    }  ]}

#Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
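
This produces ca.pem, ca-key.pem and ca.csr in the current directory. If you want to inspect the generated CA (optional), the cfssl-certinfo tool downloaded above can print it:

ls ca*.pem ca.csr
cfssl-certinfo -cert ca.pem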

#Create the kubernetes certificate signing request (replace the IPs and hostnames in the hosts list with your own)

vim kubernetes-csr.json

{    "CN": "kubernetes",    "hosts": [      "127.0.0.1",      "192.168.0.210",      #修改成自己主机的IP      "192.168.0.211",      #修改成自己主机的IP      "192.168.0.212",      #修改成自己主机的IP      "10.254.0.1",      "kubernetes",      "node210",      #修改成自己主机的主机名      "node211",      #修改成自己主机的主机名      "node212",      #修改成自己主机的主机名      "kubernetes.default",      "kubernetes.default.svc",      "kubernetes.default.svc.cluster",      "kubernetes.default.svc.cluster.local"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "ST": "JIANGXI",            "L": "JIANGXI",            "O": "k8s",            "OU": "System"        }    ]}

#Generate the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

#Create the admin certificate signing request

vim admin-csr.json

{  "CN": "admin",  "hosts": [],  "key": {    "algo": "rsa",    "size": 2048  },  "names": [    {      "C": "CN",      "ST": "JIANGXI",      "L": "JIANGXI",      "O": "system:masters",      "OU": "System"    }  ]}

#Generate the admin certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#Create the kube-proxy certificate signing request

vim kube-proxy-csr.json

{  "CN": "system:kube-proxy",  "hosts": [],  "key": {    "algo": "rsa",    "size": 2048  },  "names": [    {      "C": "CN",      "ST": "JIANGXI",      "L": "JIANGXI",      "O": "k8s",      "OU": "System"    }  ]}

#Generate the kube-proxy certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#Distribute the certificates

mkdir -p /etc/kubernetes/ssl
cp -r *.pem /etc/kubernetes/ssl
cd /etc
scp -r kubernetes/ 192.168.0.211:/etc/
scp -r kubernetes/ 192.168.0.212:/etc/
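
Optionally, confirm the certificates actually arrived on the node servers:

ssh 192.168.0.211 'ls /etc/kubernetes/ssl'
ssh 192.168.0.212 'ls /etc/kubernetes/ssl'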

3. Install and configure the etcd cluster

a. Download etcd and distribute it to the nodes

wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz
tar zxf etcd-v3.2.11-linux-amd64.tar.gz
mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin
scp -r /usr/local/bin/etc* 192.168.0.211:/usr/local/bin/
scp -r /usr/local/bin/etc* 192.168.0.212:/usr/local/bin/

b. Create the etcd systemd service file

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.0.210:2380,infra2=https://192.168.0.211:2380,infra3=https://192.168.0.212:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

c. Create the required directories

mkdir -p /var/lib/etcd/
mkdir /etc/etcd

d. Edit the etcd configuration file

vim /etc/etcd/etcd.conf

On node210, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.210:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.210:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.210:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.210:2379"

On node211, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.211:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.211:2379"

On node212, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.212:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.212:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.212:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.212:2379"

#Run on all nodes to start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

If etcd fails to start, check /var/log/messages to troubleshoot.
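
Besides /var/log/messages, the systemd journal usually shows the cause directly, for example:

# Show the last 50 log lines for the etcd unit
journalctl -u etcd --no-pager | tail -n 50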

e. Verify the cluster is healthy

#Verify that etcd started successfully
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health

4. Configure Kubernetes

a. Download the pre-built Kubernetes binaries and distribute them

wget https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
cp -rf kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kubefed,kubelet,kube-proxy,kube-scheduler} /usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.211:/usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.212:/usr/local/bin/

#To find the latest Kubernetes release, go to https://github.com/kubernetes/kubernetes/releases, then open the corresponding CHANGELOG-x.x.md, which lists the binary download links.

b. Create the TLS bootstrapping token

cd /etc/kubernetes
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

c. Create the kubelet bootstrapping kubeconfig file

cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.0.210:6443"

#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

#Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

#Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

#Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#Grant the kubelet-bootstrap user the node-bootstrapper role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

d. Create the kube-proxy kubeconfig file

export KUBE_APISERVER="https://192.168.0.210:6443"

#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

#Set client authentication parameters

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

#Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

#Set the default context

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

e. Create the kubectl kubeconfig file

export KUBE_APISERVER="https://192.168.0.210:6443"

#Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}

#Set client authentication parameters

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

#Set context parameters

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

#Set the default context

kubectl config use-context kubernetes

f. Distribute the bootstrap.kubeconfig and kube-proxy.kubeconfig files to the other servers

scp -r *.kubeconfig 192.168.0.211:/etc/kubernetes/
scp -r *.kubeconfig 192.168.0.212:/etc/kubernetes/

5. Install and configure the master

a. Install and configure the apiserver

#apiserver systemd service file
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#Common Kubernetes configuration

vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.0.210:8080"

#apiserver parameters

vim /etc/kubernetes/apiserver

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##

## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=192.168.0.210 --bind-address=192.168.0.210 --insecure-bind-address=192.168.0.210"

## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"

## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"

## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"

## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h --allow-privileged=true"

#Start the apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

#If errors occur, check /var/log/messages

b. Configure the controller-manager service

#controller-manager systemd service file
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#controller-manager configuration file

vim /etc/kubernetes/controller-manager

# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

#Start the controller-manager service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

c. Install and configure the scheduler service

#scheduler systemd service file
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#scheduler configuration file

vim /etc/kubernetes/scheduler

# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

#Start the scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

d. Verify the master is healthy

kubectl get componentstatuses
#Output like the following indicates everything is healthy

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

6. Install the node components (all nodes)

a. Install and configure flannel (we use flannel for the container network)

#Install flannel with yum
yum install -y flannel

#Check that the node has the certificates
ls /etc/kubernetes/ssl

#Edit the flanneld.service file as follows
vi /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${ETCD_ENDPOINTS} \
  -etcd-prefix=${ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

#Edit the flannel configuration file

vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
ETCD_ENDPOINTS="https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

#Create the network configuration in etcd

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network

etcdctl --endpoints=https://192.168.0.210:2379,https://192.168.0.211:2379,https://192.168.0.212:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

#Start the flannel service

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

b. Configure the Docker service file to integrate with flannel

vim /usr/lib/systemd/system/docker.service

Above the ExecStart line, add:

EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env

and change ExecStart to:

ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

The result looks like this:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
EnvironmentFile=-/run/docker_opts.env
EnvironmentFile=-/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

#Restart the Docker service

systemctl daemon-reload
systemctl restart docker
systemctl status docker

Remember: start flannel first, then Docker.
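
One way to confirm flannel started first and handed Docker a subnet (an optional check based on the paths configured in the service files above):

# Subnet and MTU assigned to this node by flannel
cat /run/flannel/subnet.env
# DOCKER_NETWORK_OPTIONS written by mk-docker-opts.sh
cat /run/flannel/docker
# docker0 should now have an address inside the flannel subnet
ip addr show docker0 | grep inet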

c. Check that etcd has assigned subnets

etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets

The output looks roughly like this:

/kube-centos/network/subnets/172.30.1.0-24
/kube-centos/network/subnets/172.30.54.0-24
/kube-centos/network/subnets/172.30.99.0-24

etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config

The output looks roughly like this:

{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

d. Install and configure kubelet

#Create the kubelet systemd service file
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

#kubelet kubeconfig (authentication) file

vim /etc/kubernetes/kubelet.kubeconfig

apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.0.210:8080
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local

#kubelet configuration file

vim /etc/kubernetes/kubelet

On node210, /etc/kubernetes/kubelet contains:

###
# kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.210"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.210"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

On node211, the configuration file is:

###
# kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.211"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.211"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

On node212, the configuration file is:

###
# kubernetes kubelet (minion) config

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.212"

## The port for the info server to serve on
#KUBELET_PORT="--port=10250"

## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.212"

## location of the api-server
#KUBELET_API_SERVER="--api-servers=http://192.168.0.210:8080"
KUBELET_API_SERVER=" "

## pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"

## Add your own!
#KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true"

#Start the kubelet service

mkdir -p /var/lib/kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

#This step is error-prone; if it fails, check /var/log/messages to troubleshoot

#Check that the kubelet service is working

kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.210   Ready     <none>    14h       v1.8.5
192.168.0.211   Ready     <none>    14h       v1.8.5
192.168.0.212   Ready     <none>    14h       v1.8.5

e. Install and configure kube-proxy

#kube-proxy systemd service file
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

#The kube-proxy configuration files are as follows:

node210:
vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.210 --hostname-override=192.168.0.210 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node211:

vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.211 --hostname-override=192.168.0.211 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node212:

vim /etc/kubernetes/proxy

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.212 --hostname-override=192.168.0.212 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

#Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
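
If you want to confirm kube-proxy is programming iptables (optional), the KUBE-SERVICES chains should appear in the nat table:

iptables -t nat -S | grep KUBE | head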

f. Set the default FORWARD policy to ACCEPT on all nodes

vim /usr/lib/systemd/system/forward.service

[Unit]
Description=iptables forward
Documentation=http://iptables.org/
After=network.target docker.service

[Service]
Type=forking
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStop=/usr/sbin/iptables -P FORWARD ACCEPT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

#Start the forward service

systemctl daemon-reload
systemctl enable forward
systemctl start forward
systemctl status forward
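
You can confirm the policy is in effect with:

# The first line should read: Chain FORWARD (policy ACCEPT)
iptables -nL FORWARD | head -n 1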

7. Verify the cluster works

a. Create a deployment
kubectl run nginx --replicas=2 --labels="run=nginx-service" --image=nginx --port=80

b. Expose the deployment so it is reachable from outside
kubectl expose deployment nginx --type=NodePort --name=nginx-service

c. Check the service status

kubectl describe svc nginx-service
Name:                   nginx-service
Namespace:              default
Labels:                 run=nginx-service
Annotations:            <none>
Selector:               run=nginx-service
Type:                   NodePort
IP:                     10.254.84.99
Port:                   <unset>  80/TCP
NodePort:               <unset>  30881/TCP
Endpoints:              172.30.1.2:80,172.30.54.2:80
Session Affinity:       None
Events:                 <none>

d. Check that the pods are running

kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2317272628-nsfrr   1/1       Running   0          1m
nginx-2317272628-qbbgg   1/1       Running   0          1m

e. From an external host, the nginx page can be reached through any node's IP on the NodePort (30881 in the example above).
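
For a quick command-line check from any machine that can reach the nodes (the NodePort 30881 comes from the describe output above; substitute your own):

curl -I http://192.168.0.210:30881
curl -I http://192.168.0.211:30881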

If the page cannot be reached, run iptables -nL to check whether the FORWARD chain policy is ACCEPT.

Troubleshooting:

1. kubelet: E0428 14:59:15.715224 1078 pod_workers.go:186] Error syncing pod aecdf24b-4ab0-11e8-90c5-000c2935cc91 ("kube-router-fzxz7_kube-system(aecdf24b-4ab0-11e8-90c5-000c2935cc91)"), skipping: pod cannot be run: pod with UID "aecdf24b-4ab0-11e8-90c5-000c2935cc91" specified privileged container, but is disallowed

Fix: add --allow-privileged=true to both the apiserver and kubelet configuration files.

2. Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Fix: add --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice to the kubelet configuration file.
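
For example (a sketch only; keep the rest of the flags as they already are in your /etc/kubernetes/kubelet), the KUBELET_ARGS line ends up looking like this, after which kubelet is restarted:

# /etc/kubernetes/kubelet (fragment) -- note the two extra cgroup flags at the end
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false --fail-swap-on=false --allow-privileged=true --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

systemctl restart kubelet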

3. connections, error: error deleting connection tracking state for UDP service IP: 10.254.0.2, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH

Fix: conntrack-tools is not installed; run yum -y install conntrack-tools.

Reposted from: https://blog.51cto.com/fengwan/2049124
