Building a Personal Private Cloud with the K8S Stack (Series: Setting Up the K8S Cluster)

    hansonwang99 · 2018-03-13 11:46:31 +08:00


    Work has been running me ragged lately, so I barely had time to post; this write-up took quite a few evenings to finish. More to come!


    [Table of Contents for the "Building a Personal Private Cloud with the K8S Stack" series]


    Environment Overview

    To play with a cluster you obviously need a few machines to act as nodes! Sadly I have no spare high-powered hardware, so I rummaged around the house and dug out a few battered old laptops. Better to take them out for a spin than leave them propping up a table...

    The overall layout of the environment is shown in the figure below:

    (Figure: overall cluster architecture layout)

    The parts are briefly described below:

    Master node (a Hedy laptop bought in 2008, CentOS 7.3 64-bit)


    • docker
    • etcd
    • flannel
    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager

    Slave node (a second-hand ThinkPad T420s, CentOS 7.3 64-bit)


    • docker
    • flannel
    • kubelet
    • kube-proxy

    Client node (a 2012 Sony Vaio SVS13, Windows 7 Ultimate)

    • Being the client, it doesn't need anything installed; an SSH client that can reach the master and slave nodes is enough

    Docker image registry

    • A company would usually run its own docker registry as the image repository; here I simply use Docker Hub instead of hosting one myself (mainly because I have no machine left for it!)

    Wireless Router (a Xiaomi Mi Router 3)

    • It had better punch through walls, because the router sits in the living room while the experiments happen in the bedroom!

    Everything is interconnected over Wi-Fi; I'm not a fan of cables snaking all over the place.


    Environment Preparation

    1. Set the hostname on the master node and on all slave nodes

    On the master, run:

    hostnamectl --static set-hostname  k8s-master
    

    On the slave, run:

    hostnamectl --static set-hostname  k8s-node-1
    
    2. Update the hosts file on the master and the slave

    Add the following entries to the /etc/hosts file on both the master and the slave:

    192.168.31.166   k8s-master
    192.168.31.166   etcd
    192.168.31.166   registry
    192.168.31.199   k8s-node-1
    
    3. Disable the firewall on the master and the slave (a combined sanity check follows after this step)
    systemctl disable firewalld.service
    systemctl stop firewalld.service
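
    Once these three steps are done, a quick sanity check helps confirm that the names resolve and the firewall is really off (a hedged sketch; the hostnames and IPs are the ones assumed above):

    hostname                         # should print k8s-master / k8s-node-1
    ping -c 1 k8s-node-1             # resolved via the /etc/hosts entries
    ping -c 1 etcd                   # alias pointing at the master's IP
    systemctl is-active firewalld    # should report "inactive"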
    

    Deploying the Master Node

    The master node needs the following components installed:

    • etcd
    • flannel
    • docker
    • kubernetes

    Each is covered in order below.

    1. Installing etcd

    • Install command: yum install etcd -y
    • Edit etcd's default configuration file /etc/etcd/etcd.conf
    # [member]
    ETCD_NAME=master
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_WAL_DIR=""
    #ETCD_SNAPSHOT_COUNT="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #
    #[cluster]
    #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
    # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    #ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
    #ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_SRV=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_STRICT_RECONFIG_CHECK="false"
    #ETCD_AUTO_COMPACTION_RETENTION="0"
    #ETCD_ENABLE_V2="true"
    #
    #[proxy]
    #ETCD_PROXY="off"
    #ETCD_PROXY_FAILURE_WAIT="5000"
    #ETCD_PROXY_REFRESH_INTERVAL="30000"
    #ETCD_PROXY_DIAL_TIMEOUT="1000"
    #ETCD_PROXY_WRITE_TIMEOUT="5000"
    #ETCD_PROXY_READ_TIMEOUT="0"
    #
    #[security]
    #ETCD_CERT_FILE=""
    #ETCD_KEY_FILE=""
    #ETCD_CLIENT_CERT_AUTH="false"
    #ETCD_TRUSTED_CA_FILE=""
    #ETCD_AUTO_TLS="false"
    #ETCD_PEER_CERT_FILE=""
    #ETCD_PEER_KEY_FILE=""
    #ETCD_PEER_CLIENT_CERT_AUTH="false"
    #ETCD_PEER_TRUSTED_CA_FILE=""
    #ETCD_PEER_AUTO_TLS="false"
    #
    #[logging]
    #ETCD_DEBUG="false"
    # examples for -log-package-levels etcdserver=WARNING,security=DEBUG
    #ETCD_LOG_PACKAGE_LEVELS=""
    #
    #[profiling]
    #ETCD_ENABLE_PPROF="false"
    #ETCD_METRICS="basic"
    #
    #[auth]
    #ETCD_AUTH_TOKEN="simple"
    
    • Start etcd and verify it (an extra check follows at the end of this step)

    First, start the etcd service:

    systemctl start etcd    # start the etcd service
    

    Then check the health of etcd:

    etcdctl -C http://etcd:2379 cluster-health
    etcdctl -C http://etcd:4001 cluster-health
    

    (Screenshot: etcd cluster health output)
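
    If the health check looks good, it is also worth confirming that etcd answers on both client ports and that it comes back after a reboot (a hedged sketch; the etcd hostname is the /etc/hosts alias added earlier):

    curl http://etcd:2379/version    # etcd's HTTP API reports the server version
    curl http://etcd:4001/version
    systemctl enable etcd            # start etcd automatically at boot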

    2. Installing flannel

    • Install command: yum install flannel
    • Configure flannel in /etc/sysconfig/flanneld
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    
    • Configure the flannel key in etcd
    etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
    


    • Start flannel and enable it at boot (a quick check follows below)
    systemctl start flanneld.service
    systemctl enable flanneld.service
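
    Once flanneld is up it should have leased a subnet out of the 10.0.0.0/16 range written into etcd above. A hedged way to check (the /run/flannel/subnet.env path is where the CentOS flannel package writes its environment file; the interface may be named flannel0 or flannel.1 depending on the backend):

    etcdctl get /atomic.io/network/config    # the network definition stored earlier
    cat /run/flannel/subnet.env              # FLANNEL_SUBNET leased to this host
    ip addr show flannel0                    # flannel's tunnel interface (name may vary)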
    

    3. Installing docker

    There are plenty of tutorials for this online; the main steps (with a systemd equivalent noted below) are:

    • Install command: yum install docker -y
    • Start the docker service: service docker start
    • Enable docker at boot: chkconfig docker on
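
    On CentOS 7 the service and chkconfig commands are redirected to systemd anyway, so the equivalent commands plus a quick smoke test would be (hedged; pulling hello-world assumes the host can reach Docker Hub):

    systemctl start docker
    systemctl enable docker
    docker version                # client and daemon should both respond
    docker run --rm hello-world   # optional: pulls a tiny test image from Docker Hub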

    4. Installing kubernetes

    Installing k8s itself is simple; just run:

    yum install kubernetes
    

    But k8s takes a fair amount of configuration. As mentioned in the "Environment Overview" section, the master has to run the following components:

    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager

    The details follow:

    • Configure the /etc/kubernetes/apiserver file
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--port=8080"
    
    # Port minions listen on
    KUBELET_PORT="--kubelet-port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # default admission control policies
    # KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS=""
    
    • Configure the /etc/kubernetes/config file
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://k8s-master:8080"
    
    • Start each k8s component
    systemctl start kube-apiserver.service
    systemctl start kube-controller-manager.service
    systemctl start kube-scheduler.service
    
    • Enable the k8s components at boot (a quick health check follows below)
    systemctl enable kube-apiserver.service
    systemctl enable kube-controller-manager.service
    systemctl enable kube-scheduler.service
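
    With the three control-plane services running, a quick health check from the master itself can confirm they found etcd and each other (a hedged sketch using standard kubectl/HTTP checks, not anything specific to this setup):

    kubectl get componentstatuses         # scheduler, controller-manager and etcd should be Healthy
    curl http://k8s-master:8080/version   # the apiserver's insecure port configured above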
    

    Deploying the Slave Node

    The slave node needs the following components installed:

    • flannel
    • docker
    • kubernetes

    Each is covered in order below:

    1. Installing flannel

    • Install command: yum install flannel
    • Configure flannel in /etc/sysconfig/flanneld
    # Flanneld configuration options  
    
    # etcd url location.  Point this to the server where etcd runs
    FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/atomic.io/network"
    
    # Any additional options that you want to pass
    #FLANNEL_OPTIONS=""
    
    • Start flannel and enable it at boot (see the note below)
    systemctl start flanneld.service
    systemctl enable flanneld.service
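
    The same check as on the master applies here, and the slave should lease its own, different subnet from the shared 10.0.0.0/16 range. Depending on the flannel package, docker may also need a restart after flanneld starts so its bridge picks up the flannel subnet (hedged; this relies on the systemd drop-in shipped with the CentOS flannel package):

    cat /run/flannel/subnet.env   # FLANNEL_SUBNET here should differ from the master's
    systemctl restart docker      # let docker pick up the flannel-provided settings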
    

    2. Installing docker

    See the docker deployment steps for the master node above; they are not repeated here.

    3. Installing kubernetes

    Install command: yum install kubernetes

    Unlike the master node, the slave node needs to run the following kubernetes components:

    • kubelet
    • kube-proxy

    Here is what needs to be configured:

    • Configure /etc/kubernetes/config
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://k8s-master:8080"
    
    • Configure /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (minion) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
    
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
    
    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    
    # Add your own!
    KUBELET_ARGS=""
    
    • Start the kube services
    systemctl start kubelet.service
    systemctl start kube-proxy.service
    
    • Enable the k8s components at boot (see the note below if the node does not register)
    systemctl enable kubelet.service
    systemctl enable kube-proxy.service
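
    If the node does not show up on the master shortly afterwards, the kubelet log on the slave is usually the place to look (a hedged pointer using standard systemd tooling):

    systemctl status kubelet kube-proxy   # both should be active (running)
    journalctl -u kubelet -e              # recent kubelet log, e.g. apiserver connection errors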
    

    That completes the k8s cluster setup. Now let's verify that the cluster actually came up.

    Verifying the Cluster State

    • Check endpoint information: kubectl get endpoints


    • Check cluster information: kubectl cluster-info


    • Get the status of the nodes in the cluster: kubectl get nodes


    OK, the node is Ready, and the cluster can now be used for experiments!
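
    As a first experiment, a throwaway deployment makes an easy end-to-end check (a hedged sketch for the kubectl of that era; nginx is pulled from Docker Hub and the name test-nginx is arbitrary):

    kubectl run test-nginx --image=nginx --replicas=2   # creates a deployment with two pods
    kubectl get pods -o wide                            # pods should land on k8s-node-1
    kubectl delete deployment test-nginx                # clean up afterwards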




    Postscript

    More of the author's original articles can be found here.


    pmispig · 2018-03-13 15:51:13 +08:00
    If you're just playing around on your own, a single kubeadm command does the job...