Learning About Servers Without Spending Money

Creating a MicroK8s Cluster

September 9, 2022


In "Studying Kubernetes with MicroK8s," we built a single-node (one-machine) Kubernetes environment, but the real strength of Kubernetes is a distributed setup with load balancing and redundancy, so here we will build a three-node cluster.

The official guide to building a MicroK8s cluster is here:
Create a MicroK8s cluster

All three environments were built on Ubuntu Server 22.04.1.
How to build the server itself is described in "Installing Ubuntu Linux Server," and initial setup such as the Japanese locale in "Initial Setup of Ubuntu Server."

The three virtual machines are as follows.
Each was given 8 GB of memory, but 4 GB should probably be enough.

Machine name       IP address      CPUs  Memory
microk8s-master    192.168.1.120   2     8GB
microk8s-worker1   192.168.1.121   2     8GB
microk8s-worker2   192.168.1.122   2     8GB

As preparation, add the following lines to the [/etc/hosts] file on the master node (microk8s-master) so that it can resolve the machine names of the worker nodes (microk8s-worker1 and microk8s-worker2).

192.168.1.121   microk8s-worker1
192.168.1.122   microk8s-worker2
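The two lines above can also be appended with a small shell loop that is safe to re-run. A minimal sketch, assuming it is run as root on the master node; `add_hosts_entries` is a hypothetical helper name, not part of MicroK8s:

```shell
# add_hosts_entries FILE — append each worker entry to FILE
# unless an identical line is already present (idempotent).
add_hosts_entries() {
  file="$1"
  for entry in "192.168.1.121   microk8s-worker1" \
               "192.168.1.122   microk8s-worker2"; do
    grep -qF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
  done
}

# On the master node (as root):
#   add_hosts_entries /etc/hosts
```

Running it a second time changes nothing, which is handy when you rebuild the cluster later.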


Now let's follow the procedure.

First, on the master node, generate the command that is to be run on a worker node.

Three variants of the join command are printed; the first, plain one is the one we use this time.
The token-like string changes every time add-node is run.

subro@microk8s-master:~$ sudo microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.120:25000/d2a953bc497c19dc1c1672046be60df8/fee53408b9e4

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.1.120:25000/d2a953bc497c19dc1c1672046be60df8/fee53408b9e4 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.1.120:25000/d2a953bc497c19dc1c1672046be60df8/fee53408b9e4
microk8s join 2408:82:a8:0:20c:29ff:fe34:edbb:25000/d2a953bc497c19dc1c1672046be60df8/fee53408b9e4

Now run it on a worker node.

Example on microk8s-worker1:

subro@microk8s-worker1:~$ sudo microk8s join 192.168.1.120:25000/d2a953bc497c19dc1c1672046be60df8/fee53408b9e4
Contacting cluster at 192.168.1.120
Waiting for this node to finish joining the cluster. .. .. ..

On microk8s-worker2, repeat the same steps, starting from running add-node on the master node.
(It appears this has to be done each time a node is added.)
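Since a fresh token is needed for every worker, this join step lends itself to a little scripting. A sketch under the assumption that passwordless SSH and sudo from the master to each worker are available; `extract_join_cmd` is a hypothetical helper that picks the plain join line out of the add-node output:

```shell
# extract_join_cmd — read `microk8s add-node` output on stdin and
# print the first plain `microk8s join ...` line it contains.
extract_join_cmd() {
  grep -m1 '^microk8s join'
}

# For each worker (assumes SSH access from the master):
#   for w in microk8s-worker1 microk8s-worker2; do
#     cmd=$(sudo microk8s add-node | extract_join_cmd)
#     ssh "subro@$w" "sudo $cmd"
#   done
```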

Once that is done, check from the master node which nodes have joined the cluster.

subro@microk8s-master:~$ sudo microk8s kubectl get no
NAME               STATUS   ROLES    AGE     VERSION
microk8s-worker1   Ready    <none>   9m7s    v1.24.3-2+63243a96d1c393
microk8s-worker2   Ready    <none>   82s     v1.24.3-2+63243a96d1c393
microk8s-master    Ready    <none>   4h51m   v1.24.3-2+63243a96d1c393

The cluster now has three machines that can serve as worker nodes.

* In a real production environment you would probably not let the master node double as a worker, but since my test PC is short on memory, here the master node also serves as a worker.

Looking at the cluster status, you can see that HA (high availability) is enabled.

subro@microk8s-master:~$ sudo microk8s status --wait-ready
microk8s is running
high-availability: yes
  datastore master nodes: 192.168.1.120:19001 192.168.1.121:19001 192.168.1.122:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # (core) Configure high availability on the current node
  disabled:
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    helm                 # (core) Helm 2 - the package manager for Kubernetes
    helm3                # (core) Helm 3 - Kubernetes package manager
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated


The cluster should now be giving us load balancing and redundancy as a Kubernetes cluster, so let's try running, say, 30 NGINX Pods.

My guess is that 10 Pods will end up running on each of the 3 machines.

The deployment manifest for the Pods looks like this.

subro@microk8s-master:~$ cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 30
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Deploy it.

subro@microk8s-master:~$ sudo microk8s kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created

Let's see what happened to the Pods.

subro@microk8s-master:~$ sudo microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS      AGE
kube-system   pod/calico-kube-controllers-75f94bc9d6-v85bq   1/1     Running   1 (63m ago)   5h7m
kube-system   pod/calico-node-tbfql                          1/1     Running   0             26m
kube-system   pod/calico-node-6gjm2                          1/1     Running   0             25m
kube-system   pod/calico-node-48s54                          1/1     Running   0             17m
default       pod/nginx-deployment-6c8b449b8f-r4b6d          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-zdfgr          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-7pd2q          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-pkqzq          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-v9ccx          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-4rhsb          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-kwwcm          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-xs7nm          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-grrhw          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-r2fdm          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-xssgv          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-78dx2          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-wclv2          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-cgrxs          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-58qx6          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-sfrkn          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-8g5lq          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-x7hgw          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-jhqx8          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-w6rk4          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-pkwln          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-sgsdf          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-zxcs8          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-5npq5          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-kmdnd          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-zpxf9          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-gg8z2          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-bn9r9          1/1     Running   0             108s
default       pod/nginx-deployment-6c8b449b8f-pd9t5          1/1     Running   0             107s
default       pod/nginx-deployment-6c8b449b8f-65gks          1/1     Running   0             107s

NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   5h8m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   5h8m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           5h8m
default       deployment.apps/nginx-deployment          30/30   30           30          108s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-75f94bc9d6   1         1         1       5h7m
default       replicaset.apps/nginx-deployment-6c8b449b8f          30        30        30      109s

Counting them, there were 30 [nginx] lines.
The planned number of Pods are up and running.

Now let's look at how they are spread across the worker nodes.

subro@microk8s-master:~$ sudo microk8s kubectl get pods -l app=nginx -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
nginx-deployment-6c8b449b8f-r4b6d   1/1     Running   0          5m36s   10.1.170.65    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-zdfgr   1/1     Running   0          5m36s   10.1.170.66    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-7pd2q   1/1     Running   0          5m36s   10.1.170.67    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-pkqzq   1/1     Running   0          5m36s   10.1.195.65    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-v9ccx   1/1     Running   0          5m36s   10.1.166.131   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-4rhsb   1/1     Running   0          5m36s   10.1.170.68    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-kwwcm   1/1     Running   0          5m36s   10.1.195.66    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-xs7nm   1/1     Running   0          5m36s   10.1.166.132   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-grrhw   1/1     Running   0          5m36s   10.1.170.69    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-r2fdm   1/1     Running   0          5m36s   10.1.170.70    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-xssgv   1/1     Running   0          5m36s   10.1.195.67    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-78dx2   1/1     Running   0          5m36s   10.1.166.133   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-wclv2   1/1     Running   0          5m35s   10.1.170.71    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-cgrxs   1/1     Running   0          5m36s   10.1.195.68    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-58qx6   1/1     Running   0          5m36s   10.1.166.134   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-sfrkn   1/1     Running   0          5m36s   10.1.166.135   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-8g5lq   1/1     Running   0          5m35s   10.1.170.72    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-x7hgw   1/1     Running   0          5m36s   10.1.195.69    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-jhqx8   1/1     Running   0          5m35s   10.1.170.73    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-w6rk4   1/1     Running   0          5m35s   10.1.195.70    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-pkwln   1/1     Running   0          5m35s   10.1.166.136   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-sgsdf   1/1     Running   0          5m35s   10.1.170.74    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-zxcs8   1/1     Running   0          5m35s   10.1.195.71    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-5npq5   1/1     Running   0          5m35s   10.1.166.137   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-kmdnd   1/1     Running   0          5m36s   10.1.195.72    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-zpxf9   1/1     Running   0          5m35s   10.1.166.138   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-gg8z2   1/1     Running   0          5m36s   10.1.195.73    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-bn9r9   1/1     Running   0          5m36s   10.1.166.139   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-pd9t5   1/1     Running   0          5m35s   10.1.195.74    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-65gks   1/1     Running   0          5m35s   10.1.166.140   microk8s-master    <none>           <none>

A perfect split of 10 Pods per node. \(^o^)/
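Instead of counting by eye, the NODE column can be tallied with awk. A minimal sketch; `count_per_node` is a hypothetical helper that reads `kubectl get pods -o wide` output on stdin:

```shell
# count_per_node — skip the header line, then tally the NODE column
# (7th field of `kubectl get pods -o wide`) and print "node count".
count_per_node() {
  awk 'NR > 1 { n[$7]++ } END { for (k in n) print k, n[k] }'
}

# Usage on the master node:
#   sudo microk8s kubectl get pods -l app=nginx -o wide | count_per_node
```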

While we're at it, let's test the redundancy side as well.
Shut down microk8s-worker2.

subro@microk8s-worker2:~$ sudo poweroff

Looking at the node status, microk8s-worker2 is now NotReady.

subro@microk8s-master:~$ sudo microk8s kubectl get no
NAME               STATUS     ROLES    AGE     VERSION
microk8s-worker2   NotReady   <none>   37m     v1.24.3-2+63243a96d1c393
microk8s-worker1   Ready      <none>   45m     v1.24.3-2+63243a96d1c393
microk8s-master    Ready      <none>   5h27m   v1.24.3-2+63243a96d1c393

Let's check the distribution across the worker nodes again.
The Pods that were on microk8s-worker2 have gone to Terminating.

subro@microk8s-master:~$ sudo microk8s kubectl get pods -l app=nginx -o wide
NAME                                READY   STATUS        RESTARTS   AGE    IP             NODE               NOMINATED NODE   READINESS GATES
nginx-deployment-6c8b449b8f-pkqzq   1/1     Running       0          16m    10.1.195.65    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-v9ccx   1/1     Running       0          16m    10.1.166.131   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-kwwcm   1/1     Running       0          16m    10.1.195.66    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-xs7nm   1/1     Running       0          16m    10.1.166.132   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-xssgv   1/1     Running       0          16m    10.1.195.67    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-78dx2   1/1     Running       0          16m    10.1.166.133   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-cgrxs   1/1     Running       0          16m    10.1.195.68    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-58qx6   1/1     Running       0          16m    10.1.166.134   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-sfrkn   1/1     Running       0          16m    10.1.166.135   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-x7hgw   1/1     Running       0          16m    10.1.195.69    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-w6rk4   1/1     Running       0          16m    10.1.195.70    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-pkwln   1/1     Running       0          16m    10.1.166.136   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-zxcs8   1/1     Running       0          16m    10.1.195.71    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-5npq5   1/1     Running       0          16m    10.1.166.137   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-kmdnd   1/1     Running       0          16m    10.1.195.72    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-zpxf9   1/1     Running       0          16m    10.1.166.138   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-gg8z2   1/1     Running       0          16m    10.1.195.73    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-bn9r9   1/1     Running       0          16m    10.1.166.139   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-pd9t5   1/1     Running       0          16m    10.1.195.74    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-65gks   1/1     Running       0          16m    10.1.166.140   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-4rhsb   1/1     Terminating   0          16m    10.1.170.68    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-8g5lq   1/1     Terminating   0          16m    10.1.170.72    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-zdfgr   1/1     Terminating   0          16m    10.1.170.66    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-jhqx8   1/1     Terminating   0          16m    10.1.170.73    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-grrhw   1/1     Terminating   0          16m    10.1.170.69    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-sgsdf   1/1     Terminating   0          16m    10.1.170.74    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-r2fdm   1/1     Terminating   0          16m    10.1.170.70    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-r4b6d   1/1     Terminating   0          16m    10.1.170.65    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-7pd2q   1/1     Terminating   0          16m    10.1.170.67    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-wclv2   1/1     Terminating   0          16m    10.1.170.71    microk8s-worker2   <none>           <none>
nginx-deployment-6c8b449b8f-jqqq2   1/1     Running       0          109s   10.1.195.76    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-k4gb5   1/1     Running       0          109s   10.1.166.141   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-v2dw6   1/1     Running       0          109s   10.1.166.142   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-m6rk9   1/1     Running       0          109s   10.1.166.143   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-fcczp   1/1     Running       0          109s   10.1.195.75    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-ssmrb   1/1     Running       0          109s   10.1.195.77    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-cwpf9   1/1     Running       0          109s   10.1.166.144   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-nvzpp   1/1     Running       0          109s   10.1.195.78    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-xmzlx   1/1     Running       0          109s   10.1.166.145   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-9phq6   1/1     Running       0          109s   10.1.195.79    microk8s-worker1   <none>           <none>

After about five minutes, the Pods were automatically rebalanced to 15 each on microk8s-master and microk8s-worker1.
HA is working as intended.
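The roughly five-minute delay is not MicroK8s-specific: by default, Kubernetes gives every Pod a toleration for the node.kubernetes.io/unreachable taint with tolerationSeconds=300, so Pods on a lost node are only evicted after 5 minutes. If you want faster failover for a Deployment, the toleration can be set explicitly in the Pod template. A sketch of the fragment (not tested here) that would go under spec.template.spec in nginx.yaml:

```yaml
      # Evict these Pods from an unreachable node after 30 s
      # instead of the default 300 s.
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
```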


Now start microk8s-worker2 again.

microk8s-worker2 has come back up.

subro@microk8s-master:~$ sudo microk8s kubectl get no
NAME               STATUS   ROLES    AGE     VERSION
microk8s-worker1   Ready    <none>   50m     v1.24.3-2+63243a96d1c393
microk8s-master    Ready    <none>   5h33m   v1.24.3-2+63243a96d1c393
microk8s-worker2   Ready    <none>   43m     v1.24.3-2+63243a96d1c393

Let's look at the distribution across the worker nodes once more.
The Pods should be spread back onto microk8s-worker2.

subro@microk8s-master:~$ sudo microk8s kubectl get pods -l app=nginx -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
nginx-deployment-6c8b449b8f-pkqzq   1/1     Running   0          37m   10.1.195.65    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-v9ccx   1/1     Running   0          37m   10.1.166.131   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-kwwcm   1/1     Running   0          37m   10.1.195.66    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-xs7nm   1/1     Running   0          37m   10.1.166.132   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-xssgv   1/1     Running   0          37m   10.1.195.67    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-78dx2   1/1     Running   0          37m   10.1.166.133   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-cgrxs   1/1     Running   0          37m   10.1.195.68    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-58qx6   1/1     Running   0          37m   10.1.166.134   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-sfrkn   1/1     Running   0          37m   10.1.166.135   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-x7hgw   1/1     Running   0          37m   10.1.195.69    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-w6rk4   1/1     Running   0          37m   10.1.195.70    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-pkwln   1/1     Running   0          37m   10.1.166.136   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-zxcs8   1/1     Running   0          37m   10.1.195.71    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-5npq5   1/1     Running   0          37m   10.1.166.137   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-kmdnd   1/1     Running   0          37m   10.1.195.72    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-zpxf9   1/1     Running   0          37m   10.1.166.138   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-gg8z2   1/1     Running   0          37m   10.1.195.73    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-bn9r9   1/1     Running   0          37m   10.1.166.139   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-pd9t5   1/1     Running   0          37m   10.1.195.74    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-65gks   1/1     Running   0          37m   10.1.166.140   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-jqqq2   1/1     Running   0          22m   10.1.195.76    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-k4gb5   1/1     Running   0          22m   10.1.166.141   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-v2dw6   1/1     Running   0          22m   10.1.166.142   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-m6rk9   1/1     Running   0          22m   10.1.166.143   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-fcczp   1/1     Running   0          22m   10.1.195.75    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-ssmrb   1/1     Running   0          22m   10.1.195.77    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-cwpf9   1/1     Running   0          22m   10.1.166.144   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-nvzpp   1/1     Running   0          22m   10.1.195.78    microk8s-worker1   <none>           <none>
nginx-deployment-6c8b449b8f-xmzlx   1/1     Running   0          22m   10.1.166.145   microk8s-master    <none>           <none>
nginx-deployment-6c8b449b8f-9phq6   1/1     Running   0          22m   10.1.195.79    microk8s-worker1   <none>           <none>

However, no matter how long I waited, nothing changed.

It turns out Kubernetes does not rebalance existing Pods on its own. (If you just want the spread back, one manual workaround is `sudo microk8s kubectl rollout restart deployment nginx-deployment`, which recreates the Pods and lets the scheduler place them across all Ready nodes.)

To have this happen automatically, you apparently need a component called the Descheduler.
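For reference, the Descheduler is driven by a policy file; its RemoveDuplicates strategy evicts Pods of the same ReplicaSet that have piled up on one node so the scheduler can spread them again. A rough sketch of such a policy (the apiVersion and format vary by Descheduler version, so treat this as an assumption to verify against its documentation):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
```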

But that is where this article ends.
I plan to write about the Descheduler separately.


And on top of that, we still cannot reach the NGINX Pods from a PC outside the cluster...
When will that finally be possible?

How to make this environment reachable from outside the cluster is covered in "Adding a LoadBalancer Service to an On-Prem Kubernetes Environment."