
Building a Rancher (Kubernetes) cluster with RKE2 (offline environment)

sherrylover 2023. 3. 17. 11:39

Attachments: sha256sum-amd64.txt (0.00 MB) · helm-v3.9.4-linux-amd64.tar.gz (13.38 MB) · install.sh (0.02 MB)

*Environment (VirtualBox)

OS: CentOS 7.9

Server node (master node) × 1 (19*.***.54.80)

Agent node (worker node) × 1 (19*.***.54.133)

*The company network is air-gapped, so the installation has to be done offline.

 

 


First, install the RKE2 packages

##### Install on the master node first ##############

1. Create a directory to hold the RKE2 installation files

 

 mkdir rke2-artifacts
 cd rke2-artifacts/

 

 

2. Download the RKE2 installation files (this step must be done in an online environment)

curl -OLs https://github.com/rancher/rke2/releases/download/v1.24.6%2Brke2r1/rke2-images.linux-amd64.tar.zst
curl -OLs https://github.com/rancher/rke2/releases/download/v1.24.6%2Brke2r1/rke2.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.24.6%2Brke2r1/sha256sum-amd64.txt
curl -sfL https://get.rke2.io --output install.sh
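Before carrying the files into the air-gapped network, it is worth verifying them against the downloaded sha256sum-amd64.txt (run inside rke2-artifacts/). Since the real artifacts are several hundred MB, here are the mechanics of that check demonstrated on a stand-in file:

```shell
# Demonstrate sha256sum-based verification on a stand-in artifact.
# In the real workflow the checksum file is the downloaded sha256sum-amd64.txt.
mkdir -p demo-artifacts && cd demo-artifacts
echo 'pretend this is rke2.linux-amd64.tar.gz' > rke2.linux-amd64.tar.gz
sha256sum rke2.linux-amd64.tar.gz > sha256sum-amd64.txt    # stand-in checksum file
# --ignore-missing skips checksum entries for files that were not downloaded
sha256sum --check --ignore-missing sha256sum-amd64.txt
```

On success each verified file prints a `: OK` line; any mismatch means the file should be re-downloaded before it goes into the closed network.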

 

# Once the download completes, you can confirm the installation files are in the rke2-artifacts directory

[root@localhost rke2-artifacts]# ls -alh
total 804M
drwxr-xr-x. 2 root root  121 Mar 16 16:58 .
dr-xr-x---. 8 root root 4.0K Mar 16 16:50 ..
-rw-r--r--. 1 root root  22K Mar 16 16:58 install.sh
-rw-r--r--. 1 root root 758M Mar 16 16:55 rke2-images.linux-amd64.tar.zst
-rw-r--r--. 1 root root  47M Mar 16 16:57 rke2.linux-amd64.tar.gz
-rw-r--r--. 1 root root 3.6K Mar 16 16:58 sha256sum-amd64.txt

 

 

# Start the installation with install.sh

[root@localhost rke2-artifacts]# INSTALL_RKE2_ARTIFACT_PATH=/root/rke2-artifacts sh install.sh
[INFO]  staging local checksums from /root/rke2-artifacts/sha256sum-amd64.txt
[INFO]  staging zst airgap image tarball from /root/rke2-artifacts/rke2-images.linux-amd64.tar.zst
[INFO]  staging tarball from /root/rke2-artifacts/rke2.linux-amd64.tar.gz
[INFO]  verifying airgap tarball
grep: /tmp/rke2-install.9Xcwc7TcAB/rke2-images.checksums: No such file or directory
[INFO]  installing airgap tarball to /var/lib/rancher/rke2/agent/images
[INFO]  verifying tarball
[INFO]  unpacking tarball file to /usr/local

 

# Set the PATH environment variable

[root@localhost rke2-artifacts]# export PATH=$PATH:/opt/rke2/bin

 

3. Manually create the RKE2 config.yaml file (optional)

-> This step is not required

[root@localhost ~]# mkdir -p /etc/rancher/rke2
[root@master ~]# vi /etc/rancher/rke2/config.yaml
[root@master ~]# cat /etc/rancher/rke2/config.yaml
node-name:
  - "master"
token: my-shared-secret

*The my-shared-secret token value is stored in the /var/lib/rancher/rke2/server/node-token file

Reference: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher
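For reference, the agent node's config.yaml would point at the server and reuse the same token. A sketch, assuming the server is reachable by the hostname master (rke2-server listens on port 9345 for agent registration):

```yaml
# /etc/rancher/rke2/config.yaml on the agent node (illustrative)
server: https://master:9345   # rke2-server's agent registration endpoint
token: my-shared-secret       # same value as the server's token / node-token
node-name:
  - "worker"
```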

 

 

# Set the Rancher PATH environment variable (optional)

[root@master ~]# export PATH=$PATH:/var/lib/rancher/rke2/bin/
[root@master ~]# echo 'export PATH=/usr/local/bin:/var/lib/rancher/rke2/bin:$PATH' >> ~/.bashrc

 

 

 

 

# The hostname ends up in these files; to use a hostname other than localhost, change the settings as below (can be skipped if already configured)

 

# On the rke2 server side

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
19*.***.54.80	master
[root@localhost ~]# reboot

# On the rke2 agent side

[root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
19*.***.54.133  worker
[root@localhost ~]# reboot
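Note that for the two nodes to resolve each other by name (for example when the agent joins via the server's hostname), both entries typically need to be present in /etc/hosts on both machines (IPs masked as above):

```
19*.***.54.80   master
19*.***.54.133  worker
```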

 

4. Bring up the rke2 service (enable it so it starts on boot; start it with systemctl start rke2-server if it is not already running)

[root@master ~]# systemctl enable rke2-server
Created symlink from /etc/systemd/system/multi-user.target.wants/rke2-server.service to /usr/local/lib/systemd/system/rke2-server.service.
[root@master ~]# systemctl status rke2-server
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
   Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: disabled)
   Active: activating (start) since Fri 2023-03-17 13:46:19 KST; 2min 19s ago
     Docs: https://github.com/rancher/rke2#readme
  Process: 3072 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 3068 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 3062 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
 Main PID: 3077 (rke2)
    Tasks: 20
   Memory: 2.5G
   CGroup: /system.slice/rke2-server.service
           ├─3077 /usr/local/bin/rke2 server
           └─3095 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/container...

 Mar 17 13:47:53 master rke2[3077]: time="2023-03-17T13:47:53+09:00" level=info msg="Waiting for API server to b...lable"
 Mar 17 13:47:53 master rke2[3077]: time="2023-03-17T13:47:53+09:00" level=info msg="Waiting for etcd server to ...lable"
 Mar 17 13:48:03 master rke2[3077]: {"level":"warn","ts":"2023-03-17T13:48:03.387+0900","logger":"etcd-client","caller...
 Mar 17 13:48:03 master rke2[3077]: time="2023-03-17T13:48:03+09:00" level=info msg="Failed to test data store c...eeded"
 Mar 17 13:48:03 master rke2[3077]: time="2023-03-17T13:48:03+09:00" level=info msg="etcd pod not found, retrying"
 Mar 17 13:48:23 master rke2[3077]: time="2023-03-17T13:48:23+09:00" level=info msg="Waiting for API server to b...lable"
 Mar 17 13:48:23 master rke2[3077]: time="2023-03-17T13:48:23+09:00" level=info msg="Waiting for etcd server to ...lable"
 Mar 17 13:48:23 master rke2[3077]: time="2023-03-17T13:48:23+09:00" level=info msg="etcd pod not found, retrying"
 Mar 17 13:48:38 master rke2[3077]: {"level":"warn","ts":"2023-03-17T13:48:38.389+0900","logger":"etcd-client","caller...
 Mar 17 13:48:38 master rke2[3077]: time="2023-03-17T13:48:38+09:00" level=info msg="Failed to test data store c...eeded"
Hint: Some lines were ellipsized, use -l to show in full.

 

5. Copy the rke2 config file and the kubectl binary

[root@master ~]# mkdir -p .kube
[root@master ~]# cp /etc/rancher/rke2/rke2.yaml .kube/config

[root@master ~]# cp /var/lib/rancher/rke2/bin/kubectl /usr/local/bin
[root@master ~]# ll -rtl /usr/local/bin
total 260672
-rwxr-xr-x. 1 root root      3410 Sep 27 11:29 rke2-uninstall.sh
-rwxr-xr-x. 1 root root      2750 Sep 27 11:29 rke2-killall.sh
-rwxr-xr-x. 1 root root 171469960 Sep 27 11:42 rke2
-rwxr-xr-x. 1 root root  46874624 Mar 16 14:12 helm
-rwxr-xr-x. 1 root root  48570656 Mar 17 14:00 kubectl
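As an alternative to copying rke2.yaml into ~/.kube/config, kubectl can be pointed at the RKE2-generated kubeconfig directly; a sketch:

```shell
# Point kubectl at the RKE2-generated kubeconfig without copying it
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
```

Adding the export to ~/.bashrc (as done for PATH above) makes it persist across shells.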

 

 

6. Use the kubectl command to check that the rke2 services came up as pods

[root@master ~]# kubectl get po -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-master                         1/1     Running     0          8m32s
kube-system   etcd-master                                             1/1     Running     0          8m57s
kube-system   helm-install-rke2-canal-7tvtt                           0/1     Completed   0          9m13s
kube-system   helm-install-rke2-coredns-qd8rs                         0/1     Completed   0          9m13s
kube-system   helm-install-rke2-ingress-nginx-2wsms                   0/1     Completed   0          9m13s
kube-system   helm-install-rke2-metrics-server-v4t2q                  0/1     Completed   0          9m13s
kube-system   kube-apiserver-master                                   1/1     Running     0          8m42s
kube-system   kube-controller-manager-master                          1/1     Running     0          8m32s
kube-system   kube-proxy-master                                       1/1     Running     0          8m21s
kube-system   kube-scheduler-master                                   1/1     Running     0          8m28s
kube-system   rke2-canal-rshx7                                        2/2     Running     0          8m46s
kube-system   rke2-coredns-rke2-coredns-76cb76d66-fq9qv               1/1     Running     0          8m47s
kube-system   rke2-coredns-rke2-coredns-autoscaler-58867f8fc5-fg6tn   1/1     Running     0          8m47s
kube-system   rke2-ingress-nginx-controller-b5mfn                     1/1     Running     0          8m5s
kube-system   rke2-metrics-server-6979d95f95-g5bjt                    1/1     Running     0
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                       AGE   VERSION
master   Ready    control-plane,etcd,master   12m   v1.24.6+rke2r1

 

Issue -> when the pods did not come up properly, it was resolved with swapoff

[root@worker ~]# kubectl get po -A
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

# Start of the fix
[root@worker ~]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy
[root@worker ~]# sudo -i
[root@worker ~]# swapoff -a
[root@worker ~]# exit
logout
[root@worker ~]# strace -eopenat kubectl version
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = -1 ENOENT (No such file or directory)
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/share/mime/globs2", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/mime/globs2", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
openat(AT_FDCWD, "/root/.kube/config", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
openat(AT_FDCWD, "/root/.kube/cache/http/.diskv-temp/955255442", O_RDWR|O_CREAT|O_EXCL|O_CLOEXEC, 0600) = 7
openat(AT_FDCWD, "/root/.kube/cache/http/.diskv-temp/3769208528", O_RDWR|O_CREAT|O_EXCL|O_CLOEXEC, 0600) = 7
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6+rke2r1", GitCommit:"b39bf148cd654599a52e867485c02c4f9d28b312", GitTreeState:"clean", BuildDate:"2022-09-21T17:05:47Z", GoVersion:"go1.18.6b7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6+rke2r1", GitCommit:"b39bf148cd654599a52e867485c02c4f9d28b312", GitTreeState:"clean", BuildDate:"2022-09-21T17:05:47Z", GoVersion:"go1.18.6b7", Compiler:"gc", Platform:"linux/amd64"}
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=1152, si_uid=0} ---
+++ exited with 0 +++

# The pods are now up and running
[root@worker ~]# kubectl get po -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-worker                         1/1     Running     0          3m9s
kube-system   etcd-worker                                             1/1     Running     0          2m56s
kube-system   helm-install-rke2-canal-x672s                           0/1     Completed   0          2m54s
kube-system   helm-install-rke2-coredns-ztc66                         0/1     Completed   0          2m54s
kube-system   helm-install-rke2-ingress-nginx-kbls6                   0/1     Completed   0          2m54s
kube-system   helm-install-rke2-metrics-server-dlpqz                  0/1     Completed   0          2m54s
kube-system   kube-apiserver-worker                                   1/1     Running     0          3m7s
kube-system   kube-controller-manager-worker                          1/1     Running     0          3m10s
kube-system   kube-proxy-worker                                       1/1     Running     0          3m6s
kube-system   kube-scheduler-worker                                   1/1     Running     0          3m10s
kube-system   rke2-canal-c7g75                                        2/2     Running     0          2m29s
kube-system   rke2-coredns-rke2-coredns-76cb76d66-tv4qr               1/1     Running     0          2m30s
kube-system   rke2-coredns-rke2-coredns-autoscaler-58867f8fc5-q6zdh   1/1     Running     0          2m30s
kube-system   rke2-ingress-nginx-controller-v9vhb                     1/1     Running     0          111s
kube-system   rke2-metrics-server-6979d95f95-5dvtt                    1/1     Running     0          119s
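One caveat: swapoff -a only lasts until the next reboot. To keep swap disabled permanently you would typically also comment out the swap entries in /etc/fstab; a sketch (the sed pattern is illustrative):

```shell
# Disable swap now (may require root; failure is ignored in unprivileged shells)
swapoff -a 2>/dev/null || true
# Comment out swap entries in /etc/fstab so swap stays off after reboot;
# a .bak backup of the original file is kept
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab 2>/dev/null || true
```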

Second, install Rancher (including the Helm installation)

 

1. Download the Helm installation file

 

wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz

[root@master ~]# wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
--2023-03-17 14:10:18--  https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.39.108, 2606:2800:247:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.39.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14026634 (13M) [application/x-tar]
Saving to: ‘helm-v3.9.4-linux-amd64.tar.gz’

100%[================================================================================>] 14,026,634  11.0MB/s   in 1.2s

2023-03-17 14:10:20 (11.0 MB/s) - ‘helm-v3.9.4-linux-amd64.tar.gz’ saved [14026634/14026634]

[root@master ~]# ls
anaconda-ks.cfg  get_helm.sh  helm-v3.9.4-linux-amd64.tar.gz  initial-setup-ks.cfg  rke2-artifacts
[root@master ~]# tar zxvf helm-v3.9.4-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
[root@master ~]# ls
anaconda-ks.cfg  get_helm.sh  helm-v3.9.4-linux-amd64.tar.gz  initial-setup-ks.cfg  linux-amd64  rke2-artifacts
[root@master ~]# cp linux-amd64/helm /usr/local/bin/
cp: overwrite `/usr/local/bin/helm'? yes
[root@master bin]# cd /usr/local/bin
[root@master bin]# ls -al
total 260184
drwxr-xr-x.  2 root root        93 Mar 17 14:00 .
drwxr-xr-x. 12 root root       131 Mar 16 17:20 ..
-rwxr-xr-x.  1 root root  46374912 Mar 17 14:11 helm
-rwxr-xr-x.  1 root root  48570656 Mar 17 14:00 kubectl
-rwxr-xr-x.  1 root root 171469960 Sep 27 11:42 rke2
-rwxr-xr-x.  1 root root      2750 Sep 27 11:29 rke2-killall.sh
-rwxr-xr-x.  1 root root      3410 Sep 27 11:29 rke2-uninstall.sh

 

 

2. Generate self-signed certificates using cert-manager (optional)

 

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml

[root@master bin]# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
[root@master bin]# helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
[root@master bin]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master bin]# helm install cert-manager jetstack/cert-manager \
> --namespace cert-manager \
> --create-namespace \
> --version v1.7.1
NAME: cert-manager
LAST DEPLOYED: Fri Mar 17 14:15:46 2023
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.7.1 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.io/docs/usage/ingress/

 

Check with the kubectl command that the cert-manager pods were created

[root@master ~]# kubectl -n cert-manager get po
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-646c67487-xx4t2               1/1     Running   0          5m58s
cert-manager-cainjector-7cb8669d6b-ccs5g   1/1     Running   0          5m58s
cert-manager-webhook-696c5db7ff-6m6dr      1/1     Running   0          5m58s
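Installing the chart alone does not issue any certificates; cert-manager issues them through Issuer/ClusterIssuer resources (the Rancher chart creates the issuer it needs by itself). If you want to issue self-signed certificates outside of Rancher, a minimal self-signed ClusterIssuer looks like this (illustrative):

```yaml
# selfsigned-issuer.yaml -- apply with: kubectl apply -f selfsigned-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
```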

 

3. Update the helm repo information, then install the rancher service

[root@master bin]# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
"rancher-stable" has been added to your repositories

 

 

# Create a namespace named cattle-system (where the Helm release will be installed), then run helm repo update

[root@master ~]# kubectl create namespace cattle-system
namespace/cattle-system created
[root@master ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

 

 

Issue -> if you give the namespace a name other than cattle-system,

Rancher may fail to come up together with the Helm release, which causes problems.

Even if you uninstall and reinstall the Helm release, it keeps holding on to resources, so unless Helm's leftovers are removed completely you can run into the issue below:

[root@master ~]# kubectl -n cattle-system get po
NAME                   READY   STATUS   RESTARTS   AGE
helm-operation-2rmng   1/2     Error    0          52m
helm-operation-6pnnc   1/2     Error    0          51m
helm-operation-9kpcd   1/2     Error    0          72m
helm-operation-dlvvf   1/2     Error    0          101m
helm-operation-ffp6c   1/2     Error    0          80m
helm-operation-kpldc   1/2     Error    0          93m
helm-operation-ld4lg   1/2     Error    0          81m
helm-operation-mvkvg   1/2     Error    0          63m
helm-operation-pdpvn   1/2     Error    0          92m
helm-operation-pxlrz   1/2     Error    0          45m
helm-operation-zgfv8   1/2     Error    0          69m
[root@master ~]# helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=master
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRoleBinding "rancher" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "cattle-system": current value is "rancher-service"

 

# Install Rancher into the cattle-system namespace on the master host using helm

Command: helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=master

[root@master ~]# helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=master
NAME: rancher
LAST DEPLOYED: Fri Mar 17 14:26:37 2023
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://master to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:

```
echo https://master/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
```

To get just the bootstrap password on its own, run:

```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```


Happy Containering!
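The go-template in the NOTES uses the base64decode function because Kubernetes stores Secret data base64-encoded. The same decode done by hand (the encoded string below is a made-up example password, not a real secret):

```shell
# Decode a base64-encoded Secret value the way kubectl's base64decode template
# function does; the input here is a hypothetical bootstrap password
echo 'bXktYm9vdHN0cmFwLXB3' | base64 --decode   # -> my-bootstrap-pw
```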

 

# After the rancher containers are created in the cattle-system namespace,

their status can be seen changing from ContainerCreating -> Running

[root@master ~]# kubectl -n cattle-system get po
NAME                      READY   STATUS    RESTARTS       AGE
rancher-5db6f86c9-2b92t   1/1     Running   1 (2m9s ago)   3m50s
rancher-5db6f86c9-hnts6   1/1     Running   1 (2m9s ago)   3m50s
rancher-5db6f86c9-lffdx   0/1     Running   1 (25s ago)    3m50s

 

 

4. Check that the ingress and its controller are up

[root@master ~]# kubectl get ingress -A
NAMESPACE         NAME      CLASS    HOSTS    ADDRESS         PORTS     AGE
rancher-service   rancher   <none>   master   19*.***.54.80   80, 443   9m59s
[root@master ~]# kubectl get ds -n kube-system
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
rke2-canal                      1         1         1       1            1           kubernetes.io/os=linux   46m
rke2-ingress-nginx-controller   1         1         1       1            1           kubernetes.io/os=linux   45m

 

 

*Other useful commands

Flush all firewall rules: iptables -F

Open individual ports (run firewall-cmd --reload afterwards to apply):

firewall-cmd --add-port=80/tcp --zone=public --permanent

firewall-cmd --add-port=443/tcp --zone=public --permanent

 

 

-> Installation result: accessing Rancher returns a 404 error

-> The fix was to downgrade RKE2 and reinstall with mutually compatible versions

 

-> Final pod status (some pods may still be coming up); Rancher is only reachable once the cattle-system pods are fully up

-> The VM was given only 4 GB / 2 cores, so the cause is probably either insufficient resources or a proxy server misconfiguration

[root@localhost ~]# kubectl get po -A
NAMESPACE       NAME                                                    READY   STATUS      RESTARTS       AGE
cattle-system   helm-operation-4nn49                                    2/2     Running     0              2m23s
cattle-system   helm-operation-6xnqn                                    1/2     Error       0              25m
cattle-system   helm-operation-77794                                    1/2     Error       0              23m
cattle-system   helm-operation-7m26r                                    1/2     Error       0              40m
cattle-system   helm-operation-7rmqh                                    1/2     Error       0              19m
cattle-system   helm-operation-84plz                                    1/2     Error       0              8m3s
cattle-system   helm-operation-8g5mg                                    1/2     Error       0              33m
cattle-system   helm-operation-8ndzm                                    1/2     Error       0              35m
cattle-system   helm-operation-bzx6h                                    1/2     Error       0              32m
cattle-system   helm-operation-cnh5z                                    1/2     Error       0              30m
cattle-system   helm-operation-d2bhx                                    1/2     Error       0              46m
cattle-system   helm-operation-fct2x                                    1/2     Error       0              10m
cattle-system   helm-operation-g65fr                                    1/2     Error       0              36m
cattle-system   helm-operation-h7g7r                                    1/2     Error       0              24m
cattle-system   helm-operation-hf67j                                    1/2     Error       0              45m
cattle-system   helm-operation-ht45p                                    1/2     Error       0              29m
cattle-system   helm-operation-hv9lz                                    1/2     Error       0              21m
cattle-system   helm-operation-j5sxl                                    1/2     Error       0              5m41s
cattle-system   helm-operation-jrbzj                                    1/2     Error       0              15m
cattle-system   helm-operation-jzv8s                                    1/2     Error       0              11m
cattle-system   helm-operation-l24lh                                    1/2     Error       0              18m
cattle-system   helm-operation-mtdq4                                    1/2     Error       0              9m12s
cattle-system   helm-operation-mxwlg                                    1/2     Error       0              43m
cattle-system   helm-operation-ng5nc                                    1/2     Error       0              13m
cattle-system   helm-operation-p7lkm                                    1/2     Error       0              22m
cattle-system   helm-operation-p9spz                                    1/2     Error       0              16m
cattle-system   helm-operation-phxg6                                    1/2     Error       0              28m
cattle-system   helm-operation-q6khb                                    2/2     Running     0              3m27s
cattle-system   helm-operation-qc48c                                    1/2     Error       0              14m
cattle-system   helm-operation-qtlhc                                    1/2     Error       0              26m
cattle-system   helm-operation-r4c5x                                    2/2     Running     0              77s
cattle-system   helm-operation-r7rk6                                    1/2     Error       0              34m
cattle-system   helm-operation-rtnws                                    1/2     Error       0              38m
cattle-system   helm-operation-t6p2b                                    1/2     Error       0              20m
cattle-system   helm-operation-v85d4                                    1/2     Error       0              27m
cattle-system   helm-operation-vlv8v                                    1/2     Error       0              7m
cattle-system   helm-operation-zb7hn                                    1/2     Error       0              17m
cattle-system   helm-operation-zqh5k                                    2/2     Running     0              4m32s
cattle-system   rancher-84696c75d9-9zp8j                                1/1     Running     12 (39m ago)   106m
cattle-system   rancher-84696c75d9-cftfn                                1/1     Running     11 (39m ago)   106m
cattle-system   rancher-84696c75d9-l5czq                                1/1     Running     11 (44m ago)   106m
cert-manager    cert-manager-84bf598b6b-hgbcg                           1/1     Running     8 (42m ago)    110m
cert-manager    cert-manager-cainjector-66f945d879-4tfwc                1/1     Running     10 (42m ago)   110m
cert-manager    cert-manager-webhook-7954c6c994-w655k                   1/1     Running     3 (55m ago)    110m
kube-system     cloud-controller-manager-localhost.localdomain          1/1     Running     19 (50s ago)   119m
kube-system     etcd-localhost.localdomain                              1/1     Running     1 (55m ago)    119m
kube-system     helm-install-rke2-canal--1-fsbqx                        0/1     Completed   0              120m
kube-system     helm-install-rke2-coredns--1-mhjwh                      0/1     Completed   0              120m
kube-system     helm-install-rke2-ingress-nginx--1-pjr9h                0/1     Completed   0              120m
kube-system     helm-install-rke2-metrics-server--1-h4ml6               0/1     Completed   0              120m
kube-system     kube-apiserver-localhost.localdomain                    1/1     Running     1 (55m ago)    120m
kube-system     kube-controller-manager-localhost.localdomain           1/1     Running     20 (48s ago)   120m
kube-system     kube-proxy-localhost.localdomain                        1/1     Running     1 (55m ago)    120m
kube-system     kube-scheduler-localhost.localdomain                    1/1     Running     19 (49s ago)   120m
kube-system     rke2-canal-xk5v6                                        2/2     Running     5 (55m ago)    120m
kube-system     rke2-coredns-rke2-coredns-687554ff58-b594b              1/1     Running     1 (55m ago)    120m
kube-system     rke2-coredns-rke2-coredns-autoscaler-7566b44b85-jvck9   1/1     Running     3 (44m ago)    120m
kube-system     rke2-ingress-nginx-controller-qwlm2                     1/1     Running     1 (55m ago)    119m
kube-system     rke2-metrics-server-8574659c85-69lz2                    1/1     Running     6 (49m ago)    119m