
OpenShift 4: Libvirt Platform Agnostic installation


https://www.linkedin.com/in/muhammad-aizuddin-zali-4807b552/

Hello everyone! Just a couple of months ago, Red Hat released the shiny new OpenShift 4 [1], based on CoreOS technology. In this guide we are going to see how to install OCP4 UPI on libvirt as a bare-metal-type deployment.

A short definition of each:
UPI: User Provisioned Infrastructure (the user manually prepares the machines for bootstrap)
IPI: Installer Provisioned Infrastructure (the OCP installer automatically terraforms the machines for bootstrap)

1. Pre-Requisites

  • A valid Red Hat OpenShift subscription
  • DNS Server (Helper Node)
  • HAProxy Server (Helper Node)
  • DHCP Server (Helper Node)
    • For DHCP Clustering refer here.
  • PXE/TFTP Server (Helper Node)
  • Libvirt Host
    • 1 x Helper Node
      -> To host the above services
      -> 4vCPU, 4GB RAM
      -> 50 GB Disk
      -> RHEL 8.0
      -> helper.ocp4.techbeatly.com
    • 1 x Bootstrap Node
      -> 4 vCPU, 6GB RAM
      -> 50 GB Disk
      -> Empty Node
      -> PXE Booted
      -> bootstrap.ocp4.techbeatly.com
    • 3 x Master Node
      -> 4 vCPU, 8GB RAM
      -> 50 GB Disk
      -> Empty Node
      -> PXE Booted
      -> master[1-3].ocp4.techbeatly.com
    • 3 x Worker Node
      -> 4 vCPU, 8GB RAM
      -> 50 GB Disk
      -> Empty Node
      -> PXE Booted
      -> worker[1-3].ocp4.techbeatly.com
  • SDN Subnets
    • Cluster Network: 10.128.0.0/14 (host prefix /23)
    • Service Network: 172.30.0.0/16
  • FQDN
    • K8S API: api.ocp4.techbeatly.com
    • Machine API: api-int.ocp4.techbeatly.com
    • Wildcard subdomain: *.apps.ocp4.techbeatly.com

NOTE: The sizing above is purely for testing purposes; for an actual production load, please engage Red Hat Consulting for a proper design.

2. Configuration

2.1 Install and configure packages:

[root@helper ~]# dnf -y install bind bind-utils dhcp-server tftp-server syslinux httpd haproxy
[root@helper ~]# firewall-cmd --add-service={dhcp,tftp,http,https,dns} --permanent
[root@helper ~]# firewall-cmd --add-port={6443/tcp,22623/tcp,8080/tcp} --permanent
[root@helper ~]# firewall-cmd --reload

2.2 Configure DNS

1. Comment out the lines below in /etc/named.conf:

#listen-on port 53 { 127.0.0.1; };
#listen-on-v6 port 53 { ::1; };

2. Allow DNS queries from the lab subnet:

allow-query     { localhost; 192.168.50.0/24; };

3. Now we need to create the DNS hosted zones; append to /etc/named.conf:

zone "techbeatly.com" IN {
    type master;
    file "techbeatly.com.db"; 
    allow-update { none; };
};

zone "50.168.192.in-addr.arpa" IN {
    type master;
    file "50.168.192.in-addr.arpa";
};

4. Create zone database files /var/named/techbeatly.com.db and /var/named/50.168.192.in-addr.arpa with the content below:

[root@helper ~]# cat /var/named/techbeatly.com.db
$TTL     1D
@        IN  SOA dns.ocp4.techbeatly.com. root.techbeatly.com. (
                       2019022400 ; serial
                       3h         ; refresh
                       15         ; retry
                       1w         ; expire
                       3h         ; minimum
                       )
         IN  NS  dns.ocp4.techbeatly.com.
dns.ocp4                      IN  A   192.168.50.30
bootstrap.ocp4                IN  A   192.168.50.60
master01.ocp4                 IN  A   192.168.50.61
master02.ocp4                 IN  A   192.168.50.62
master03.ocp4                 IN  A   192.168.50.63
etcd-0.ocp4                   IN  A   192.168.50.61
etcd-1.ocp4                   IN  A   192.168.50.62
etcd-2.ocp4                   IN  A   192.168.50.63
api.ocp4                      IN  A   192.168.50.30
api-int.ocp4                  IN  A   192.168.50.30
*.apps.ocp4                   IN  A   192.168.50.30
worker01.ocp4                 IN  A   192.168.50.64
worker02.ocp4                 IN  A   192.168.50.65
worker03.ocp4                 IN  A   192.168.50.66
_etcd-server-ssl._tcp.ocp4    IN  SRV 0 10 2380 etcd-0.ocp4
_etcd-server-ssl._tcp.ocp4    IN  SRV 0 10 2380 etcd-1.ocp4
_etcd-server-ssl._tcp.ocp4    IN  SRV 0 10 2380 etcd-2.ocp4


[root@helper ~]# cat /var/named/50.168.192.in-addr.arpa
$TTL     1D
@        IN  SOA dns.ocp4.techbeatly.com. root.techbeatly.com. (
                       2019022400 ; serial
                       3h         ; refresh
                       15         ; retry
                       1w         ; expire
                       3h         ; minimum
                       )
         IN  NS  dns.ocp4.techbeatly.com.
60 IN PTR bootstrap.ocp4.techbeatly.com.
61 IN PTR master01.ocp4.techbeatly.com.
62 IN PTR master02.ocp4.techbeatly.com.
63 IN PTR master03.ocp4.techbeatly.com.
64 IN PTR worker01.ocp4.techbeatly.com.
65 IN PTR worker02.ocp4.techbeatly.com.
66 IN PTR worker03.ocp4.techbeatly.com.

TIP: Do not set PTR records for the etcd names. The PTR record will be used to set the node hostname, so each IP should resolve back only to its node name.

5. Restart named and test the DNS:

[root@helper ~]# systemctl restart named

[root@helper ~]# dig @localhost -t srv _etcd-server-ssl._tcp.ocp4.techbeatly.com
 ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-17.P2.el8_0 <<>> @localhost -t srv _etcd-server-ssl._tcp.ocp4.techbeatly.com
 ; (1 server found)
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26440
 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 5
 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: ae2bc6642a496d42698637d95d1996a47c622a7ab26d73ef (good)
 ;; QUESTION SECTION:
 ;_etcd-server-ssl._tcp.ocp4.techbeatly.com. IN SRV
 ;; ANSWER SECTION:
 _etcd-server-ssl._tcp.ocp4.techbeatly.com. 86400 IN SRV    0 10 2380 etcd-0.ocp4.techbeatly.com.
 _etcd-server-ssl._tcp.ocp4.techbeatly.com. 86400 IN SRV    0 10 2380 etcd-1.ocp4.techbeatly.com.
 _etcd-server-ssl._tcp.ocp4.techbeatly.com. 86400 IN SRV    0 10 2380 etcd-2.ocp4.techbeatly.com.
 ;; AUTHORITY SECTION:
 techbeatly.com.        86400   IN  NS  dns.ocp4.techbeatly.com.
 ;; ADDITIONAL SECTION:
 etcd-0.ocp4.techbeatly.com. 86400 IN    A   192.168.50.61
 etcd-1.ocp4.techbeatly.com. 86400 IN    A   192.168.50.62
 etcd-2.ocp4.techbeatly.com. 86400 IN    A   192.168.50.63
 dns.ocp4.techbeatly.com. 86400    IN  A   192.168.50.30
 ;; Query time: 0 msec
 ;; SERVER: 127.0.0.1#53(127.0.0.1)
 ;; WHEN: Mon Jul 01 13:14:12 +08 2019
 ;; MSG SIZE  rcvd: 318

2.3 Configuring DHCP and PXE

  1. Set up /etc/dhcp/dhcpd.conf with the content below (update this as per your environment!):

ddns-update-style interim;
ignore client-updates;
authoritative;
allow booting;
allow bootp;
deny unknown-clients;
default-lease-time -1;
max-lease-time -1;
subnet 192.168.50.0 netmask 255.255.255.0 {
         option routers 192.168.50.1;
         option domain-name-servers 192.168.50.30;
         option ntp-servers time.unisza.edu.my;
         option domain-search "techbeatly.com","ocp4.techbeatly.com";
         filename "pxelinux.0";
         next-server 192.168.50.30;
         host bootstrap { hardware ethernet 52:54:00:7d:2d:b1; fixed-address 192.168.50.60; }
         host master01 { hardware ethernet 52:54:00:7d:2d:b2; fixed-address 192.168.50.61; }
         host master02 { hardware ethernet 52:54:00:7d:2d:b3; fixed-address 192.168.50.62; }
         host master03 { hardware ethernet 52:54:00:7d:2d:b4; fixed-address 192.168.50.63; }
         host worker01 { hardware ethernet 52:54:00:7d:2d:b5; fixed-address 192.168.50.64; }
         host worker02 { hardware ethernet 52:54:00:7d:2d:b6; fixed-address 192.168.50.65; }
         host worker03 { hardware ethernet 52:54:00:7d:2d:c1; fixed-address 192.168.50.66; }
}
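The static-lease stanzas above all follow one pattern, so a tiny helper can generate entries for additional nodes. A sketch (the worker04 name, MAC, and IP below are hypothetical placeholders, not part of this lab):

```shell
# dhcp_host NAME MAC IP: emit a dhcpd.conf static-lease stanza in the same
# format as the host entries above.
dhcp_host() {
  printf 'host %s { hardware ethernet %s; fixed-address %s; }\n' "$1" "$2" "$3"
}

# Example: a hypothetical fourth worker.
dhcp_host worker04 52:54:00:7d:2d:c2 192.168.50.67
# -> host worker04 { hardware ethernet 52:54:00:7d:2d:c2; fixed-address 192.168.50.67; }
```

Paste the emitted stanza inside the subnet block, then restart dhcpd.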

2. Populate /var/lib/tftpboot/pxelinux.cfg/default with the content below (install the syslinux package if not yet done):

[root@helper 4.1.0]# cat /var/lib/tftpboot/pxelinux.cfg/default
default menu.c32
prompt 0
timeout 30
menu title **** OpenShift 4 PXE Boot Menu ****
 
label Install CoreOS 4.1.0 Bootstrap Node
 kernel /openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-kernel
 append ip=dhcp rd.neednet=1 coreos.inst.install_dev=vda console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.image_url=http://192.168.50.30:8080/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.50.30:8080/openshift4/4.1.0/ignitions/bootstrap.ign initrd=/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-initramfs.img
label Install CoreOS 4.1.0 Master Node
 kernel /openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-kernel
 append ip=dhcp rd.neednet=1 coreos.inst.install_dev=vda console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.image_url=http://192.168.50.30:8080/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.50.30:8080/openshift4/4.1.0/ignitions/master.ign initrd=/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-initramfs.img                                                                                                                                            
label Install CoreOS 4.1.0 Worker Node
 kernel /openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-kernel
 append ip=dhcp rd.neednet=1 coreos.inst.install_dev=vda console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.image_url=http://192.168.50.30:8080/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.50.30:8080/openshift4/4.1.0/ignitions/worker.ign initrd=/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-initramfs.img

3. Copy the syslinux boot files for PXE boot:

 #> cp -rvf /usr/share/syslinux/* /var/lib/tftpboot

4. Start the TFTP service:

#> systemctl start tftp

2.3.1 Optional: MAC-address-based boot files

While the procedure above is sufficient, you can extend it further so that:

  • PXE offers a boot file based on the MAC address of the client, automating the PXE boot menu selection with zero touch.

  1. Create a MAC-address-based file, e.g. /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-7d-2d-b1 (NOTE: the 01- prefix in front is required), with the node-specific content below. Do this for each node, using its MAC address and boot entry.
default menu.c32
prompt 0
timeout 2
menu title **** OpenShift 4 Bootstrap PXE Boot Menu ****
label Install CoreOS 4.1.0 Bootstrap Node
 kernel /openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-kernel
 append ip=dhcp rd.neednet=1 coreos.inst.install_dev=vda console=tty0 console=ttyS0 coreos.inst=yes coreos.inst.image_url=http://192.168.50.30:8080/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.50.30:8080/openshift4/4.1.0/ignitions/bootstrap.ign initrd=/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-initramfs.img

2. This tells the PXE server to serve this file's content when it sees the MAC address corresponding to the pxelinux.cfg filename, so we don't need to navigate the PXE menu: zero touch.
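The per-MAC filename is just the client MAC address lower-cased, with colons replaced by dashes and an 01- (Ethernet ARP hardware type) prefix. A small sketch of the conversion:

```shell
# mac_to_pxecfg MAC: print the pxelinux.cfg filename PXELINUX looks up for
# a client NIC (01- hardware-type prefix, lowercase, colons become dashes).
mac_to_pxecfg() {
  echo "01-$(echo "$1" | tr 'A-Z' 'a-z' | tr ':' '-')"
}

mac_to_pxecfg 52:54:00:7D:2D:B1   # -> 01-52-54-00-7d-2d-b1
```

Run it once per node MAC to get the filename to create under /var/lib/tftpboot/pxelinux.cfg/.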

2.4 Configure the web server to host the RHCOS image

  1. Switch httpd from listening on port 80 to port 8080 in /etc/httpd/conf/httpd.conf, then restart:

Listen 8080
[root@helper 4.1.0]# systemctl restart httpd

2. Create a new directory to host the kernel and initramfs for PXE boot:

 [root@helper 4.1.0]# mkdir -p /var/lib/tftpboot/openshift4/4.1.0
 [root@helper 4.1.0]# cd /var/lib/tftpboot/openshift4/4.1.0
 [root@helper 4.1.0]# wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-installer-kernel
 [root@helper 4.1.0]# wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-installer-initramfs.img
 [root@helper 4.1.0]# restorecon -RFv .
 Relabeled /var/lib/tftpboot/openshift4/4.1.0 from unconfined_u:object_r:tftpdir_rw_t:s0 to system_u:object_r:tftpdir_rw_t:s0
 Relabeled /var/lib/tftpboot/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-kernel from unconfined_u:object_r:tftpdir_rw_t:s0 to system_u:object_r:tftpdir_rw_t:s0
 Relabeled /var/lib/tftpboot/openshift4/4.1.0/rhcos-4.1.0-x86_64-installer-initramfs.img from unconfined_u:object_r:tftpdir_rw_t:s0 to system_u:object_r:tftpdir_rw_t:s0

3. Now we are going to host the Red Hat CoreOS image:

[root@helper html]# mkdir -p /var/www/html/openshift4/4.1.0/images/
 [root@helper html]# cd  /var/www/html/openshift4/4.1.0/images/
 [root@helper images]# wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.1/latest/rhcos-4.1.0-x86_64-metal-bios.raw.gz
 [root@helper images]# restorecon -RFv .
 Relabeled /var/www/html/openshift4/4.1.0/images from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
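Before PXE-booting anything, it is worth confirming the web server actually serves the image. A minimal reachability check (a sketch using only the python3 standard library; the URL matches the paths used in this guide):

```shell
# http_ok URL: succeed only if the URL can be opened and at least one
# byte read from it.
http_ok() {
  python3 - "$1" <<'EOF'
import sys, urllib.request
try:
    urllib.request.urlopen(sys.argv[1], timeout=5).read(1)
except Exception:
    sys.exit(1)
EOF
}

http_ok http://192.168.50.30:8080/openshift4/4.1.0/images/rhcos-4.1.0-x86_64-metal-bios.raw.gz \
  && echo "image reachable"
```

If this fails, re-check the Listen 8080 change, the firewall rules from step 2.1, and the SELinux labels.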

2.5 Configure HAProxy as LB

1. A load balancer is required for balancing the masters and the ingress routers; append to /etc/haproxy/haproxy.cfg:

#---------------------------------------------------------------------
# round robin balancing for OCP Kubernetes API Server
#---------------------------------------------------------------------
frontend k8s_api
  bind *:6443
  mode tcp
  default_backend k8s_api_backend
backend k8s_api_backend
  balance roundrobin
  mode tcp
  option httpchk GET /healthz
  http-check expect string "ok"
  server bootstrap 192.168.50.60:6443 check check-ssl verify none
  server master01 192.168.50.61:6443 check check-ssl verify none
  server master02 192.168.50.62:6443 check check-ssl verify none
  server master03 192.168.50.63:6443 check check-ssl verify none

#---------------------------------------------------------------------
# round robin balancing for OCP Machine Config Server
#---------------------------------------------------------------------
frontend machine_config
  bind *:22623
  mode tcp
  default_backend machine_config_backend
backend machine_config_backend
  balance roundrobin
  mode tcp
  server bootstrap 192.168.50.60:22623 check
  server master01 192.168.50.61:22623 check
  server master02 192.168.50.62:22623 check
  server master03 192.168.50.63:22623 check

#---------------------------------------------------------------------
# round robin balancing for OCP Ingress Insecure Port
#---------------------------------------------------------------------
frontend ingress_insecure
  bind *:80
  mode tcp
  default_backend ingress_insecure_backend
backend ingress_insecure_backend
  balance roundrobin
  mode tcp
  server worker01 192.168.50.64:80 check
  server worker02 192.168.50.65:80 check
  server worker03 192.168.50.66:80 check

#---------------------------------------------------------------------
# round robin balancing for OCP Ingress Secure Port
#---------------------------------------------------------------------
frontend ingress_secure
  bind *:443
  mode tcp
  default_backend ingress_secure_backend
backend ingress_secure_backend
  balance roundrobin
  mode tcp
  server worker01 192.168.50.64:443 check
  server worker02 192.168.50.65:443 check
  server worker03 192.168.50.66:443 check

#---------------------------------------------------------------------
# Exposing HAProxy Statistic Page
#---------------------------------------------------------------------
listen stats
    bind :32700
    stats enable
    stats uri /
    stats hide-version
    stats auth admin:password

2. Allow HAProxy to bind to the custom ports with SELinux:

[root@helper ~]# semanage port -a -t http_port_t -p tcp 22623
[root@helper ~]# semanage port -a -t http_port_t -p tcp 6443
[root@helper ~]# semanage port -a -t http_port_t -p tcp 32700
[root@helper ~]# semanage port -l | grep -w http_port_t
 http_port_t                    tcp      22623, 32700, 6443, 80, 81, 443, 488, 8008, 8009, 8443, 9000
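Before restarting HAProxy, it helps to confirm that every port the cluster needs has a bind line in the config. A grep-based sanity check (a sketch; `haproxy -c -f /etc/haproxy/haproxy.cfg` remains the authoritative syntax check):

```shell
# check_haproxy_binds CFG PORT...: print any ports that have no "bind"
# line in the given config file (empty output means all ports are bound).
check_haproxy_binds() {
  cfg="$1"; shift
  missing=""
  for p in "$@"; do
    grep -Eq "bind[[:space:]]+[^[:space:]]*:${p}([^0-9]|$)" "$cfg" 2>/dev/null \
      || missing="$missing $p"
  done
  echo "$missing" | sed 's/^ //'
}

# Ports used in this guide: API, machine-config, ingress, and stats.
check_haproxy_binds /etc/haproxy/haproxy.cfg 6443 22623 80 443 32700
```

Empty output means every frontend/listen section is in place; any port printed is still missing a bind.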

2.6 Configure OpenShift Installer and CLI binary

  1. To start creating ignition files and managing the cluster, download the installer and the client:
[root@helper ~]# wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.1.3.tar.gz
[root@helper ~]# wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux-4.1.3.tar.gz
[root@helper ~]# tar -xvf openshift-client-linux-4.1.3.tar.gz 
[root@helper ~]# tar -xvf openshift-install-linux-4.1.3.tar.gz 
[root@helper ~]# cp -v {oc,kubectl,openshift-install} /usr/bin/
 'oc' -> '/usr/bin/oc'
 'kubectl' -> '/usr/bin/kubectl'
 'openshift-install' -> '/usr/bin/openshift-install'

2. If not done yet, create an SSH key pair to access the CoreOS nodes later on:

[root@helper ocp4]# ssh-keygen -t rsa

3. Create the installation working directory and start preparing the ignition files:

NOTE: Replace "pullSecret" and "sshKey" with your own values.

[root@helper ~]# mkdir -p ocp4
[root@helper ~]# cd ocp4 
[root@helper ocp4]# cat install-config-base.yaml 
apiVersion: v1
baseDomain: techbeatly.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: 'GET FROM cloud.redhat.com'
sshKey: 'SSH PUBLIC KEY'

4. Now copy install-config-base.yaml to install-config.yaml. (install-config.yaml will be consumed by the installer, hence the copy from a base file.)

[root@helper ocp4]# cp install-config-base.yaml install-config.yaml 

5. Create manifests from install-config.yaml; this is required because we want the masters to be unschedulable for non-control-plane pods.

[root@helper ocp4]# openshift-install create manifests
[root@helper ocp4]# sed -i 's|mastersSchedulable: true|mastersSchedulable: false|g' manifests/cluster-scheduler-02-config.yml

[root@helper ocp4]# cat manifests/cluster-scheduler-02-config.yml 
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}

6. Next, create the ignition files (the RHCOS first-boot configuration files) to be used for node bootstrapping.


 [root@helper ocp4]# openshift-install create ignition-configs
 WARNING There are no compute nodes specified. The cluster will not fully initialize without compute nodes. 
 INFO Consuming "Install Config" from target directory 
 [root@helper ocp4]# ll
 total 288
 drwxr-xr-x. 2 root root     50 Jul  1 16:46 auth
 -rw-r--r--. 1 root root 276768 Jul  1 16:46 bootstrap.ign
 -rw-r--r--. 1 root root   3545 Jul  1 16:42 install-config-base.yaml
 -rw-r--r--. 1 root root   1824 Jul  1 16:46 master.ign
 -rw-r--r--. 1 root root     96 Jul  1 16:46 metadata.json
 -rw-r--r--. 1 root root   1824 Jul  1 16:46 worker.ign
 [root@helper ocp4]#
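Ignition files are plain JSON, so a malformed or truncated file can be caught before any node boots off it. A quick validity check (a sketch shelling out to python3; jq would do equally well):

```shell
# verify_ign FILE: succeed only if FILE parses as JSON and carries a
# top-level "ignition" key (present in bootstrap/master/worker .ign files).
verify_ign() {
  python3 -c '
import json, sys
try:
    with open(sys.argv[1]) as f:
        doc = json.load(f)
except Exception:
    sys.exit(1)
sys.exit(0 if "ignition" in doc else 1)
' "$1"
}

# Check every ignition file in the working directory.
for f in *.ign; do
  [ -e "$f" ] || continue
  verify_ign "$f" && echo "$f ok" || echo "$f INVALID"
done
```

Run this in the ocp4 working directory right after `openshift-install create ignition-configs`.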

7. We need to copy the ignition files to the HTTP-hosted directory:

[root@helper ocp4]# mkdir /var/www/html/openshift4/4.1.0/ignitions
 [root@helper ocp4]# cp -v *.ign /var/www/html/openshift4/4.1.0/ignitions/
 'bootstrap.ign' -> '/var/www/html/openshift4/4.1.0/ignitions/bootstrap.ign'
 'master.ign' -> '/var/www/html/openshift4/4.1.0/ignitions/master.ign'
 'worker.ign' -> '/var/www/html/openshift4/4.1.0/ignitions/worker.ign'
 [root@helper ocp4]# restorecon -RFv /var/www/html/
 Relabeled /var/www/html/openshift4 from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0 from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0/ignitions from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0/ignitions/bootstrap.ign from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0/ignitions/master.ign from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 Relabeled /var/www/html/openshift4/4.1.0/ignitions/worker.ign from unconfined_u:object_r:httpd_sys_content_t:s0 to system_u:object_r:httpd_sys_content_t:s0
 [root@helper ocp4]# 

8. At this point we are done with the configuration; enable and start all the services:

[root@helper ocp4]# systemctl enable --now haproxy.service dhcpd httpd tftp named

9. We should now be able to boot up all the nodes and let the bootstrap node install the initial cluster!
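To follow the bootstrap from the helper node, openshift-install can wait on the cluster state. A sketch (run from the working directory that holds the installer assets created above):

```shell
cd ~/ocp4

# Blocks until the bootstrap machine has handed the control plane over to
# the masters; after this it is safe to shut the bootstrap node down and
# remove it from the HAProxy backends.
openshift-install wait-for bootstrap-complete --log-level=info

# Then watch the rest of the installation and check the nodes.
export KUBECONFIG=~/ocp4/auth/kubeconfig
oc get nodes
openshift-install wait-for install-complete
```

install-complete finishes once the cluster operators settle and prints the console URL and kubeadmin credentials.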

For automating this via an Ansible playbook using libvirt, an example can be found here.

Post-Install Known Issue

1. Router pods scheduled on a master node.
-> The load balancer points to the worker nodes (or router-infra nodes, depending on your layout). When router pods are scheduled on a master, operators such as the console cluster operator fail and degrade, since they cannot contact the oauth endpoint route.
-> Setting the masters to unschedulable is a must if we want no other pods scheduled there (or use proper labelling/node selectors after install); this was covered above during manifest creation and configuration.
-> If the masters are still schedulable after the cluster is deployed, run the command below and then restart the router pods (assuming the load balancer points to all nodes except the masters):

[root@helper ocp4]# oc patch schedulers.config.openshift.io cluster --type=merge -p '{"spec":{"mastersSchedulable": false}}'
scheduler.config.openshift.io/cluster patched 


Red Hat ASEAN Senior Platform Consultant. Kubernetes, OpenShift and DevSecOps evangelist.

Comments

12 Responses

  1. Peter says:

    I did follow you steps but I got
    something like:
    master:
    ignition[xxx]: GET error: Get https://api-int.ocp.ocp4.lab:22623/config/master: EOF

    worker:
    ignition[xxx]: GET error: Get https://api-int.ocp.ocp4.lab:22623/config/woker: EOF

    • Muhammad Aizuddin Bin Zali says:

      what curl -kv https://api-int.ocp.ocp4.lab:22623 telling you?

      • Peter says:

        [root@ocp4-helper ~]# curl -kv https://api-int.ocp.ocp4.lab:22623
        * About to connect() to api-int.ocp.ocp4.lab port 22623 (#0)
        * Trying 192.168.126.251…
        * Connected to api-int.ocp.ocp4.lab (192.168.126.251) port 22623 (#0)
        * Initializing NSS with certpath: sql:/etc/pki/nssdb
        * NSS error -5938 (PR_END_OF_FILE_ERROR)
        * Encountered end of file
        * Closing connection 0
        curl: (35) Encountered end of file

        • Rick says:

          Any update on this? I am having the same issue.

          • Muhammad Aizuddin Zali says:

            This can mean anything, First check the bootstrap node for logs. In high level, this is how the bootstrapping looks like:

            1. The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot. (Requires manual intervention if you provision the infrastructure)

            2. The master machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)

            3. The master machines use the bootstrap machine to form an etcd cluster.

            4. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.

            5. The temporary control plane schedules the production control plane to the master machines.

            6. The temporary control plane shuts down and passes control to the production control plane.

            7. The bootstrap machine injects OpenShift Container Platform components into the production control plane.

            8. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)

            9. The control plane sets up the worker nodes.

            10. The control plane installs additional services in the form of a set of Operators.

        • Alok Hom says:

          that's because it needs the bootstrap server, and HAProxy pointing to the bootstrap server's port 22623, to succeed

  2. CertDepot says:

    I’m not able to follow your tutorial. Several steps seem missing.
    You don’t provide any instructions around the pxelinux.0 file (cp?).
    You don’t mention the xinetd daemon at any time (Fedora specific?).
    You don’t say anything about the libvirt network configuration (bridge?) that you use.
    Check your tutorial and make it easier to follow.

    • Muhammad Aizuddin Zali says:

      Thanks for the comment. We are trying to focus on OCP 4 instead of the supporting components, but we did update the post with a note on how to get the pxelinux file from the syslinux package. Other than that, it's RHEL 8 specific with default libvirtd settings; nothing fancy.

  3. sunny says:

    Quick comment any reason why naming convention changed fro master and etcd…
    some start with 01 and others with 0.
    master01.ocp4 IN A 192.168.50.61
    master02.ocp4 IN A 192.168.50.62
    master03.ocp4 IN A 192.168.50.63
    etcd-0.ocp4 IN A 192.168.50.61
    etcd-1.ocp4 IN A 192.168.50.62
    etcd-2.ocp4 IN A 192.168.50.63

    • Muhammad Aizuddin Zali says:

      its required by etcd-(N-1) rule for etcd discovery. Master01 will have etcd-0 naming and so on.

  4. […] Some of this (PXE configs) DHCP configuration coming from another post. […]

  5. Pooriya says:

    Hi,

    Thank you very much for your nice post. I have been able to stand up the control plane manually, but when I try to use the tftp server to make it more streamlined, I get stuck on the tftp boot menu page. When I select any of the entry for bootstrap, master, or worker nothing happens and it is just a menu. Any idea what the issue is please? I exactly followed the below section. Thanks.

    2.3 Configuring DHCP and PXE
