Welcome back to the OpenShift BootCamp series.
In this video you will learn about the OpenShift architecture: the control plane, compute nodes, infrastructure nodes, and a sample reference architecture.
Under the hood, OpenShift runs the same Kubernetes, orchestrating your application pods and services. Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management.
OpenShift is an enterprise-ready Kubernetes distribution, with many enterprise features added on top of Kubernetes. You can deploy OpenShift clusters on public or private clouds. Major components in OpenShift come from RHEL and related Red Hat technologies, and therefore benefit from Red Hat's intensive testing and certification initiatives. Of course, OpenShift is also open source, which makes the product more open and collaborative.
OpenShift uses a special operating system for its cluster nodes. For Red Hat OpenShift it is Red Hat Enterprise Linux CoreOS (RHCOS), and for OKD it is Fedora CoreOS (FCOS). CoreOS is a special-purpose, container-oriented operating system with features such as fast installation and simplified upgrades.
OpenShift uses Ignition for first-boot system configuration; Ignition brings up the machines and configures them automatically. OpenShift uses CRI-O as its container runtime, a Kubernetes-native runtime, so you do not need the Docker container engine: CRI-O starts and manages the containers in OpenShift. CoreOS also ships with the kubelet, the primary Kubernetes node agent, which is responsible for managing containers on each node.
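To make the first-boot idea concrete, here is a minimal Ignition configuration sketch (spec version 3.2.0) that writes a single file when the machine first boots. The hostname value is purely illustrative; in a real cluster the installer and Machine Config Operator generate these configs for you.

```json
{
  "ignition": { "version": "3.2.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,worker-0" }
      }
    ]
  }
}
```

Ignition runs once, before the OS fully boots, which is why CoreOS machines come up already configured instead of being configured after the fact by a tool like Ansible.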
Please note, there are some restrictions on the choice of operating system for cluster nodes. For the control plane nodes you must use RHCOS, but for the compute (worker) nodes you can use either RHCOS or RHEL. If you are using OKD, you can use alternatives such as FCOS, CentOS, or Fedora.
The high-level OpenShift architecture is simple: you have your infrastructure on bare metal, private cloud, public cloud, or any other supported platform; on top of that, the base operating system, RHCOS or RHEL; and on top of that, Kubernetes and the cluster components.
Primarily you have two types of nodes in your OpenShift cluster: control plane nodes and compute nodes (also known as master nodes and worker nodes). By default, OpenShift configures the master nodes so that end-user application pods cannot be scheduled on them. Master nodes, or control plane nodes, are dedicated to running the control plane services. They run Kubernetes services such as kube-apiserver, kube-controller-manager, kube-scheduler, and etcd, as well as OpenShift services such as the OpenShift API server, OpenShift controller manager, and OpenShift OAuth API server. Some of these services run as static pods and some run as systemd services. Services that must always start when the system boots, such as the kubelet and CRI-O, run as systemd services. Please note, the kubelet also starts static pods on each node based on that node's configuration, such as the scheduler, SDN, etc.
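The "no user pods on masters" behavior comes from a Kubernetes taint on the control plane nodes. A fragment of a master Node object (as shown by `oc get node <master> -o yaml`) looks roughly like this; note that on newer releases the taint key is `node-role.kubernetes.io/control-plane` rather than `master`:

```yaml
# Fragment of a control plane Node object.
# This default taint repels any pod that does not carry a matching toleration,
# which is why ordinary application pods never land on master nodes.
spec:
  taints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```

Control plane components themselves carry tolerations for this taint, which is how they are still able to run on the master nodes.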
While control plane nodes run OpenShift services and pods, user application pods run on the compute (worker) nodes of the OpenShift cluster. Worker nodes also run similar cluster services, such as kube-proxy, the kubelet, and CRI-O.
For enterprise and production clusters, you may need to isolate OpenShift infrastructure workloads from the master nodes; in such cases you can configure infrastructure nodes. These are dedicated nodes in the OpenShift cluster for running infrastructure services such as the router, the internal image registry, logging, monitoring, etc.
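As a sketch of how infrastructure workloads are pinned to such nodes: after labeling the chosen nodes (for example with `oc label node <node> node-role.kubernetes.io/infra=""`), you can point the default router at them through the IngressController's node placement. This is one documented pattern, shown here in outline:

```yaml
# Example: schedule the default router onto infra-labeled nodes.
# Assumes the target nodes already carry the node-role.kubernetes.io/infra label.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
```

The registry, logging, and monitoring components expose similar nodeSelector settings in their own operator configurations, so the same labeling approach covers all the infrastructure services mentioned above.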
You have the freedom to architect and deploy OpenShift clusters based on your needs; the architecture also depends on the platform and environment you are installing on. Refer to the OpenShift documentation for details and requirements when you design an OpenShift cluster.
Disclaimer: The views expressed and the content shared are those of the author and do not reflect the views of the author's employer or techbeatly platform.