
OpenShift 4: Red Hat OpenShift Container Storage 4.5 Lab Installation

Image: plutonlogistics.com

The official documentation for Red Hat OpenShift Container Storage can be found here.

This blog is written only to show the technical feasibility of such a configuration; it does not take official supportability scope into account.

As you may already know, OpenShift Container Storage (OCS) 4.x went GA in the last two weeks.

OCS uses Ceph Storage as its backend (to understand more about Ceph, please read here) and NooBaa for S3-compliant object storage.

In this blog we will install OCS backed by local volumes:

The Local Volume disk topology:

Configuration Prerequisites:

# oc get sc
NAME                          PROVISIONER                             AGE
local-sc-osd                  kubernetes.io/no-provisioner            40m

NOTE: OCS mon requires volumeMode: Filesystem and OCS OSD requires volumeMode: Block.
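
One way to double-check the volume mode and storage class of the PVs created by the Local Storage Operator is a custom-columns query like the one below (just an illustrative query, adjust the columns as needed):

# oc get pv -o custom-columns=NAME:.metadata.name,MODE:.spec.volumeMode,STORAGECLASS:.spec.storageClassName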

Example of the LocalVolume YAML:

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-osd"
  namespace: "local-storage" 
spec:
  nodeSelector: 
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker01
          - worker02
          - worker03
  storageClassDevices:
    - storageClassName: "local-sc-osd"
      volumeMode: Block
      devicePaths: 
        - /dev/vdc
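
Since the StorageCluster below sets monDataDirHostPath, the mons use a host path in this lab and no Filesystem PVs are needed. If you instead want to back the mons with PVCs, a second LocalVolume with volumeMode: Filesystem would be required; a hypothetical sketch using the otherwise unused vdb disks (the name and storage class here are illustrative, not from this lab):

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-mon"              # hypothetical name
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker01
          - worker02
          - worker03
  storageClassDevices:
    - storageClassName: "local-sc-mon" # hypothetical storage class
      fsType: ext4
      volumeMode: Filesystem           # OCS mon requires Filesystem volume mode
      devicePaths:
        - /dev/vdb                     # the 10GiB disk on each worker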

Guest and Host Configurations:

# oc version
Client Version: openshift-clients-4.2.1-201910220950
Server Version: 4.2.14
Kubernetes Version: v1.14.6+b294fe5
# free -m
              total        used        free      shared  buff/cache   available
Mem:          64213       53613         463         315       10136        9597
Swap:          8191          59        8132
# lscpu | grep node
NUMA node(s):                    1
NUMA node0 CPU(s):               0-7
# cat /proc/loadavg 
10.76 11.66 11.90 27/1540 13455
# dmidecode --type system
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.
Handle 0x000F, DMI type 1, 27 bytes
System Information
	Manufacturer: LENOVO
	Product Name: XXXXX
	Version: ThinkPad P50
	Serial Number: XXXXXX
	UUID: XXXXXXX
	Wake-up Type: Power Switch
	SKU Number: LENOVO_MT_20EQ_BU_Think_FM_ThinkPad P50
	Family: ThinkPad P50

VM resources assigned (the block.1 disks, vdb, are not being used):

# virsh domstats --vcpu --balloon --block | egrep "Domain|balloon.current|vcpu.current|block.[[:digit:]].(name|capacity)"
Domain: 'master01.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=32212254720
Domain: 'master02.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=32212254720
Domain: 'master03.ocp4.local.bytewise.my'
  balloon.current=8388608
  vcpu.current=6
  block.0.name=vda
  block.0.capacity=42949672960
Domain: 'worker01.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200
Domain: 'worker02.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200
Domain: 'worker03.ocp4.local.bytewise.my'
  balloon.current=25165824
  vcpu.current=8
  block.0.name=vda
  block.0.capacity=42949672960
  block.1.name=vdb
  block.1.capacity=10737418240
  block.2.name=vdc
  block.2.capacity=53687091200

Before proceeding to the next section, ensure that the OCS operator is already installed per the official documentation. We won't cover the operator installation here since the official docs are sufficient.
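
Before moving on, a quick way to confirm the operator is installed and healthy (assuming it was installed into the openshift-storage namespace):

# oc get csv -n openshift-storage
# oc get pods -n openshift-storage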

Configurations:

1. Label each of the storage nodes with the labels below. Take note of the rack label: it provides the topology awareness used by the CRUSH map for HA and resiliency, so each node should have its own rack value (e.g. worker01 uses topology.rook.io/rack: rack0, worker02 uses topology.rook.io/rack: rack1 and so on). Example labeling commands are shown after the notes below.

NOTE: You need to relabel the nodes if you uninstalled a previous OCS cluster.

# oc get node -l  beta.kubernetes.io/arch=amd64 -o yaml | egrep 'kubernetes.io/hostname: worker|cluster.ocs.openshift.io/openshift-storage|topology.rook.io/rack'
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker01
      topology.rook.io/rack: rack0
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker02
      topology.rook.io/rack: rack1
      cluster.ocs.openshift.io/openshift-storage: ""
      kubernetes.io/hostname: worker03
      topology.rook.io/rack: rack2

NOTE: This step is automated when creating the cluster from the operator page in the GUI. However, the GUI creates the YAML resources with default values rather than the values we want to use, hence this manual step.
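
For reference, applying the labels manually from the CLI looks something like this (repeat with each node's own rack value):

# oc label node worker01 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack0
# oc label node worker02 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack1
# oc label node worker03 cluster.ocs.openshift.io/openshift-storage='' topology.rook.io/rack=rack2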

2. Define a StorageCluster CR called ocscluster.yaml (note the storage size is 50Gi) to provision the new OCS cluster, then run oc create -f ocscluster.yaml:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  namespace: openshift-storage
  name: ocs-storagecluster
spec:
  manageNodes: false
  monDataDirHostPath: /var/lib/rook
  resources:
    mon:
      requests: {}
      limits: {}
    mds:
      requests: {}
      limits: {}
    rgw:
      requests: {}
      limits: {}
    mgr:
      requests: {}
      limits: {}
    noobaa-core:
      requests: {}
      limits: {}
    noobaa-db:
      requests: {}
      limits: {}
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1
    resources: {}
    placement: {}
    dataPVCTemplate:
      spec:
        storageClassName: local-sc-osd   # must match the local block storage class created earlier
        accessModes:
        - ReadWriteOnce
        volumeMode: Block
        resources:
          requests:
            storage: 50Gi
    portable: false
    replica: 3

3. This will trigger the OCS operator (a meta operator) to provision the required operators that build the OCS cluster components and services.
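
You can follow the rollout from the CLI while the operators bring everything up, for example:

# oc get pods -n openshift-storage -w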


4. Finally you will see:
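
From the CLI, a few checks along these lines should confirm the cluster is up (the exact output will vary); the RBD, CephFS and NooBaa storage classes created by OCS should also appear in oc get sc:

# oc get storagecluster -n openshift-storage
# oc get cephcluster -n openshift-storage
# oc get sc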

This concludes the blog. We anticipate that OCS 4.3 and beyond will bring better support for external Ceph clusters, independent mode and other great cloud-native storage features. Stay tuned!
