Interloc Solutions Blog

How to create and use a local Storage Class for SNO (Part 1)

Written by Julio Perera | Feb 6, 2024 5:04:33 PM

At Interloc, many of our Maximo consultants are continuously improving their MAS functional and technical skills. For those resources focusing on MAS installations, we use the Single Node OpenShift (SNO) deployment option to create our own individual environments to replicate issues and probable solutions. Because these SNO instances are predominantly deployed either on bare-metal hosts or on virtualization platforms (such as VMware ESXi VMs), a locally provisioned Storage Class is needed so that MAS and its dependencies can use it for shared storage.

Additionally, for multi-node, highly available deployments, the recommendation is to install OpenShift Data Foundation / OpenShift Container Storage instead.

Bare-metal deployments also need the in-cluster Image Registry to be configured with persistent storage backing, which in turn requires a Storage Class. We will cover the Image Registry configuration in a future second blog post on this subject.
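For context, the registry is controlled through the Image Registry operator's Config resource. The snippet below is a minimal sketch only, assuming a PVC named registry-storage has already been provisioned from the local Storage Class; the actual procedure will be covered in the second post.

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  replicas: 1                  # single registry replica on SNO
  rolloutStrategy: Recreate    # needed when the backing PVC is ReadWriteOnce
  storage:
    pvc:
      claim: registry-storage  # hypothetical PVC name, provisioned from the local Storage Class
```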

In this blog we will discuss how to provide such a Storage Class locally, without creating external dependencies, as well as the considerations and limitations that apply.

First, the Storage Class that we are going to create only applies to Bare Metal or locally virtualized deployments, including on-prem. For Cloud-based deployments, a proper Cloud-based storage solution should be implemented instead. 

The IBM Ansible scripting at https://github.com/ibm-mas/ansible-devops does not specifically cover this case; it expects storage to be provided externally.

The performance of this approach should be very good, as there is very little overhead and no networking involved. Latency and bandwidth will therefore closely track the performance characteristics of the underlying storage.

After investigating, we have determined we can use the LVM Storage Operator (formerly ODF LVM Storage), which provides a Storage Class that supports automatic provisioning as required, but only in ReadWriteOnce (RWO) mode, which is a limitation. Given that we intend to use the Storage Class on a Single Node OpenShift instance, there should be no need for a ReadWriteMany class and ReadWriteOnce should suffice. The only caveat is that the PVC definitions for MAS Manage need to be adjusted manually to use ReadWriteOnce mode.
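As an illustration of that adjustment, a PVC bound against the LVM-provided class would request ReadWriteOnce explicitly. This is only a sketch; the PVC name, namespace, class name, and size below are assumptions, not the actual MAS Manage definitions.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manage-doclinks-pvc     # hypothetical PVC name
  namespace: mas-inst1-manage   # hypothetical MAS Manage namespace
spec:
  accessModes:
    - ReadWriteOnce             # the only access mode the LVM Storage Class supports
  storageClassName: lvms-vg1    # assumed name of the class created by the LVM Storage Operator
  resources:
    requests:
      storage: 50Gi
```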

The actual deployment will depend on adding a secondary hard drive (SSD type recommended) to the VM. The primary hard drive will be used for the operating system (Red Hat CoreOS, where the node itself is installed), and the secondary will be left as an initially unpartitioned disk, available to be consumed by the Operator as shared storage. An example layout is shown below, followed by a sketch of how the Operator consumes the second disk:

  1. First Disk:  Operating System, 120 GB minimum, 150 GB recommended
  2. Second Disk: LVM space to use as shared storage, 512 GB (typical)
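For reference, the LVM Storage Operator consumes that second, unpartitioned disk through an LVMCluster resource. The sketch below is illustrative only; the device path (/dev/sdb) and volume group name (vg1) are assumptions, and the actual resource will be built out in the next post.

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1                # resulting Storage Class is typically named lvms-vg1
        default: true
        deviceSelector:
          paths:
            - /dev/sdb           # assumed device path of the 512 GB second disk
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90        # share of the volume group allocated to the thin pool
          overprovisionRatio: 10
```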

Depending on the Red Hat OpenShift version, there are two different Operators with slightly different definitions that can be used. Since we assume we are installing on Red Hat OpenShift 4.12 or later, we will use the newer operator definition and documentation. Please see below for the referenced URLs for both versions:

For RH OCP 4.10/4.11 versions:   https://github.com/ibm-mas-manage/sno/blob/main/docs/index.md 

For RH OCP 4.12 versions and up:   https://github.com/openshift/lvm-operator 

Alternate approaches, such as an NFS server with a provisioner or an integrated all-in-one NFS approach, can also be deployed internally. These are less preferred because, when shutting down the node, services can stop out of sequence, which in turn creates issues when attempting to upgrade OpenShift.

This concludes the considerations for the deployment.  In the next blog entry, we will cover the specific steps to accomplish the installation and configuration of the Storage Provider.