This article provides a pragmatic, production-focused reference for design considerations when deploying IBM Maximo Application Suite (MAS) on Red Hat OpenShift running on-premises. Because MAS has a considerable infrastructure footprint, it is important to design the infrastructure carefully and state the requirements clearly, so that the infrastructure, network, and security teams are aligned from the start.
As a starting point, use the IBM Infrastructure Calculator to estimate infrastructure requirements: the number of control-plane and worker nodes and their sizing (CPU, RAM, disk). See https://www.ibm.com/docs/en/masv-and-l/cd?topic=premises-requirements-capacity-planning
Provide a default StorageClass backed by OpenShift Data Foundation (ODF) or an enterprise NFS server, as required by MAS components. For ODF, deploy three storage/infra nodes with SSD/NVMe disks and validate IOPS and latency before the MAS installation.
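As an illustration, a StorageClass can be flagged as the cluster-wide default with an annotation; the class name below assumes the ODF Ceph RBD class and may differ in your cluster:

```sh
# Mark the ODF Ceph RBD class as the cluster-wide default (class name may differ)
oc patch storageclass ocs-storagecluster-ceph-rbd \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Confirm which class is now flagged as (default)
oc get storageclass
```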
In high-availability (non-SNO) deployments, distribute node VMs across distinct hypervisors/physical servers. Avoid co-locating two control-plane VMs on the same host; apply the same separation to ODF/OCS storage nodes.
With OCS/ODF, a 10 Gbps link is recommended for storage nodes: in the default three-node setup, every write is replicated to all three nodes, so storage network bandwidth directly affects write latency and throughput.
Keep the application and database in the same data center (or on a sub-millisecond link) to avoid latency-driven performance problems.
By default (and as recommended), OpenShift's service network is 172.30.0.0/16 and its pod network is 10.128.0.0/14; avoid overlapping these ranges with on-prem subnets to prevent conflicts. If they do collide with the customer's existing subnets, both defaults can be changed at install time.
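For illustration, the pod and service CIDRs are set in install-config.yaml at install time; the values below are the defaults, and the network type shown is an assumption:

```yaml
# install-config.yaml (excerpt): override these if they collide with on-prem ranges
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14     # pod network
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16           # service network
```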
Identify any internet proxy used by the on-prem network, and obtain the CA certificate of any re-encrypting proxy such as Zscaler Internet Access (ZIA) that might be in place; both the proxy settings and the certificate are entered while generating the OpenShift discovery ISOs.
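As a sketch, a cluster-wide proxy and the re-encrypting proxy's CA can also be declared in install-config.yaml; the proxy hostname and port here are placeholders:

```yaml
# install-config.yaml (excerpt): hypothetical proxy endpoint and trust bundle
proxy:
  httpProxy: http://proxy.example.org:3128
  httpsProxy: http://proxy.example.org:3128
  noProxy: .cluster.local,.svc,10.128.0.0/14,172.30.0.0/16
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <ZIA root CA certificate here>
  -----END CERTIFICATE-----
```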
IP requirements for the on-prem network are one IP address per node, plus reserved virtual IPs (VIPs) for the API and Ingress endpoints.
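For example, on a vSphere installer-provisioned cluster the two reserved VIPs are declared in install-config.yaml; the platform and addresses below are assumptions, and the plural field names apply to recent OpenShift releases:

```yaml
# install-config.yaml (excerpt): reserved VIPs, hypothetical addresses
platform:
  vsphere:
    apiVIPs:
    - 192.168.10.5     # api.<cluster-domain>
    ingressVIPs:
    - 192.168.10.6     # *.apps.<cluster-domain>
```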
Using DHCP reservations for the nodes is recommended; static IPs can be used as an alternative.
For ACME, delegate a subdomain for the cluster (for example, mas.example.org) to an ACME DNS-01 provider such as Azure DNS, Cloudflare, or Amazon Route 53.
Forward DNS queries for the delegated subdomain to the DNS-01 provider's name servers for resolution.
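A quick way to confirm the delegation from the on-prem network is a dig query against the delegated zone (the domain is the earlier example):

```sh
# Should return the DNS-01 provider's name servers
dig NS mas.example.org +short
```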
Cert-manager needs to reach Google and Cloudflare public DNS to self-check that the DNS-01 challenge record has propagated; if either is not accessible from the on-prem network, configure alternative recursive name servers in the cert-manager operator accordingly. Cert-manager must also be able to reach your chosen DNS provider's API endpoints.
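A minimal ClusterIssuer sketch for Let's Encrypt with a DNS-01 solver, assuming Cloudflare hosts the delegated zone and an API token has been stored in a Secret named cloudflare-api-token (the email address is a placeholder):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.org                # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key       # Secret holding the ACME account key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token    # assumed pre-created Secret
            key: api-token
```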
The Ingress router isn’t limited to the default *.apps.<cluster-domain> wildcard. It will serve any Route you configure, provided the corresponding DNS records resolve to the ingress IP.
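For instance, a Route with a non-default host is served as long as DNS for that host points at the ingress VIP; the host and backing service below are hypothetical:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: custom-host-example
spec:
  host: app.mas.example.org   # must resolve to the ingress VIP
  to:
    kind: Service
    name: my-service          # hypothetical backing Service
  tls:
    termination: edge
```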
The list of DNS providers supported for ACME DNS-01 is here: https://cert-manager.io/docs/configuration/acme/dns01/
There are other ACME validation mechanisms besides DNS-01 (such as HTTP-01), but DNS-01 is the only one that supports wildcard certificates.
This example uses Automated Certificate Management with certificates issued by Let’s Encrypt. Alternatively, you can use Manual Certificate Management or Automated Certificate Management backed by an AD Certification Authority.
For installation, a jumpbox running Docker or Podman is needed to run the MAS CLI (mascli) container. If the jumpbox runs Windows, WSL can be used to provide the required Linux environment.
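At the time of writing, IBM publishes the MAS CLI container image on quay.io; a typical interactive launch looks like this:

```sh
# Pull and start the latest MAS CLI image interactively
podman run -ti --rm --pull always quay.io/ibmmas/cli
```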
Deploying MAS on OpenShift on-prem is mostly about getting the foundations right: accurate capacity planning, a reliable storage class, clean IP/CIDR choices, DNS delegation for ACME, and predictable egress through the corporate proxy. When those pieces are aligned, the MAS install becomes repeatable and supportable.