IBM Maximo Application Suite On‑Prem Installation: Design Considerations
IBM Maximo | MAS | MAS Deployment | Red Hat OpenShift | Network | IT Security | Information Security | Secure Solutions
This article provides a pragmatic, production-focused reference for design considerations when deploying IBM Maximo Application Suite (MAS) on Red Hat OpenShift running on-prem. Because MAS has a considerable infrastructure footprint, designing the solution carefully and stating the infrastructure requirements clearly is important to ensure the infrastructure, network, and security teams are on board and aligned.

Reference Architecture Overview
- In this implementation example, the OpenShift cluster runs on VMware vSphere, which provides the compute hosts for control-plane and worker nodes, plus shared storage and networking. The cluster was installed with Red Hat's Assisted Installer for on-prem, but other installation methods can be used as well.
- Two virtual IPs (VIPs) are reserved, one for the API and one for Ingress, so operators and apps can reach the cluster and routes consistently even as pods move (a basic reachability check is sketched after this list).
- Control-plane nodes handle etcd, scheduling, and core services; worker nodes run MAS workloads. Node counts and sizing (CPU/RAM/disk) follow capacity planning; nodes may also provide storage if OpenShift Data Foundation is used.
- Cluster networks: OpenShift’s service network uses 172.30.0.0/16 and the pod network uses 10.128.0.0/14.
- Corporate egress/proxy: If your data center uses a re-encrypting proxy (e.g., Zscaler ZIA), import its certificates early; this matters for cluster installation and image pulls. Depending on the specific installation, the cluster may need to reach public container registries such as quay.io and docker.io.
- Persistent storage: Provide a StorageClass via OpenShift Data Foundation (ODF) or an enterprise NFS.
- DNS & certificates: Delegate a cluster subdomain (e.g., mas.example.org) to an ACME DNS-01 provider (Azure DNS, Cloudflare, Route 53), and forward subdomain lookups there so cert-manager can validate.
- Cert-manager egress: Ensure cert-manager can reach the chosen DNS provider’s endpoints (e.g., Google/Cloudflare) for challenge validation; update operator settings if access is restricted.
- IP planning: Allocate one IP per node plus the API/Ingress VIPs; align with firewall/NAT rules to permit required egress and inbound routes.
- Install jumpbox: Use a bastion/jumpbox with Docker/Podman to run the mascli container and Ansible scripts; on Windows, WSL can supply a Linux userland.
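Once the cluster is up, a short script can confirm that the reserved VIPs mentioned above answer on their expected ports. This is a minimal sketch; the VIP addresses are hypothetical placeholders to be replaced with the IPs you actually reserve.
```python
# Minimal sketch: verify the API and Ingress VIPs answer on their expected ports.
# The VIP addresses below are hypothetical placeholders; substitute your reserved IPs.
import socket

VIPS = {
    "api (6443)": ("10.0.10.5", 6443),      # OpenShift API VIP
    "ingress (443)": ("10.0.10.6", 443),    # Ingress/router VIP
}

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in VIPS.items():
    status = "reachable" if tcp_check(host, port) else "NOT reachable"
    print(f"{name} at {host}:{port} -> {status}")
```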
Infrastructure Design and Considerations
As a starting point, use the IBM infrastructure calculator to estimate infrastructure requirements, i.e. the number of control-plane and worker nodes and their sizing (CPU, RAM, disk): https://www.ibm.com/docs/en/masv-and-l/cd?topic=premises-requirements-capacity-planning
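To communicate the sizing output to the infrastructure team, it can help to roll per-node figures up into cluster totals. The sketch below is illustrative only; every number in the node plan is a placeholder to be replaced with the values from IBM's capacity planning guidance.
```python
# Minimal sketch: aggregate a hypothetical node plan into cluster totals (vCPU, RAM, disk).
# All numbers are placeholders; replace them with the output of IBM's capacity planning.
node_plan = [
    # (role, count, vcpu, ram_gb, disk_gb)
    ("control-plane", 3, 8, 32, 120),
    ("worker",        6, 16, 64, 200),
    ("storage (ODF)", 3, 16, 64, 2048),
]

totals = {"vcpu": 0, "ram_gb": 0, "disk_gb": 0}
for role, count, vcpu, ram, disk in node_plan:
    totals["vcpu"] += count * vcpu
    totals["ram_gb"] += count * ram
    totals["disk_gb"] += count * disk
    print(f"{role:<14} x{count}: {vcpu} vCPU / {ram} GB RAM / {disk} GB disk each")

print(f"Cluster total: {totals['vcpu']} vCPU, {totals['ram_gb']} GB RAM, {totals['disk_gb']} GB disk")
```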
Provide a default StorageClass using ODF or an enterprise NFS as required by MAS components. For ODF, deploy three storage/infra nodes with SSD/NVMe and validate IOPS/latency prior to MAS installation.
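Before starting the MAS install, it is worth confirming that a default StorageClass is actually present. The sketch below shells out to the oc CLI (assuming you are already logged in) and checks the standard Kubernetes default-class annotation.
```python
# Minimal sketch: list StorageClasses and flag the default one.
# Assumes the `oc` CLI is installed and already logged in to the cluster.
import json
import subprocess

out = subprocess.run(
    ["oc", "get", "storageclass", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

classes = json.loads(out)["items"]
default_found = False
for sc in classes:
    name = sc["metadata"]["name"]
    annotations = sc["metadata"].get("annotations", {})
    is_default = annotations.get("storageclass.kubernetes.io/is-default-class") == "true"
    default_found = default_found or is_default
    provisioner = sc.get("provisioner", "unknown")
    print(f"{name:<30} provisioner={provisioner:<40} default={is_default}")

if not default_found:
    print("WARNING: no default StorageClass found; set one before installing MAS")
```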
In high-availability (non-SNO) deployments, distribute node VMs across distinct hypervisors/physical servers. Avoid co-locating two control-plane VMs on the same host; apply the same separation to ODF/OCS storage nodes.
With OCS/ODF, a 10 Gbps link is recommended for storage nodes. In the default three-node setup, every write is replicated to all three nodes.
Keep the application and database in the same data center (or on a sub-ms link) to prevent latency and performance issues.
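As a rough pre-check of the application-to-database link, TCP connect round-trip time to the database listener can be sampled from a host near the worker nodes. The host and port below are hypothetical placeholders, and connect time is only a proxy for query latency.
```python
# Minimal sketch: sample TCP connect round-trip time to the database host.
# DB_HOST/DB_PORT are hypothetical placeholders (e.g. a Db2 or Oracle listener).
import socket
import time

DB_HOST, DB_PORT = "db.example.org", 50000  # placeholder host and port
SAMPLES = 10

times_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
        pass
    times_ms.append((time.perf_counter() - start) * 1000)

print(f"min={min(times_ms):.2f} ms  avg={sum(times_ms)/len(times_ms):.2f} ms  max={max(times_ms):.2f} ms")
```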
Network Considerations
OpenShift's default (and recommended) service network is 172.30.0.0/16 and the default pod network is 10.128.0.0/14; avoid overlapping these ranges with on-prem subnets to prevent conflicts. The default pod and service subnets can be changed at install time if they conflict with the customer's existing subnets.
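A quick overlap check against the IP address plan can catch such conflicts before installation. The sketch below uses Python's standard ipaddress module; the on-prem subnet list is a hypothetical example to be replaced with real IPAM data.
```python
# Minimal sketch: detect overlaps between cluster CIDRs and on-prem subnets.
# The on-prem subnet list is a hypothetical example; use your real IPAM data.
import ipaddress

cluster_cidrs = {
    "pod network": ipaddress.ip_network("10.128.0.0/14"),
    "service network": ipaddress.ip_network("172.30.0.0/16"),
}
onprem_subnets = [ipaddress.ip_network(s) for s in ("10.0.0.0/16", "172.16.0.0/12", "192.168.50.0/24")]

for name, cidr in cluster_cidrs.items():
    conflicts = [str(s) for s in onprem_subnets if cidr.overlaps(s)]
    if conflicts:
        print(f"CONFLICT: {name} {cidr} overlaps {', '.join(conflicts)}; change it at install time")
    else:
        print(f"OK: {name} {cidr} does not overlap the listed on-prem subnets")
```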
Identify the internet proxy, if any, used by the on-prem network and obtain the certificates for any re-encrypting proxy such as Zscaler Internet Access (ZIA) that might be in the path, as these are entered while generating the OpenShift discovery ISOs.
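One way to confirm whether outbound TLS is being re-encrypted, and therefore whose CA certificate must be supplied, is to inspect the certificate presented for a registry endpoint from the on-prem network. This is a minimal sketch using Python's ssl module; if the issuer is the proxy vendor's CA rather than a public CA, export that CA certificate for the discovery ISO and cluster proxy configuration.
```python
# Minimal sketch: inspect the certificate issuer presented for quay.io.
# A re-encrypting proxy (e.g. Zscaler ZIA) will show its own CA as the issuer,
# or the handshake will fail if the proxy CA is not in the local trust store.
import socket
import ssl

HOST, PORT = "quay.io", 443

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            issuer = dict(x[0] for x in tls.getpeercert()["issuer"])
            print(f"Certificate for {HOST} issued by: {issuer.get('organizationName', issuer)}")
except ssl.SSLCertVerificationError as exc:
    print(f"TLS verification failed ({exc.verify_message}); a re-encrypting proxy CA is likely in the path")
```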
IP requirements for the on-prem network are one IP address per node, plus reserved VIPs for API and Ingress.
Using DHCP reservations for the nodes is recommended; alternatively, static IPs can be used.
DNS Considerations
For ACME, delegate a subdomain for the cluster (for example, mas.example.org) to an ACME DNS-01 provider such as Azure DNS, Cloudflare, or Amazon Route 53.
Forward DNS queries for the subdomain to the DNS-01 provider's name servers for resolution.
Cert-manager uses Google and Cloudflare public DNS to self-check the subdomain before validation; if either is not accessible from the on-prem network, update the cert-manager operator's settings accordingly. Cert-manager must also reach your chosen DNS provider's API endpoints.
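Delegation and resolver reachability can be verified from the on-prem network before installing MAS. The sketch below queries the NS records of the delegated subdomain directly against Google and Cloudflare public DNS; it assumes the third-party dnspython package (pip install dnspython), and mas.example.org is the placeholder subdomain from the example above.
```python
# Minimal sketch: confirm NS delegation of the cluster subdomain is visible
# through the public resolvers that cert-manager uses for its self-check.
# Requires dnspython (pip install dnspython); mas.example.org is a placeholder.
import dns.resolver

SUBDOMAIN = "mas.example.org"
PUBLIC_RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1"}

for name, ip in PUBLIC_RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    try:
        answer = resolver.resolve(SUBDOMAIN, "NS")
        ns_hosts = ", ".join(sorted(r.target.to_text() for r in answer))
        print(f"{name} ({ip}): {SUBDOMAIN} delegated to {ns_hosts}")
    except Exception as exc:  # NXDOMAIN, timeout, blocked egress, ...
        print(f"{name} ({ip}): lookup failed -> {exc}")
```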
The Ingress router isn’t limited to the default *.apps.<cluster-domain> wildcard. It will serve any Route you configure, provided the corresponding DNS records resolve to the ingress IP.
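For a custom Route hostname, a quick check that the name resolves to the ingress VIP avoids surprises later. Both values in the sketch below are hypothetical placeholders.
```python
# Minimal sketch: check that a custom Route hostname resolves to the ingress VIP.
# Both values are hypothetical placeholders.
import socket

INGRESS_VIP = "10.0.10.6"
route_host = "maximo.example.org"

resolved = socket.gethostbyname(route_host)
verdict = "OK, matches ingress VIP" if resolved == INGRESS_VIP else f"does NOT match ingress VIP {INGRESS_VIP}"
print(f"{route_host} resolves to {resolved} -> {verdict}")
```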
The list of DNS providers supported for ACME DNS-01 is here: https://cert-manager.io/docs/configuration/acme/dns01/
There are other validation mechanisms for ACME besides DNS-01, but DNS-01 is the only one that supports wildcard certificates.
This example uses Automated Certificate Management with certificates issued by Let’s Encrypt. Alternatively, you can use Manual Certificate Management or Automated Certificate Management backed by an Active Directory (AD CS) certification authority.
Jumpbox/bastion host
For installation, a jumpbox running Docker/Podman is needed to run the mascli container. If the jumpbox runs Windows, WSL can be used to provide a Linux kernel and userland.
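For illustration, the MAS CLI container can be launched from the jumpbox once a container runtime is present. The sketch below shells out from Python; quay.io/ibmmas/cli is the commonly published MAS CLI image, but verify the image reference and tag against current IBM documentation.
```python
# Minimal sketch: from the jumpbox, check for a container runtime and start the
# MAS CLI container interactively. quay.io/ibmmas/cli is the commonly published
# image for the MAS CLI; confirm the image/tag against current IBM documentation.
import shutil
import subprocess

runtime = shutil.which("podman") or shutil.which("docker")
if runtime is None:
    raise SystemExit("Neither podman nor docker found on the jumpbox; install one first")

# -ti gives an interactive shell inside the container; --rm cleans up on exit.
subprocess.run([runtime, "run", "-ti", "--rm", "--pull", "always", "quay.io/ibmmas/cli"], check=True)
```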
Conclusion
Deploying MAS on OpenShift on-prem is mostly about getting the foundations right: accurate capacity planning, a reliable storage class, clean IP/CIDR choices, DNS delegation for ACME, and predictable egress through the corporate proxy. When those pieces are aligned, the MAS install becomes repeatable and supportable.
About Khalid Sayyed
Khalid Sayyed is a Maximo Consultant with over 15 years of industry experience, primarily in IBM Maximo and Cognos Analytics. He has worked on multiple Maximo implementations and on support/administration projects, leading teams of developers and consultants. He has extensive experience with IBM Maximo infrastructure and architecture, integrations, administration, upgrades, configurations, customizations, analytics, and end-user/admin training. Khalid is passionate about IoT, data science, and their applications in the reliability industry for predictive maintenance.
