aarna.ml’s GPU Cloud Management Software enables GPU-as-a-Service providers to build hyperscaler-grade AI clouds, moving them from static GPU rentals to fully on-demand, multi-tenant, self-service IaaS and PaaS
aarna.ml provides sophisticated GPU Cloud Management Software (CMS) designed to automate and orchestrate multi-tenant AI infrastructure with on-demand, self-service Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) capabilities. As enterprises and Neoclouds scale their AI initiatives, they face immense operational complexity in managing a mix of GPU, DPU, and networking resources while ensuring security and maximizing ROI. The aarna.ml GPU CMS abstracts this complexity, transforming static hardware into a dynamic, self-service GPU-as-a-Service offering.
At its core, the platform provides robust Infrastructure-as-a-Service (IaaS) capabilities with a focus on "hard isolation." It automates the provisioning of secure, tenant-specific environments, including Bare Metal (BMaaS), GPU-enabled VMs (VMaaS), and dedicated Kubernetes clusters (K8s-as-a-Service) built on Red Hat OpenShift. Our GPU CMS enforces true multi-tenancy across the entire stack: compute, storage, and networking (VPCs, VxLANs, InfiniBand P-Keys), including high-speed interconnects such as NVLink.
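To illustrate the self-service flow, here is a minimal sketch of how a tenant might request a GPU-enabled VM programmatically. The endpoint, payload fields, and values are hypothetical placeholders for illustration only, not the product's actual API.

```python
import requests

# Hypothetical self-service request for a GPU-enabled VM; the endpoint,
# payload fields, and values below are illustrative, not aarna.ml's API.
API = "https://gpucms.example.com/api/v1"
TOKEN = "tenant-a-api-token"  # placeholder tenant-scoped API token

vm_request = {
    "tenant": "tenant-a",
    "service": "vmaas",          # could equally be "bmaas" or "k8saas"
    "flavor": "8xH100",          # illustrative instance flavor
    "network": {"vpc": "vpc-tenant-a"},
}

resp = requests.post(
    f"{API}/instances",
    json=vm_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. instance ID and provisioning status
```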
Moving beyond infrastructure, aarna.ml enables Neocloud providers to offer high-value Platform-as-a-Service (PaaS) offerings. These include simplified "Easy Workload Setup" for job submission, one-click "Bundle" deployments of common AI applications, and turnkey services such as Fine-Tuning-as-a-Service (FTaaS) and Model-Inference-as-a-Service (MIaaS). The platform integrates seamlessly with the broader MLOps ecosystem, including tools such as Kubeflow and MLflow.
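As one illustration of that MLOps integration, the sketch below logs a fine-tuning run to an MLflow tracking server reachable from a tenant workload. The tracking URI, experiment name, and values are assumptions for the example, not product defaults.

```python
import mlflow

# Hypothetical tracking endpoint exposed to the tenant; URI and experiment
# name are illustrative, not platform defaults.
mlflow.set_tracking_uri("https://mlflow.tenant-a.example.com")
mlflow.set_experiment("finetune-demo")

with mlflow.start_run(run_name="lora-r16") as run:
    # Parameters and metrics for a fine-tuning job submitted through the portal
    mlflow.log_param("base_model", "llama-3-8b")
    mlflow.log_param("lora_rank", 16)
    mlflow.log_metric("train_loss", 1.42, step=100)
    mlflow.log_metric("train_loss", 1.18, step=200)
    print(f"Logged run {run.info.run_id}")
```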
A key focus is maximizing hardware ROI. The GPU CMS enables advanced GPU virtualization (NVIDIA MIG, time-slicing) and intelligent oversubscription to drive utilization rates above 90%. Our deep integration with the NVIDIA ecosystem is a core differentiator, providing orchestration for advanced components like BlueField DPUs and the DOCA software stack to offload networking, storage, and security, freeing up host resources for AI workloads. Furthermore, we offer specialized vertical solutions, such as our AI-RAN Edition, which provides a converged management plane for both telecommunications and AI workloads at the network edge.
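As a concrete illustration of the MIG partitioning that the GPU CMS automates, the sketch below drives the standard nvidia-smi MIG commands from Python. The GPU index and profile ID are assumptions: profile ID 19 corresponds to the 1g.5gb slice on an A100-40GB, and valid profiles differ by GPU model.

```python
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Enable MIG mode on GPU 0 (requires root; the GPU may need a reset afterwards).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles supported by this GPU model.
print(run(["nvidia-smi", "mig", "-lgip"]))

# Carve GPU 0 into three 1g.5gb instances (profile ID 19 on A100-40GB),
# each with a default compute instance (-C).
run(["nvidia-smi", "mig", "-i", "0", "-cgi", "19,19,19", "-C"])

# Verify the resulting GPU instances.
print(run(["nvidia-smi", "mig", "-lgi"]))
```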
Ultimately, aarna.ml empowers organizations to unlock the full potential of their accelerated computing investments, turning complex, capital-intensive hardware into a flexible, secure, and profitable AI cloud.
The platform provides multi-tenancy at the IaaS layer with hard isolation. The software configures multiple elements in the data center in unison to deliver the required level of security. This is achieved across the seven pillars of hard multi-tenancy: Compute (BM or KVM), GPU (BM or MIG), Network (VxLAN/VRF), InfiniBand (P-KEY), Storage (VRF + vendor APIs), VPC WAN Gateway, and NVLink Partitions. Each tenant's data and configurations remain secure even on shared infrastructure, with isolation as strong as physically separated clusters.
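To make the seven pillars concrete, here is a minimal sketch of how a per-tenant isolation specification could be modeled. The field names and example values are illustrative only and do not reflect aarna.ml's actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class TenantIsolationSpec:
    """Illustrative per-tenant settings across the seven isolation pillars.

    Field names and values are hypothetical; they mirror the pillars
    described above, not aarna.ml's actual data model.
    """
    tenant_id: str
    compute: str            # "bare-metal" or "kvm"
    gpu: str                # "bare-metal" or "mig"
    network_vxlan_vni: int  # VxLAN VNI mapped to a tenant VRF
    infiniband_pkey: int    # InfiniBand partition key for the tenant fabric
    storage_vrf: str        # storage VRF, paired with vendor-API tenancy
    vpc_wan_gateway: str    # per-tenant VPC WAN gateway
    nvlink_partition: int   # NVLink partition assigned to the tenant

tenant_a = TenantIsolationSpec(
    tenant_id="tenant-a",
    compute="bare-metal",
    gpu="mig",
    network_vxlan_vni=10042,
    infiniband_pkey=0x8010,
    storage_vrf="vrf-tenant-a",
    vpc_wan_gateway="gw-tenant-a",
    nvlink_partition=3,
)
```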
The platform enables a full spectrum of cloud services, allowing providers to support diverse use cases from a single infrastructure foundation. It supports foundational IaaS such as Bare-Metal-as-a-Service (BMaaS), VM-as-a-Service (VMaaS), and Container-as-a-Service (CaaS). On top of this, the platform facilitates high-value PaaS offerings, including Job Submission, a 3rd-party PaaS Catalog, and integrations for MLOps and Model-as-a-Service with tools such as Red Hat OpenShift AI and Red Hat AI.
The aarna.ml GPU CMS provides a single pane of glass for management and a self-service portal for users. This unified interface serves all user personas, from the Cloud/Enterprise Super Admin managing the entire infrastructure to Tenant Admins managing their organization's resources and Tenant End Users consuming services. Through this portal, users get unified observability and monitoring across hardware, PaaS, and AI/ML jobs.
The platform is designed to cater to different business and security requirements by supporting multiple forms of IaaS multi-tenancy. For Neocloud providers serving external customers, multinational corporations, and enterprises in sensitive industries, strict hard isolation is needed, and each tenant can be provided with a dedicated OpenShift cluster. For internal departments sharing an OpenShift cluster among team members, where security requirements may be less strict, soft isolation can be used instead, via Kubernetes-native techniques such as Namespaces or vClusters. In this way, enterprises can combine hard isolation from the aarna.ml GPU CMS with soft isolation from Red Hat OpenShift.
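For the soft-isolation case, the sketch below shows the common Kubernetes-native pattern of a per-team Namespace with a ResourceQuota capping GPU and CPU requests, using the official Kubernetes Python client. The namespace name and quota values are assumptions for illustration.

```python
from kubernetes import client, config

# Assumes a kubeconfig for the tenant's OpenShift/Kubernetes cluster is available.
config.load_kube_config()
core = client.CoreV1Api()

TEAM_NS = "team-vision"  # illustrative namespace name

# Create the team's namespace (the soft-isolation boundary).
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=TEAM_NS))
)

# Cap the team's GPU and CPU requests inside the shared cluster.
core.create_namespaced_resource_quota(
    namespace=TEAM_NS,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-vision-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.nvidia.com/gpu": "4", "requests.cpu": "64"}
        ),
    ),
)
```

The quota only partitions resources among teams inside the shared cluster; hard isolation of the cluster itself (compute, network, storage, interconnect) remains the responsibility of the GPU CMS, as described above.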