
Optimizing RDMA performance for AI workloads on AKS with DRANET

· 10 min read
Anson Qian
Software Engineer at Azure Kubernetes Service
Michael Zappa
Software Engineer for Azure Container Networking

RDMA (Remote Direct Memory Access) is critical for unlocking the full potential of GPU infrastructure, enabling the high-throughput, low-latency GPU-to-GPU communication that large-scale AI workloads demand. In distributed training, collective operations like all-reduce and all-gather synchronize gradients and activations across GPUs — any communication bottleneck stalls the entire training pipeline. In disaggregated inference, RDMA provides the fast inter-node transfers needed to move KV-cache data between prefill and decode phases running on separate GPU pools.
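To make the collective-operation bottleneck concrete, here is a pure-Python sketch of the ring all-reduce pattern that libraries like NCCL implement over RDMA (this is an illustration of the algorithm only, not NCCL's actual implementation). Each rank's bandwidth cost is spread evenly around the ring, which is why a single slow link stalls every rank:

```python
def ring_allreduce(tensors):
    """Illustrative ring all-reduce: one gradient vector per rank;
    after the collective, every rank holds the element-wise sum.
    Vector length must be divisible by the number of ranks."""
    n = len(tensors)
    size = len(tensors[0])
    assert size % n == 0, "vector length must split into n equal chunks"
    c = size // n
    # Split each rank's vector into n chunks.
    chunks = [[t[k * c:(k + 1) * c] for k in range(n)] for t in tensors]

    # Phase 1: reduce-scatter. After n-1 steps, rank i holds the
    # fully reduced chunk (i + 1) % n.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            k = (i - s) % n  # chunk rank i forwards this step
            chunks[dst][k] = [a + b for a, b in zip(chunks[dst][k], chunks[i][k])]

    # Phase 2: all-gather. The fully reduced chunks circulate the ring.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            k = (i + 1 - s) % n  # reduced chunk rank i forwards this step
            chunks[dst][k] = list(chunks[i][k])

    return [[x for k in range(n) for x in chunks[r][k]] for r in range(n)]
```

Every rank sends and receives 2(n-1)/n of the tensor in total, so the collective runs at the speed of the slowest link in the ring.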

[Diagram: RDMA for AI workloads on GPU infrastructure]

DRANET is an open-source Dynamic Resource Allocation (DRA) network driver that discovers RDMA-capable devices, advertises them as ResourceSlices, and injects the allocated devices into each pod and container. Combined with the NVIDIA GPU DRA driver, it enables topology-aware co-scheduling of GPUs and NICs for high-performance AI networking on Kubernetes.
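As a rough sketch of what consuming a DRANET-advertised device looks like, a pod requests an RDMA NIC through a DRA ResourceClaim (the API version, device class name, and image below are assumptions for illustration; check the DRANET documentation for the exact names in your install):

```yaml
# Claim one RDMA-capable NIC from the DRANET driver's device class
# (deviceClassName "dranet" is an assumed name, not verified).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: rdma-nic
spec:
  devices:
    requests:
    - name: nic
      deviceClassName: dranet
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
  - name: trainer
    image: my-training-image   # placeholder image
    resources:
      claims:
      - name: rdma-nic         # container consumes the claim
  resourceClaims:
  - name: rdma-nic
    resourceClaimName: rdma-nic
```

Once scheduled, DRANET injects the allocated NIC into the pod's network namespace, and the scheduler can place the pod so the claimed NIC sits close to the claimed GPU on the PCIe topology.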

Scaling Anyscale Ray Workloads on AKS

· 7 min read
Anson Qian
Software Engineer at Azure Kubernetes Service
Bob Mital
Principal Product Manager at Microsoft Azure
Kenneth Kilty
Technical Program Manager for Cloud Native Platforms

This post focuses on running Anyscale's managed Ray service on AKS, using the Anyscale Runtime (formerly RayTurbo) for an optimized Ray experience. For open-source Ray on AKS, see our Ray on AKS overview.

Ray is an open-source distributed compute framework for scaling Python and AI workloads from a laptop to clusters with thousands of nodes. Anyscale provides a managed ML/AI platform and an optimized Ray runtime with better scalability, observability, and operability than running open-source KubeRay—including intelligent autoscaling, enhanced monitoring, and fault-tolerant training.

As part of Microsoft and Anyscale's strategic collaboration to deliver Azure-native distributed AI/ML computing at scale, we've been working closely with Anyscale to enhance the production-readiness of Ray workloads on Azure Kubernetes Service (AKS) in three critical areas:

  • Elastic scalability through multi-cluster, multi-region capacity aggregation
  • Data persistence with unified storage across the ML/AI development and operations lifecycle
  • Operational simplicity through automated credential management with service principals

Whether you're fine-tuning models with DeepSpeed or LLaMA-Factory or deploying inference endpoints for LLMs ranging from small to large-scale reasoning models, Anyscale on AKS delivers a production-grade ML/AI platform that scales with your needs.

Performance Tuning AKS for Network-Intensive Workloads

· 6 min read
Anson Qian
Software Engineer at Azure Kubernetes Service
Alyssa Vu
Software Engineer at Microsoft

As more intelligent applications are deployed and hosted on Azure Kubernetes Service (AKS), network performance becomes increasingly critical to ensuring a seamless user experience. For example, a chatbot server running in an AKS cluster needs to handle high volumes of network traffic with low latency while retrieving contextual data, such as conversation history and user feedback, from a database or cache, and interacting with an LLM (Large Language Model) endpoint through prompt requests and streamed inference responses.

In this blog post, we share how we conducted simple benchmarks to evaluate and compare network performance across various VM (Virtual Machine) SKUs and series. We also provide recommendations on key kernel settings to help you explore the trade-offs between network performance and resource usage.
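To give a flavor of the kernel settings involved, a node-level tuning file might look like the sketch below. The values are illustrative assumptions for a high-throughput server, not the post's measured recommendations or AKS defaults; the right numbers depend on your VM SKU and traffic profile:

```
# /etc/sysctl.d/99-network-tuning.conf -- illustrative values only
net.core.rmem_max = 16777216          # max socket receive buffer (bytes)
net.core.wmem_max = 16777216          # max socket send buffer (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216   # min/default/max TCP receive buffer
net.ipv4.tcp_wmem = 4096 65536 16777216   # min/default/max TCP send buffer
net.core.somaxconn = 4096             # accept-queue backlog for busy listeners
net.core.netdev_max_backlog = 8192    # packets queued per CPU before the stack
```

Larger buffers raise throughput on high-bandwidth links at the cost of per-connection memory, which is exactly the performance-versus-resource trade-off the benchmarks explore.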