5 posts tagged with "Storage"

Persistent storage, data performance, CSI drivers, and Azure storage integrations for AKS.

Azure Container Storage v2.1.0: Now GA with Elastic SAN

· 7 min read
Saurabh Sharma
Product Manager for Cloud Native Storage initiatives

Stateful workloads on Kubernetes continue to demand not only faster performance but also larger scale and more streamlined operational simplicity. Azure Container Storage v2.1.0 is now generally available with three headline improvements:

Elastic SAN (ESAN) integration: consolidate hundreds of persistent volumes under a single managed ESAN, bypassing VM disk-attachment limits

Modular on-demand installation: deploy only the components your chosen storage type requires, reducing install time and cluster footprint

Node selector support: control where Azure Container Storage components run so you can optimize resource usage across node pools
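To make the Elastic SAN integration concrete, here is a minimal sketch of what provisioning an ESAN-backed pool can look like. It assumes the `StoragePool` custom resource shape from earlier Azure Container Storage releases (`containerstorage.azure.com/v1`, `acstor` namespace); field names and API versions may differ in v2.1.0, so treat this as illustrative rather than authoritative.

```yaml
# Hypothetical StoragePool manifest: a single managed Elastic SAN backing
# many persistent volumes, so each PV no longer consumes a VM disk-attach slot.
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: esan-pool          # illustrative name
  namespace: acstor
spec:
  poolType:
    elasticSan: {}         # select Elastic SAN as the backing storage type
  resources:
    requests:
      storage: 1Ti         # total pool capacity carved into PVs on demand
```

Once the pool is ready, workloads consume it through an ordinary PersistentVolumeClaim against the storage class the pool generates, which is what lets hundreds of volumes share one ESAN instead of individual managed disks.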

Announcing Azure Container Storage v2.0.0: Transforming Performance for Stateful Workloads on AKS

· 9 min read
Saurabh Sharma
Product Manager for Cloud Native Storage initiatives

Introduction

Last year we announced the general availability of Azure Container Storage, the industry’s first platform-managed, container-native storage service in the public cloud. This solution delivers high-performance, scalable storage that can effectively meet the demands of containerized environments. Today we are announcing a new v2.0.0 release of Azure Container Storage for Azure Kubernetes Service (AKS). It builds on the foundation of the previous release and takes it further by focusing on higher performance, lower latency, efficient resource management, and a Kubernetes-native user experience for managing stateful workloads on AKS.

From 7B to 70B+: Serving giant LLMs efficiently with KAITO and ACStor v2

· 6 min read
Sachi Desai
Product Manager for AI/ML, GPU workloads on Azure Kubernetes Service
Francis Yu
Product Manager focusing on storage orchestration for Kubernetes workloads

Extra-large language models (LLMs) are quickly evolving from experimental tools into essential infrastructure. Their flexibility, ease of integration, and growing range of capabilities are positioning them as core components of modern software systems.

Massive LLMs power virtual assistants and recommendations across social media, UI/UX design tooling and self-learning platforms. But how do they differ from your average language model? And how do you get the best bang for your buck running them at scale?

Let’s unpack why large models matter and how Kubernetes, paired with NVMe local storage, accelerates intelligent app development.