There are no compute resources more precious than GPUs today. As the lifeblood of AI, GPUs are in high demand and short supply. Kubernetes (K8s) has become the de facto standard for running modern AI workloads, providing enterprises the flexibility to spin up resources dynamically across clouds and on-prem environments. But as AI workloads grow ever more compute-intensive and “bursty,” organizations face an urgent need: how do you efficiently schedule and scale AI workloads on Kubernetes wherever GPU resources are available, maximizing efficiency and minimizing cost?
That’s the challenge Elotl set out to solve, and why we’re excited to welcome them to the Cisco Investments portfolio.
Our investment in Elotl builds on Cisco’s broader vision to empower enterprises with innovative, scalable, and secure infrastructure for the AI era. With strategic moves like our acquisition of Isovalent—a leader in Kubernetes networking and security—we’re committed to helping organizations seamlessly connect, scale, and protect their AI workloads across any environment.
“When Kubernetes wasn’t even a thing, we had a vision for how containers could reshape infrastructure,” says Madhuri Yechuri, Founder and CEO of Elotl. “We saw the opportunity to build a clean-slate approach to compute management—one that eliminates waste before it’s created and helps organizations focus on innovation instead of manual, error-prone processes.”
Engineering Leadership and Vision Behind Container Infrastructure Evolution
Elotl’s story is rooted in decades of engineering expertise. Yechuri and her team, with deep experience in compute orchestration and scheduling, recognized the seismic shifts in infrastructure management early on.
“Infrastructure evolves in ten-year cadences,” Yechuri explains. “From bare metal servers to virtual machines, and now containers. Having been part of this evolution, we knew that containers would be the next fundamental platform for infrastructure, and we built Elotl to address the unique challenges of this new paradigm.”
Challenges of Multi-Cloud Environments
Those challenges include the rise of multi-cloud environments, where enterprises increasingly rely on multiple cloud providers and on-prem data centers for their workloads.
“What worked in the virtualization era isn’t the best fit for a multi-cloud world,” says Yechuri. “We knew we had to rethink how compute is managed from the ground up.”
Proactive Compute Resource Optimization to Reduce Cloud Costs
Unlike traditional cloud cost management tools that focus on cleaning up waste after it’s created, Elotl takes a proactive approach. “Our philosophy is simple: don’t create wasted resources in the first place,” Yechuri says.
To illustrate her vision, Yechuri uses an analogy: “Imagine paying the electricity bill for an apartment complex where every light in every room is on by default, and you hire someone to walk through and turn off lights in empty rooms. That’s how most compute management works today. At Elotl, we’re saying—why not design the system so the lights come on only when someone enters the room and turn off automatically when they leave? Better yet, why not source electricity for that light from the cheapest provider in real time?”
AI-Driven Dynamic Compute Provisioning
Elotl’s AI-powered compute management platform delivers exactly that.
By dynamically provisioning compute resources based on workload needs, the platform ensures optimal utilization while dramatically reducing costs. This approach has proven especially valuable in the age of AI workloads, where GPUs are both scarce and expensive.
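The idea can be illustrated with a minimal sketch (a hypothetical illustration, not Elotl’s actual implementation): a control loop that allocates nodes only when pods are pending, and always picks the cheapest provider that can satisfy each request — the “lights come on only when someone enters the room” principle from Yechuri’s analogy. All names here (`NodeOffer`, `reconcile`, the provider and GPU labels) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class NodeOffer:
    """A node type a provider can supply right now (hypothetical model)."""
    provider: str       # e.g. "aws", "gcp", "on-prem"
    gpu_type: str       # e.g. "a100"
    hourly_cost: float  # spot/on-demand price in USD per hour

def cheapest_offer(offers, gpu_type):
    """Pick the lowest-cost offer matching the requested GPU type."""
    matches = [o for o in offers if o.gpu_type == gpu_type]
    return min(matches, key=lambda o: o.hourly_cost) if matches else None

def reconcile(pending_pods, offers):
    """One pass of the control loop.

    Zero pending pods means zero new nodes -- no pre-provisioned
    capacity sits idle. Each pending pod is matched to the cheapest
    provider that can run it.
    """
    plan = []
    for pod in pending_pods:
        offer = cheapest_offer(offers, pod["gpu_type"])
        if offer:
            plan.append((pod["name"], offer.provider, offer.hourly_cost))
    return plan
```

In a real cluster this loop would watch the Kubernetes API for unschedulable pods and call cloud provider APIs to create and tear down nodes; the sketch only captures the scheduling decision itself — provision on demand, from the cheapest source, and nothing when idle.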
“AI is built on Kubernetes, and GPUs are its lifeblood,” Yechuri explains. “Enterprises can’t afford to pre-provision GPUs or rely on reactive cost-management solutions. They need a system that’s ready to scale intelligently, on demand, across any cloud or on-prem environment.”
Real-World Success Stories: Cost Savings and Operational Efficiency with Elotl
Elotl’s innovative approach has already delivered measurable success for its customers, who report significant cost savings, operational efficiencies, and reduced infrastructure complexity.
For some customers, the platform has reduced Kubernetes costs by 40% to 60%.
Others have turned to Elotl to streamline operations, freeing up internal teams to focus on innovation rather than infrastructure management. By eliminating the need for manual compute oversight, customers have been able to redeploy resources to higher-value projects, enhancing both productivity and business outcomes.
A Shared Vision for Enterprise Innovation
Elotl’s vision aligns with Cisco’s commitment to empowering enterprises with innovative, scalable, and secure solutions. Our investment in Elotl reflects a shared understanding of the challenges enterprises face as they navigate the complexities of multi-cloud environments and AI-driven workloads.
It also underscores the importance of integrating compute and networking solutions seamlessly.
“You can’t have compute without networking,” Yechuri notes. “We’re enabling enterprises to dynamically scale their AI platforms while ensuring that SLAs are met and resources are used efficiently.”
Cisco’s acquisition of Isovalent in 2024 established a new standard in Kubernetes networking and security, further strengthening our commitment to secure, scalable cloud-native infrastructure. For enterprises running AI workloads on Kubernetes, the combination of advanced workload scheduling from innovators like Elotl and leading networking and security solutions from Cisco enables organizations to securely connect AI workloads across environments, no matter where GPU resources or data reside, while maintaining agility and control.
The Future of K8s Compute Management in the AI Era
As enterprises grapple with the complexities of multi-cloud environments, AI workloads, and skyrocketing infrastructure demands, Elotl is blazing a trail with its innovative approach to compute management. By eliminating waste, optimizing costs, and enabling seamless scalability, the company is empowering organizations to focus on what truly matters: driving innovation and delivering value.
“We’re at an incredibly exciting moment,” Yechuri says. “AI is pushing the boundaries of what’s possible, and we’re ensuring that compute infrastructure can keep up—efficiently, intelligently, and at scale.”
Looking to the future, Yechuri envisions Elotl becoming the standard for compute management in the AI era.
“Just as Kubernetes became the de facto standard for container orchestration, we envision Elotl to be the default compute orchestrator for AI workloads in the genAI era,” she says.