Investing · Technology · April 13, 2026 · 6 min read

The Infrastructure Gap: Why Nebius Has a Lane the Hyperscalers Cannot Close

Everyone assumes AWS and Azure have already won the AI infrastructure race. The assumption is wrong — and understanding why changes how you look at NBIS entirely.


The default assumption in AI infrastructure is that the hyperscalers win. AWS, Azure, Google Cloud — they have the balance sheets, the data centers, the enterprise relationships, and the brand. If you believe scale is the moat, the race looks over.

That assumption deserves more pressure than most investors are applying to it.

Scale is a moat when the product being delivered is relatively stable. When the underlying technology is changing as fast as AI infrastructure is changing right now, scale becomes something else: a constraint. The bigger the installed base, the harder it is to rebuild the stack around what the market actually needs. Hyperscalers are not optimizing for AI-native performance. They are optimizing for margin on general-purpose infrastructure that has been incrementally adapted for AI workloads. Those are different things, and the gap between them is exactly where Nebius is building.

What Nebius Actually Is

Nebius (ticker: NBIS) is not a startup that rented some GPUs and called itself a cloud. It spun out of Yandex — the company that built one of the most technically sophisticated internet infrastructures outside of the American hyperscalers — with the explicit mandate to build AI infrastructure from scratch, for AI, at scale. The engineering culture that produced Yandex's search, maps, and self-driving division came with it.

That origin matters more than it appears on a surface read. Most GPU cloud challengers are either startups without the operational depth to serve serious enterprise workloads, or legacy infrastructure players trying to retrofit data centers built for a different era. Nebius sits in a different category: an organization with hyperscaler-grade engineering that has never had to unlearn anything, because it was built for this moment rather than adapted to it.

The reason AWS and Azure will lose ground in AI infrastructure has nothing to do with price. It is that they were never actually built for this.

The Stack Advantage That Is Easy to Underestimate

What separates purpose-built from adapted infrastructure is not visible in a spec sheet. It shows up in the details of how the stack is architected end to end. Hyperscalers deliver GPU compute as a component layered onto broader infrastructure designed for storage, networking, and general compute workloads. That design produces friction at exactly the points that matter most for AI: data movement, training throughput, inference latency, and the ability to run large distributed workloads without hitting the seams between systems that were not designed to work together.

Nebius built the networking, the storage, the tooling, and the compute layer as a unified system with AI workloads as the design constraint, not an afterthought. That produces real performance differences on the workloads that serious AI builders actually run. And it produces a developer experience that is meaningfully different from wrestling with abstractions that were designed for a different era.

This matters for a reason that goes beyond pure performance. The customers who are building the next generation of AI products are not choosing infrastructure purely on price per GPU-hour. They are choosing infrastructure based on how quickly they can move, how much engineering overhead they spend managing the platform versus building their product, and whether the platform is being actively evolved by people who understand what they are trying to do. Nebius is competitive on all three.

The Market Structure That Makes This Interesting

AI compute demand is growing at a rate that no single provider can fully absorb. That is not a thesis — it is already observable in the capacity constraints, wait times, and pricing dynamics that AI companies have been navigating for the past two years. In that environment, the question is not whether alternatives to the hyperscalers will capture demand. They already are. The question is which alternatives have the operational depth to hold and grow that demand as the market matures.


This is where Nebius's Yandex lineage becomes a genuine differentiator rather than just an interesting backstory. Running infrastructure at the scale Yandex operated at — across multiple continents, under demanding reliability requirements, with engineering teams that built rather than bought most of their core systems — produces an organizational capability that cannot be replicated quickly. Startups chasing the same market are typically 18 to 36 months away from the operational maturity that Nebius already has. By the time they close that gap, the market will have already sorted its preferred vendors.

What the Valuation Conversation Is Missing

When investors look at Nebius, the default frame is comparison to hyperscalers on revenue multiples, which makes it look expensive, or comparison to startup GPU clouds on growth rates, which makes it look like one of many. Both comparisons miss the actual category.

Nebius is being valued as though it is either a small AWS or a large Lambda Labs. It is neither. It is the only at-scale, AI-native infrastructure provider with the engineering depth to serve enterprise workloads, the full-stack design advantage that comes from building rather than adapting, and the financial backing to expand capacity ahead of demand rather than chasing it.

That is a category of one. And markets are notoriously slow to price category-of-one positions until the evidence is so overwhelming that the multiple has already re-rated.

The risks are real. Infrastructure is capital-intensive. Competition is intense and well-funded. Nebius is still in the phase of proving that its engineering advantage translates into durable customer retention at scale. Geopolitical complexity around its Yandex origins creates headline risk that is difficult to model. These are not trivial concerns, and any honest thesis has to account for them.

But the framing that matters is not whether Nebius can beat AWS. It is whether the AI infrastructure market is large enough and fragmented enough to support a purpose-built challenger that operates at genuine scale. The answer to that question looks increasingly clear. What remains uncertain is how long it takes the market to price in what the infrastructure layer is actually building toward.

By the time that becomes consensus, the opportunity will already be smaller than it is today.

Fima Burshtein

Investor, AI builder, and founder of FB Enterprises LLC. Fima combines real-world investing experience with hands-on AI implementation — building the systems that give modern investors a genuine edge.
