GPUs have no single "price": the four major indexes disagree, and the compute market is more chaotic than you think

Compute power is being stockpiled and sublet, just like short-term rental apartments.

Author: David Lopez Mateos

Compiled by: Deep Tide TechFlow

Deep Tide Briefing: The media likes to summarize GPU pricing with a single number, but in reality, on the Bloomberg terminal the quotes from four index providers deviate from one another by more than $2, and they disagree on both direction and pace. The author, David Lopez Mateos, is the founder of the GPU compute trading platform Compute Desk. Using transaction-level data, he breaks down the true pricing structure of the H100 and B200, revealing a raw market with no consensus benchmark, no standardized contracts, and no forward curve: one where compute is stockpiled and sublet like short-term rental apartments.

Media headlines will make you think GPU compute power prices are surging. That narrative is comfortable; it fits perfectly into the macro framework of “tight supply + an AI demand bottomless pit,” and it even implies something reassuring: we have a well-functioning market where price signals are clear and easy to read.

But we don't. That narrative rests almost entirely on a single index, and the assumption baked into it should not be taken for granted: that the GPU leasing market is efficient enough for one number to represent its global state.

Tight supply is real, but the kind of tightness different people feel is completely different—depending on who you are, where you are, what contract you’re trading, and what compute asset you hold. In the face of this kind of opacity, the market’s natural response is not orderly price discovery, but hoarding: locking in GPU hours you may not need yet, because you’re not sure whether you’ll be able to buy them next month at any price. Where there is hoarding and no transparent benchmark, fragmented secondary markets emerge. At Compute Desk, we have already enabled tenants to sublet their clusters like apartments during major sporting events. This is not a hypothetical—it’s happening.

Indexes don’t converge

In mature commodity markets, indexes built from different methodologies tend to converge. Brent crude and WTI have a spread of a few dollars due to geography and crude quality, but they move in sync in terms of direction (Figure 1). This kind of convergence is a hallmark of efficient markets.

Caption: Comparison of Brent and WTI crude oil price trends—highly consistent directionality

The Bloomberg terminal now lists three GPU pricing index providers: Silicon Data, Ornn AI, and Compute Desk. SemiAnalysis has just released a fourth: an H100 one-year contract price index built from a survey of more than 100 market participants. Silicon Data and Ornn publish daily H100 leasing indexes, Compute Desk aggregates data at the Hopper architecture level, and SemiAnalysis captures negotiated contract prices rather than list or scraped prices. Different methodologies, different frequencies, and different views into the same market. Put them side by side and the disagreements are obvious (Figure 2).

Caption: Four GPU indexes overlaid for comparison—both the price levels and the trends are clearly divergent
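What "convergence" means can be made concrete with a minimal sketch. The daily quotes below are invented for illustration (loosely inspired by the levels cited later in the article), not actual index data; the point is that converged indexes, like Brent and WTI, should show both a small spread and high directional agreement:

```python
# Hypothetical daily quotes (USD per GPU-hour) for two H100 indexes.
# Illustrative numbers only, not real index data.
index_a = [3.00, 3.02, 3.01, 3.05, 3.50]  # e.g. an on-demand index
index_b = [2.00, 2.10, 2.20, 2.45, 2.64]  # e.g. a smoother daily index

# Absolute spread per day: in a converged market this stays small
# and stable; here a persistent ~$1 gap remains.
spreads = [round(a - b, 2) for a, b in zip(index_a, index_b)]

def sign(x):
    return (x > 0) - (x < 0)

# Directional agreement: do day-over-day moves share a sign?
moves_a = [sign(b - a) for a, b in zip(index_a, index_a[1:])]
moves_b = [sign(b - a) for a, b in zip(index_b, index_b[1:])]
agreement = sum(m == n for m, n in zip(moves_a, moves_b)) / len(moves_a)

print(spreads)    # the day-by-day level gap between the two indexes
print(agreement)  # fraction of days the two indexes move the same way
```

Brent and WTI would score near 1.0 on the agreement measure with a tight spread; the GPU indexes in Figure 2 fail on both counts.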

So where exactly did the price increases happen?

Using Compute Desk data, we can break down H100 price movements by provider type and contract structure, and overlay Silicon Data’s SDH100RT index (Figure 3). All indicators show prices are rising, but their starting points and magnitudes differ dramatically depending on the index and contract type.

Caption: H100 price trends split by contract type overlaid with the SDH100RT index

Compute Desk's neocloud data tells a more specific story than the aggregate indexes. On-demand pricing stays relatively stable through the winter at around $3.00 per hour, then spikes sharply to $3.50 in March. Spot pricing is noisier and lower, with only a modest upward drift in March. Silicon Data's SDH100RT shows a smoother, steady rise over the same period, from $2.00 to $2.64. The two indexes sit at different price levels throughout, and they describe the dynamics differently as well: Compute Desk records a jump in March, while Silicon Data shows a slow climb.

One-year reserved pricing was basically flat before February, then surged at the end of March from $1.90 to $2.64: not a gradual catch-up, but a sudden repricing. This looks more like providers tightening the on-demand market and then adjusting contract rates in one batch, rather than demand continuously feeding through the contract structure.

The March story for B200 is even more intense (Figure 4). Compute Desk’s on-demand index exploded from $5.70 to above $8.00 within a few weeks. Silicon Data’s SDB200RT jumped from $4.40 to $6.11, then fell back to $5.47. Both indexes capture this run, but their starting points differ by more than $2, and the shapes of the rise and subsequent pullback are also different. With B200, there is less than five months of data, fewer providers, and a larger price spread—so the two indexes are observing the same event through very different lenses.

Caption: On-demand versus reserved price trends for B200—Compute Desk and Silicon Data data overlaid

An infrastructure problem, not just a regional difference

Commodity markets have a basis differential. Appalachian natural gas is the textbook case: massive reserves sit atop structurally constrained pipeline capacity. Utilization rates in the Pennsylvania–Ohio corridor often exceed 100%, and new projects like Borealis Pipeline don’t come online until the late 2020s.

The GPU market has something similar. An H100 in Virginia and an H100 in Frankfurt are not the same economic good. But regional differences alone can't explain why indexes measuring the same market disagree so widely. The misalignment in the GPU market runs deeper than the Appalachian gas problem. With natural gas, the issue is a single missing link: pipeline capacity connecting supply to demand. In the compute market, the infrastructure gap exists on both sides. Physical infrastructure, meaning the interoperable networking and consistency needed for reliable compute delivery, predictable provisioning, and predictable availability, is immature and sometimes simply doesn't work. Financial infrastructure, meaning standardized contracts, transparent benchmarks, and arbitrage mechanisms that could compress spreads despite physical differences, does not exist either.

The data tells a story. The lived experience of trying to procure compute in early 2026 tells an even more painful one. On-demand capacity for every GPU type is, in practice, fully sold out. Getting 64 H100s is hard: on Compute Desk, 90% of providers show zero available on-demand cluster capacity, and the reserved market isn't much better. In a well-functioning market, this level of scarcity would have already pushed prices to a new equilibrium. In reality, it hasn't, which suggests providers themselves lack the real-time pricing intelligence to adjust. Prices are rising, but too slowly to clear the market. The gap between list prices and actual willingness to pay is being filled by hoarding, subletting, and informal secondary-market trading.

What needs to change

The current GPU compute market has seven core problems:

No consensus benchmark. Multiple indexes coexist, using different methodologies, and their conclusions conflict with one another.

Aggregation narratives conceal structure. A single number like “H100 price” masks huge differences across provider types and contract tenors.

Lack of trade-level data. In a bilateral market, the gap between list prices and actual transaction prices is very large.

No contract standardization. Most GPU leasing is bilateral negotiation with different terms. Shorter, more standardized contract tenors can improve liquidity and price discovery.

Delivery quality isn’t guaranteed. Differences in interconnect topology, CPU pairing, network stack, and runtime can be substantial. Before making commitments, buyers need to know what quality of compute they are actually purchasing.

Contracts lack liquidity. If demand changes during the reserved period, choices are very limited: either absorb the cost or do an informal sublet. The market needs infrastructure to transfer or resell already-committed compute, so capacity flows to the people who need it most.

No forward curve. Without forward pricing, you can’t hedge. This is why lenders apply a 40%-50% discount to GPU collateral, keeping financing costs high.
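The haircut in that last point translates directly into a financing gap. As a minimal sketch with illustrative numbers (the $10M fleet value and the 45% midpoint haircut are assumptions; only the 40%-50% range comes from the text):

```python
# Illustrative only: fleet value and haircut midpoint are assumed,
# not taken from the article's data.
gpu_fleet_value = 10_000_000  # market value of the GPUs, USD
haircut = 0.45                # lender discount, midpoint of the 40-50% range

# With no forward curve to hedge price risk, lenders size loans
# against the discounted collateral value rather than market value.
max_loan = gpu_fleet_value * (1 - haircut)
equity_gap = gpu_fleet_value - max_loan

print(max_loan)    # credit extended against $10M of hardware
print(equity_gap)  # capital the operator must raise some other way
```

A forward curve that let lenders hedge the resale value of GPU hours would shrink that haircut, and with it the equity gap operators have to fund at higher cost.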

Building a normally functioning market for one of the most important commodities of this century can't be done by pushing on a single front. Measurement, standardization, contract structure, delivery quality, and liquidity must move forward together; until they do, no one can truly say what an hour of GPU compute is worth.
