The Year of Compute
Why Compute and Memory are becoming the hottest new Commodity to trade
The Rise of Compute (and Memory)
If 2025 was the year of AI, then 2026 will be the year of Compute and Memory. The core of my argument centers on the foundation of cloud computing and highlights the new “resources” market emerging in the race for AI “domination.”
Traditional large-scale cloud service providers such as AWS, GCP, Oracle, and Microsoft have been unable to keep up with the specific and demanding needs of new start-ups, AI labs, and traditional firms, all clamoring to create the perfect foundational model or breakthrough algorithm while maintaining the lowest prices in the market.
This shortfall has given rise to new GPU wholesale rental brokers and independent data center providers, some names larger than others, such as CoreWeave, Lambda Labs, TensorPool, Hydra Host, Aethir, Crusoe, Vultr, and Vast AI.
The other side of this new market is memory, and it has been heating up just as quickly as compute. On January 10th, Kif Leswing from CNBC wrote, “Prices for computer memory, or RAM, are expected to rise more than 50% this quarter compared to the last quarter of 2025.”
To continue from Leswing’s article:
“Three primary memory vendors — Micron, SK Hynix and Samsung Electronics — make up nearly the entire RAM market, and their businesses are benefitting from the surge in demand.”
“We have seen a very sharp, significant surge in demand for memory, and it has far outpaced our ability to supply that memory and, in our estimation, the supply capability of the whole memory industry,” Micron business chief Sumit Sadana told CNBC this week at the CES trade show in Las Vegas.
For financial engineers, this signals a new (and potentially tradeable) market: the wholesale compute and memory commodities market. Each side of that market carries its own risk:
Independent data centers face shifting customer demand, uncertain utilization levels, and unpredictable future revenue, creating refinancing risk.
AI labs face under-supply, uncertain expenses, and limited access to hardware during critical development phases.
Financial liquidity providers face information asymmetries as spreads widen and liquidity depth shrinks.
Market Structure and Trading Compute/Memory
As many traditional market participants intuitively understand, there is an order to market architecture, starting with the consolidation of information. The difficulty with pricing compute and memory is that each unit is not created equal. A few considerations that affect pricing:
Region
Minimum Term
Hardware configuration
For market participants, this creates new questions for transacting and hedging:
Where do I get the price?
What should I consider when developing hedging strategies?
How do I determine the current and correct price?
Let’s work through these questions as the market currently stands.
Where do I get the price?
As of today, many firms are competing in the data center business and in provisioning GPUs for customers’ development. Whether we call them cloud providers, GPU platforms, GPU brokers, or memory vendors, let’s identify the primary players (in no particular order):
Amazon Web Services
Google Cloud Platform
Microsoft Azure
Oracle
Lambda Labs
CoreWeave
Crusoe
TensorPool
Vast AI
Aethir
Hydra Host
VULTR
Micron
SK Hynix
Samsung Electronics
If you go to any of these vendors directly, you run into the same frictions as traditional asset classes – idiosyncratic differences that may not be efficiently reflected in the real-time relative value pricing of these resources. The price you see on AWS, while easily accessible and deployable, may be 10-20x the price of a disruptor such as CoreWeave or Crusoe for the same SKU. Unfortunately, the SKU is not the only consideration; the deployment region is also a major factor, especially in finance and high-frequency trading, where differences of microseconds can determine the viability of a trading strategy.
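Before any relative value comparison is possible, quotes have to be normalized to a common unit, typically dollars per GPU-hour. Here is a minimal sketch in Python; the provider names are real, but every price, node size, and region below is invented purely for illustration:

```python
# Hypothetical illustration: normalizing whole-node GPU rental quotes into a
# comparable $/GPU-hour figure. All prices, node sizes, and regions here are
# invented; only the provider names are real.
quotes = [
    # (provider, $/hour for the whole node, GPUs per node, region)
    ("AWS",       98.32, 8, "us-east-1"),
    ("CoreWeave", 49.60, 8, "US-East"),
    ("Crusoe",    31.20, 8, "us-central"),
]

def per_gpu_hour(node_price: float, gpus: int) -> float:
    """Normalize a whole-node quote to a per-GPU-hour rate."""
    return node_price / gpus

normalized = {p: per_gpu_hour(price, gpus) for p, price, gpus, _ in quotes}
cheapest = min(normalized, key=normalized.get)

for provider, rate in sorted(normalized.items(), key=lambda kv: kv[1]):
    premium = rate / normalized[cheapest]
    print(f"{provider:10s} ${rate:6.2f}/GPU-hr  ({premium:.1f}x cheapest)")
```

Even this toy version ignores the dimensions that matter most in practice – region, minimum term, interconnect, and hardware configuration – which is exactly why a naive spot comparison understates the difficulty of pricing these resources.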
As we see in most nascent markets, the relationship you have with a vendor may also determine your ultimate spot and future pricing. For example, the synergies with OpenAI, Oracle, and NVIDIA create relationship-based pricing that others do not have equal access to (also, the scale of capital transactions creates asymmetrical access for AI Lab “incumbents”).
In the compute and memory provisioning industries, there is competition for innovation, scale, and dominance in adaptive supply. A key to this market’s stabilization is the consolidation and normalization of the pricing for these resources.
What should I consider when developing hedging strategies?
Decoupling compute and memory from traditional energy markets and commodities is difficult – in energy markets, there is a simple 3-factor heuristic for evaluating energy demand growth (I am stealing this framework from Tyler Rosenlicht, a Senior Vice President at Cohen & Steers - check out his recent Odd Lots episode with Tracy Alloway and Joe Weisenthal here):
Is the global population increasing?
Will the global economy grow?
What is the energy intensity of the economic growth (energy input unit -> economic output unit)?
Rosenlicht’s firm projected that global energy demand would grow from 178,000 terawatt-hours to about 220,000 terawatt-hours by 2040. Rosenlicht admits that a lot has changed in the last 12 months, indicating “that global energy demand assumption for 2040 has risen.”
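For context, that projection implies a surprisingly modest compound growth rate, which is why the recent upward revision is notable. A quick back-of-the-envelope check, assuming a 2025 baseline:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR) of the
# projection cited above, assuming a 2025 baseline and a 2040 horizon.
start_twh = 178_000   # global energy demand today (terawatt-hours)
end_twh = 220_000     # projected demand in 2040
years = 2040 - 2025   # assumed 15-year horizon

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied growth: {cagr:.2%} per year")  # roughly 1.4% per year
```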
This large demand increase makes a traditionally slow and steady sector a very exciting and head-turning industry where multiple strategies emerge, and Rosenlicht lists a few considerations with differing prioritization depending on the market participant:
You need MORE energy
You need a STABLE energy source
You want CLEAN energy
Rosenlicht goes on to say, “If you’re a data center CEO… you kind of wake up with night sweats about the power going out, right?” and Alloway references the recent CME outage on November 28th, 2025, due to a data center malfunction that halted trading and affected many market participants (article link here).
Rosenlicht also indicates that many data centers are installing natural gas pipelines directly into their facilities, further intertwining traditional commodities with compute and memory.
If you are a market participant, all of these webs of interconnectedness make hedging your risk even more important today – giving rise to interesting questions about the future, such as:
Will AI labs have trading desks for commodities and utilities?
Will new instruments be created to further securitize and transfer risk?
How do I determine the current and correct price?
For the final question, the most pertinent market infrastructure issue is that, as far as I can tell, industry standards and pricing are set by the providers, and there isn’t much customers can do about it besides sourcing offers from multiple providers.
Oftentimes, providers base prices on their own demand generation, leading to $10+ price differences between one provider’s GPU and another’s for the same rental time and SKU.
This emphasizes the need for a standard pricing reference – that’s when I found what’s potentially the only company working to solve this problem: Ornn AI.
Ornn AI is seeking to normalize pricing at the SKU level (as much as possible with idiosyncratic instruments). From Yahoo’s coverage of Ornn AI’s quick fundraising, it appears they offer an index product that can serve as an industry reference point by consolidating and normalizing post-trade transaction data (link).
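To make the idea of a transaction-based reference point concrete, here is a generic sketch of consolidating post-trade prints into a per-SKU index level. To be clear, this is a standard volume-weighted construction, not Ornn AI’s actual methodology, and every trade below is invented:

```python
# A generic sketch of a transaction-based reference price: a volume-weighted
# average of post-trade prints, grouped by SKU. This is NOT Ornn AI's actual
# methodology; all trade prints below are invented for illustration.
from collections import defaultdict

# (sku, price per GPU-hour, GPU-hours traded) -- hypothetical prints
trades = [
    ("H100-SXM", 2.10, 4_000),
    ("H100-SXM", 1.95, 10_000),
    ("H100-SXM", 2.30, 1_500),
    ("A100-80G", 1.20, 6_000),
    ("A100-80G", 1.05, 9_000),
]

def vwap_index(prints):
    """Volume-weighted average price per SKU from post-trade data."""
    notional = defaultdict(float)
    volume = defaultdict(float)
    for sku, price, qty in prints:
        notional[sku] += price * qty
        volume[sku] += qty
    return {sku: notional[sku] / volume[sku] for sku in notional}

for sku, level in vwap_index(trades).items():
    print(f"{sku}: ${level:.4f}/GPU-hr")
```

The hard part in the real market is everything this sketch assumes away: deciding which idiosyncratic prints are comparable enough to pool into the same SKU bucket in the first place.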
Here’s an article from a few days ago highlighting the company and its new partnership with Architect to bring crypto-style perpetual futures to the space, offering market participants even more hedging instruments (link).
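For readers unfamiliar with the instrument: a crypto-style perpetual future has no expiry and instead stays tethered to its index through periodic funding payments between longs and shorts. Below is a minimal sketch of that generic mechanism, not Architect’s actual contract design; all numbers are hypothetical:

```python
# How a crypto-style perpetual future stays anchored to an index: longs and
# shorts exchange a periodic funding payment proportional to the gap between
# the perp's mark price and the underlying index. This is the generic
# mechanism only, not Architect's actual contract design; numbers are invented.
def funding_payment(position_gpu_hours: float, mark: float, index: float,
                    interval_fraction: float = 1 / 3) -> float:
    """Funding paid BY longs TO shorts for one interval (negative = longs receive).

    interval_fraction: share of the daily premium applied per interval,
    e.g. 1/3 for the common 8-hour funding cycle.
    """
    funding_rate = (mark - index) / index * interval_fraction
    return position_gpu_hours * index * funding_rate

# Long 10,000 GPU-hours; the perp is trading rich to the compute index,
# so the long pays funding, pulling the perp back toward the index.
pay = funding_payment(10_000, mark=2.10, index=2.00)
print(f"Long pays ${pay:.2f} this interval")  # positive: perp above index
```

The appeal for a compute hedger is that this structure transfers price risk without ever requiring physical delivery of GPU capacity.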
While I believe Ornn AI and Architect are taking the first steps toward creating hedging and liquidity in the space, there is more work to be done.
I can imagine a world where liquidity depth is sufficient to require institutional-level pre-trade axe data. I believe this is the natural progression, and further development in the space will hopefully lead to greater transparency, liquidity, and trading.
From the institutional world, we know that with idiosyncratic instruments such as bonds, TRACE’s post-trade transparency was and is not sufficient for institutional traders, yet that gap never translated into an industry standard for pre-trade information in Fixed Income.
What do you expect to see here?
As of now, there’s no call for industry standards in trading compute and memory; there is only an institutional need to first understand how to transact.
The next question will be: how many institutional investors, hedge funds, and AI labs will clamor into this trading space this year – and how will they interact with traditional market participants and guide the industry for the years to come?
Regardless of the answer, the trajectory is clear: the Compute and Memory space will continue to be the hot, new, growing industry everyone will be dying to figure out!
I expect commodity hedge funds to understand it first and exploit the industry, while AI labs lag behind trying to figure out how to retain and recruit trading talent.
For all participants, I wish the best of luck and can only say: Welcome to the Year of Compute!