The silicon guilds and their dubious fortunes¶
Or how to separate the genuinely rich from the merely loud
The chip market resembles nothing so much as a particularly rowdy Ankh-Morpork tavern on payday: everyone’s shouting about their prospects, coins are changing hands at alarming speeds, and somewhere in the corner a few sober wizards are quietly counting the real money. The main characters in this farce:
NVIDIA and the art of printing money¶
Market position: ~90% of AI chip market share
The pitch: We’re not just selling GPUs, we’re selling the future
The reality: They’re absolutely printing money, but can it last?
The numbers that make eyes water¶
NVIDIA’s transformation from graphics card peddler to AI kingmaker represents one of the most spectacular pivots in tech history. The H100 chip alone generated an estimated €35 billion in 2023, with projections suggesting it could contribute over €86 billion in a single fiscal year. That’s more than many Fortune 500 companies’ entire revenue.
In Q3 fiscal 2025, NVIDIA reported revenue of €32.5 billion, up 17% quarter-over-quarter and 94% year-over-year. The company’s market capitalisation ballooned past €1.85 trillion, making it one of the most valuable companies on Earth. For context, this is a company that makes expensive calculators.
The profit margins that would make Vetinari raise an eyebrow¶
Financial analysis suggests NVIDIA earns a markup of approximately 823% on each H100 GPU, with street prices around €23,000-€27,500 against estimated manufacturing costs of €3,100 per chip. Now, those cost estimates may not capture the full R&D burden or the engineering army required to design such chips, but even being generous, these are margins that would make a medieval guild master weep with envy.
Mind you, that’s assuming NVIDIA are being entirely honest about their costs. In Ankh-Morpork, this sort of profit margin usually involves either a monopoly, a protection racket, or selling something that’s 90% air with a fancy label. NVIDIA appear to have achieved all three simultaneously.
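For anyone inclined to check the guild ledgers themselves, here is a minimal sketch of the markup arithmetic, using only the estimates quoted above. The exact percentage depends entirely on which price and cost figures you plug in, which is why published estimates vary.

```python
# Rough markup arithmetic for the H100 figures quoted above.
# All inputs are third-party estimates, not NVIDIA-disclosed numbers.

def markup_pct(street_price: float, unit_cost: float) -> float:
    """Markup over estimated unit cost, expressed as a percentage."""
    return (street_price - unit_cost) / unit_cost * 100

estimated_unit_cost = 3_100          # EUR, estimated per-chip cost
street_prices = (23_000, 27_500)     # EUR, reported street price range

for price in street_prices:
    print(f"€{price:,} street price -> ~{markup_pct(price, estimated_unit_cost):.0f}% markup")
# Prints roughly 642% and 787%; headline figures such as 823% rest on
# slightly different price and cost assumptions.
```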
The customer concentration problem, or how to build a house of cards¶
Here’s where it gets interesting. Approximately 46% of NVIDIA’s €27.8 billion Q2 revenue came from just four customers: Microsoft, Meta, Amazon, and Google between them account for nearly half of NVIDIA’s business. In 2023, Microsoft and Meta each received an estimated 150,000 H100 GPUs, far ahead of other buyers.
What happens when these hyperscalers decide they’ve bought enough? Or worse, when they start building their own chips? It’s rather like being the only umbrella seller in Ankh-Morpork, splendid right up until everyone realises they can just stay indoors.
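To put that concentration into coin, here is a quick sketch using the figures above; the even split across the four buyers is purely illustrative, since the actual mix between them isn’t disclosed.

```python
# Customer-concentration arithmetic from the figures quoted above.
# The even split across the four hyperscalers is an illustrative assumption.

q2_revenue_eur = 27.8e9   # quarterly revenue quoted above
top_four_share = 0.46     # share attributed to the top four customers

top_four_revenue = q2_revenue_eur * top_four_share
print(f"Top four customers: ~€{top_four_revenue / 1e9:.1f} billion in one quarter")
print(f"Average per customer: ~€{top_four_revenue / 4 / 1e9:.1f} billion per quarter")
# Roughly €12.8 billion between four buyers, about €3.2 billion each:
# lose one and a double-digit slice of the quarter goes with it.
```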
Supply constraints, or manufactured scarcity¶
The waiting period for H100 chips stretched as long as a year during peak demand. NVIDIA maintains this is a genuine supply constraint: complex chips on cutting-edge TSMC nodes don’t grow on trees. Sceptics note that keeping demand perpetually ahead of supply is excellent for maintaining premium pricing.
It brings to mind Cut-Me-Own-Throat Dibbler’s sausage cart. The queue’s always longest when there are only three sausages left, and somehow there are always only three sausages left.
The Blackwell transition, or how to make expensive look cheap¶
The GB200 NVL72 system promises AI inference performance 30 times faster than equivalent H100 systems, with GB200 GPUs expected to sell for €27,500-€37,000 each. The message is clear: if you thought H100s were expensive, wait until you see what’s next. And the one after that.
This is the Ankh-Morpork Boots Theory in reverse. Rich customers can afford to spend €37,000 on a chip that’ll be obsolete in two years, poor customers are stuck with whatever they can afford, and NVIDIA laughs all the way to the bank.
NVIDIA is riding the wave of a lifetime, but this level of dominance invites both competition and customer resentment. The CUDA moat remains formidable, but every moat can be bridged with enough gold. And there’s a lot of gold sloshing around AI right now.
AMD, or how to be second best and hope that’s enough¶
Market position: Mid-single digit data centre GPU share, climbing fast
The pitch: We’re the alternative you’ve been looking for
The reality: Technically impressive, software ecosystem still catching up
The MI300 momentum and the slow climb upwards¶
AMD’s MI300 series represents their most serious challenge to NVIDIA yet. In Q2 2024, AMD sold over €920 million worth of MI300X chips, leading the company to raise full-year data centre GPU revenue guidance to over €4.2 billion. That’s up from earlier estimates of €3.7 billion, suggesting genuine market pull.
Among early adopters, AMD is gaining significant share. At Meta, AMD accounted for 43% of GPU shipments in 2024, with 173,000 units versus NVIDIA’s 224,000. At Microsoft, roughly one in six of the 581,000 GPUs purchased came from AMD. These aren’t trivial numbers.
They’re also not exactly world-conquering numbers. AMD is in the position of the second-best pickpocket in Ankh-Morpork: impressive technical skills, steady income, but everyone knows who really runs the guild.
The memory advantage that actually matters¶
The MI300X features 192GB of HBM3 memory with 5.3TB/s peak bandwidth, compared to the H100’s 80GB and 3.35TB/s. For large language models that increasingly bump against memory constraints, this is a genuine technical advantage. The MI300X can run a 70-billion parameter model on a single chip, something the H100 cannot do.
In plain language, AMD’s chip can remember more things at once. For AI models that need to juggle enormous amounts of information, this is rather like having a larger workshop. You can fit more projects on the bench without constantly shuffling things about.
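A rough sketch of the memory arithmetic behind that single-chip claim, assuming 16-bit weights and ignoring the extra headroom real deployments need for the KV cache and activations:

```python
# Back-of-the-envelope memory arithmetic behind the single-chip claim.
# Assumes FP16/BF16 weights (2 bytes per parameter); real deployments also
# need headroom for the KV cache and activations, so treat this as a floor.

def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the model weights, in gigabytes."""
    return params_billion * 1e9 * bytes_per_param / 1e9

model_size_b = 70  # the 70-billion parameter model mentioned above
needed = weights_gb(model_size_b)

print(f"~{needed:.0f} GB just for the weights")        # ~140 GB
print(f"Fits on MI300X (192 GB): {needed <= 192}")     # True
print(f"Fits on H100 (80 GB):    {needed <= 80}")      # False
```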
The CUDA problem nobody wants to talk about¶
Here’s the uncomfortable truth. The MI300X reportedly suffers from poor internal testing and requires considerable tuning work to use effectively, with numerous software bugs getting in the way of a good user experience. AMD’s ROCm software stack, while improving, remains nowhere near CUDA’s maturity.
Developers face a chicken-and-egg dilemma. They’re hesitant to invest in a platform with limited adoption, but the platform can’t gain adoption without their support. AMD can build the best hardware in the world; if the software experience frustrates developers, they’ll stick with what works.
It’s rather like opening a lovely new tavern with excellent ale, only nobody comes because all their friends drink at the other place. The ale doesn’t matter if there’s no atmosphere.
Market share reality check¶
AMD is approaching 10% data centre GPU share less than three quarters after the MI300 launch, which is simultaneously impressive (given NVIDIA’s dominance) and sobering (the remaining ~90% is still a massive gap). A glide path to 10-12% market share by 2027 might be conservative, but it’s also not exactly world-conquering.
The long game and desperate customers¶
AMD has one advantage NVIDIA can never claim: they’re not NVIDIA. Every hyperscaler wants an alternative supplier to reduce dependency and improve negotiating leverage. This thirst for an NVIDIA alternative is driving adoption even before AMD’s software catches up. The question is whether AMD can capitalise on this goodwill before patience runs out.
It’s a bit like being the second clacks company in town. Everyone wants you to succeed because they’re tired of the monopoly’s prices, but you still need to actually deliver messages reliably.
AMD is making genuine progress with impressive hardware, but the software gap remains real. They’re capturing share from customers actively seeking alternatives, but convincing developers to abandon CUDA requires more than better specifications. Still, in a market this hot, even 10% is serious money.
Intel, or how the mighty are currently falling¶
Market position: Declining in all major segments
The pitch: We’ll be the number two foundry by 2030
The reality: Bleeding billions whilst trying to reinvent themselves
The scale of the problem, which is enormous¶
Intel’s foundry business incurred operating losses of €6.5 billion in 2023 and an additional €5.4 billion in Q3 2024. The company’s stock shed over 60% of its value in 2024, its worst year on record. For context, Intel was once the most valuable chip company on Earth. Now it’s fighting for survival.
CEO Pat Gelsinger was ousted in December 2024, with Lip-Bu Tan taking over in March 2025, adding leadership instability to Intel’s many challenges. When you’re trying to execute a multi-year manufacturing turnaround, frequent leadership changes are not helpful.
This is rather like watching The Patrician’s Palace undergo a coup whilst simultaneously trying to renovate the plumbing. The metaphor perhaps breaks down because Vetinari would never allow such chaos, but you take the point.
The foundry gambit, or betting the farm on an inside straight¶
Intel’s IDM 2.0 strategy, keeping its own chip design business while opening its fabs to outside customers in direct competition with TSMC, represents an enormous bet. As of mid-2025, Intel had not secured any significant external foundry customers for its nodes, and prospects for the Intel 14A node remain uncertain.
Think about that. Intel is spending tens of billions to build foundry capacity, and nobody wants to use it. That’s not a business model, that’s a very expensive hobby.
It’s rather like building an enormous bridge to nowhere and then standing at one end shouting about how marvellous the bridge is whilst everyone else uses the perfectly good bridge next door.
The market share slide, steady and remorseless¶
Over the past five years, Intel’s CPU market share has slipped from approximately 97% down to 75%, with AMD capturing the rest. In the data centre, formerly Intel’s crown jewel and cash cow, the erosion continues. Intel’s x86 server processor share declined to 72.7% in Q2 2025 from 94.2% in Q2 2020.
Meanwhile, Intel did not rank in the top ten foundries in Q3 2024, despite massive investments. TSMC commands 62% of the foundry market, Samsung has 10%, and Intel has, well, a rounding error.
The AI opportunity they’re somehow missing¶
Intel has Gaudi 2 and Gaudi 3 AI accelerators. They’re reportedly competitive with NVIDIA’s H100 at half the price. Sounds great, right? Except the Gaudi chips are manufactured by TSMC, not Intel Foundry. Intel can’t even use its own fabs for its products designed to compete in the hottest market in tech.
This is the equivalent of the Alchemists’ Guild accidentally inventing something useful and then having to get the Artificers to actually build it because their own workshop is currently on fire.
Can they turn it around, or is this curtains?¶
Intel’s 18A process node is supposedly competitive with TSMC’s best. Strategic partnerships with Microsoft and AWS, plus a €1.85 billion investment from SoftBank, provide validation. But validation doesn’t equal volume production, and volume production doesn’t equal profit.
Intel expects the foundry business to hit breakeven operating margin “about midway” between now and the end of the decade. That puts breakeven around 2027, meaning more years of losses in the meantime. How many billions can a company burn before investors lose patience entirely?
Intel is attempting one of the most ambitious turnarounds in tech history whilst simultaneously losing ground in its core businesses. The “five nodes in four years” roadmap is audacious. Whether it’s achievable whilst bleeding cash and customers is another question entirely. Betting on Intel requires believing they can execute flawlessly whilst competitors continue advancing. History suggests this is optimistic.
ARM, or the landlord who owns everything but makes modest rent¶
Market position: Licensing to 1,000+ partners globally
The pitch: We’re the foundation of mobile, and we’re coming for everything else
The reality: Steady royalties, but at the mercy of others’ success
The licensing model, which is actually quite clever¶
ARM doesn’t make chips. They design architectures and license them to others, collecting upfront fees (€920,000-€9.2 million typically) and per-chip royalties (1-2% of the selling price). Royalties make up roughly 50% of ARM’s total revenues, licensing fees just over 33%, with the remainder split between software tools and technical support.
For fiscal year 2024, ARM’s revenues were €3 billion, up from €2.5 billion the prior year. That’s solid growth, but we’re talking billions, not tens of billions. ARM is a successful business, not a gold mine.
It’s rather like owning the patent on hinges. Everyone needs hinges, everyone pays you for hinges, but you’re never going to be as rich as the people building entire buildings.
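To make the hinge economics concrete, here is a minimal sketch of the per-chip royalty arithmetic; the chip price, royalty rate, and volume below are made-up round numbers within the ranges quoted above, not ARM’s actual terms with any licensee.

```python
# Illustrative royalty arithmetic for the licensing model described above.
# The chip price, royalty rate, and unit volume are hypothetical round numbers.

def royalty_revenue(chip_price_eur: float, royalty_rate: float, units: int) -> float:
    """Total royalty income from one licensee's product line."""
    return chip_price_eur * royalty_rate * units

price = 40.0            # EUR per chip (hypothetical mid-range mobile SoC)
rate = 0.015            # 1.5%, within the 1-2% range quoted above
units = 1_000_000_000   # a billion units shipped

print(f"~€{royalty_revenue(price, rate, units) / 1e6:.0f} million in royalties")
# ~€600 million: healthy and recurring, but a sliver of what the licensee
# earns selling the finished devices.
```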
The Apple effect, or when your tenant buys the nicest house on the street¶
Apple’s M1 chip launch in November 2020 represented a watershed moment for ARM architecture in high-performance computing. Apple’s Mac division brought in a record €34.6 billion in calendar year 2021, up from stagnant levels around €23 billion in 2019, with CEO Tim Cook saying the growth was “very much because of M1.”
The M1 married the performance of traditional power-hungry processors with the efficiency of an ARM design, bringing lower power consumption to laptops without sacrificing performance, a combination that forced competitors to respond. Intel and AMD had to rethink their entire roadmaps because Apple proved ARM could deliver serious performance.
But here’s the thing. Apple’s success is Apple’s success. ARM gets higher royalties from Armv9 implementations (royalty rates for Armv9 products are typically at least double those for equivalent Armv8 products), but the bulk of that revenue growth accrues to Apple, not ARM.
The cloud ambitions and the uphill battle¶
ARM has captured only about 10% of its royalty total addressable market in cloud and networking, the segment that covers the data centre. That’s the opportunity, and the challenge. Data centre chips command higher prices (meaning higher royalties), but Intel and AMD aren’t going to surrender this territory without a fight.
The customer concentration risk, or all your eggs in a few very large baskets¶
In fiscal year 2024, ARM derived approximately 54% of total net revenue from its top five customers. Loss of a major customer would hurt, badly. And with customers increasingly designing their own chips (Apple, Google’s TPUs, Amazon’s Graviton), ARM’s role becomes simultaneously more important and more vulnerable.
It’s rather like being the architect everyone hires. You’re indispensable right up until they hire their own in-house architect, and then you’re still getting paid for the old buildings but nothing for the new ones.
The valuation question, or why the share price makes no sense¶
ARM’s market capitalisation has swelled on AI enthusiasm, but between Q4 fiscal 2022 and Q3 fiscal 2023, ARM brought in €5.3 billion in revenues yet delivered only €535 million to the bottom line, a mere 10.1% profit margin, and two of those quarters were loss-making. This is not the financial profile of a company worth €73 billion (as valued in early 2024) or whatever fever-dream number it’s trading at this week.
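A rough sketch of what that gap implies, using only the figures above; both inputs are point-in-time estimates, and the multiple moves with the share price.

```python
# Rough valuation arithmetic using only the figures quoted above.

trailing_profit_eur = 535e6   # bottom line over the four quarters cited
market_cap_eur = 73e9         # early-2024 valuation cited above

multiple = market_cap_eur / trailing_profit_eur
print(f"Implied trailing earnings multiple: ~{multiple:.0f}x")
# ~136x: a multiple that prices in royalty growth far beyond anything
# visible in ARM's recent income statements.
```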
ARM is the indispensable architecture provider for mobile and increasingly for everything else. Their business model is elegant: design once, license many times, collect recurring royalties. But they’re a picks-and-shovels play in a gold rush, not the miners striking it rich. Steady profits, yes. Explosive growth? Only if their licensees succeed explosively, and even then ARM gets a sliver of the pie.
The common thread, which is uncertainty¶
What unites all four players is this: nobody truly knows where this market is heading. NVIDIA’s dominance looks unassailable until it doesn’t. AMD’s technical prowess means little without software support. Intel’s turnaround requires simultaneous success on multiple fronts, any one of which could fail. ARM’s ubiquity is both strength and constraint.
The chip bubble has inflated fortunes and expectations alike. The question isn’t whether these companies are valuable; they manifestly are. The question is whether current valuations reflect reality or hope. And in technology, hope is an expensive commodity.
As The Patrician might observe whilst watching the guild masters squabble over the newest innovation, “The thing about bubbles is not that they burst. It’s that they always do, but nobody ever thinks theirs will.”
Next: The AI chip gold rush, examining whether demand is real or speculative