Proprietary technology and struggling rivals spell victory for the tech giant
By MILES BARRY — mabarry@ucdavis.edu
For the past few years, Google has been written off as the new International Business Machines (IBM) — an organization that invented cutting-edge technology but couldn’t commercialize it as well as its competitors. Google researchers are arguably responsible for much of modern artificial intelligence (AI) progress; after all, they invented transformers, the “T” in ChatGPT.
Transformers are a breakthrough neural network architecture that most modern AI models are built on. Google also develops TensorFlow, one of the most popular machine learning libraries, and builds their own chips, called Tensor Processing Units (TPUs), which are designed specifically for AI training and inference. Despite these innovations, their flagship AI model, Gemini, has lagged behind those built by smaller rivals like OpenAI and Anthropic for the past few years.
On Nov. 18, Google released their Gemini 3 model to rave reviews. It has risen to the top of the leaderboard across several benchmarks designed to rate large language model (LLM) performance. On LMArena, a platform where users blindly interact with several chatbots and vote for the best one, Gemini 3 has topped the charts across most categories.
So how did Google’s Gemini become so successful, especially after their earlier models failed to make such an impact? One reason is the aforementioned Tensor Processing Units. Gemini 3 was trained entirely on Google’s proprietary chips. By doing so, Google has been able to sidestep the Nvidia tax — the premium that every other AI company must pay for access to scarce, expensive, industry-standard GPUs made by Nvidia, a large tech company whose chips power AI training, supercomputing and gaming. Some experts estimate that Nvidia charges an 823% markup on each chip it sells, meaning that Google has significantly reduced their AI spending via TPU development. TPUs are also extremely energy efficient, using significantly less power per computation than traditional graphics processing units (GPUs), which lowers Google’s costs even further.
Aside from their proprietary hardware, Google’s ability to distribute their AI products is unmatched; they recently published a list of 1,000 companies that use their AI products for vastly different tasks, from creating ad campaigns to drafting legal contracts. They own some of the world’s most-used websites and apps, including YouTube, Gmail and Google Maps, along with Android, the largest smartphone operating system. This expansive reach allows Google to distribute Gemini to millions of users without striking external deals. Google’s ownership of these large sites keeps their revenue high, meaning they can charge less for AI usage than their smaller competitors, OpenAI and Anthropic, who are already running their models at a loss.
Google’s domestic rivals are also facing struggles. OpenAI, the company behind ChatGPT, has committed to a series of blockbuster deals, promising to spend $1.4 trillion on data centers within roughly the next decade, yet suffered a $12 billion loss last financial quarter. According to (unverified) data from industry analyst Ed Zitron, OpenAI’s quarterly inference costs — the computational cost of responding to user queries — consistently exceed quarterly revenues. This raises questions about their path to profitability, despite ChatGPT’s early popularity among consumers.
Meta’s AI spending is also causing concern among their investors. On their Oct. 29 earnings call, Chief Executive Officer (CEO) Mark Zuckerberg laid out plans to increase their capital expenditures from $66 billion to $72 billion, citing spending on data centers and chips. In the month following the announcement, their stock dropped about 20%. Despite offering AI researchers higher salaries than some National Football League (NFL) quarterbacks earn, Meta has seen a string of swift departures — including former OpenAI researchers threatening to quit within weeks of being hired.
Google’s last large domestic competitor, xAI, is pursuing funds from investors that would value them at $230 billion. But their growth has been marred by a series of incidents. In May 2025, their chatbot Grok began ranting about “white genocide” in South Africa while answering completely unrelated questions. In July 2025, Grok — which has been given the ability to interact with users on X (formerly Twitter) — went on a racist, antisemitic tirade, including sexual harassment directed at X’s then-CEO Linda Yaccarino, who resigned the following day. Both incidents reveal that xAI has failed to implement basic safety measures for public-facing AI. Google, by contrast, has spent years building content moderation systems for search and YouTube — an experience that positions them to deploy their models responsibly at scale.
While the AI landscape is volatile — a new breakthrough from any company could reshuffle the field entirely — I believe that Google will become the leading AI company in the United States due to its vertical integration, diverse revenue streams and the struggles of its competitors. The race isn’t over, but Google has pulled ahead.
Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.

