Google challenges Nvidia in data centres with its own AI processors
Google has begun deploying its in-house artificial intelligence (AI) chips, known as tensor processing units (TPUs), in data centres run by smaller cloud providers that have largely depended on Nvidia's market-leading graphics processors. Google has approached multiple firms, including CoreWeave and Crusoe, about hosting TPUs, and has secured a deal with London-based Fluidstack to install the chips in a new data centre in New York. Previously, TPUs were used exclusively for Google's internal services, including its Gemini AI models, or made available via Google Cloud to a select group of companies such as Apple and the image generator Midjourney.
By permitting third-party providers to host TPUs, Google appears to be broadening access to the chips and decreasing its dependence on Nvidia, whose products have long set the industry standard. Originally designed for gaming, Nvidia's graphics processing units (GPUs) have become the preferred choice for building AI systems thanks to their flexibility and robust software support. Cloud providers such as CoreWeave and Crusoe have ridden this trend, buying Nvidia chips in large quantities and leasing them to AI startups and major players like OpenAI and Microsoft. Nvidia has invested in several of these companies, further cementing their dependence on its technology. Google's TPUs work differently: they are purpose-built for AI, which makes them faster and more efficient at machine-learning workloads, but they lack the versatility of Nvidia's chips and until now have remained largely confined to Google's own systems. By supplying TPUs to the same cloud companies, Google is stepping into Nvidia's territory.
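The software point matters in practice. Developers typically reach TPUs through Google's JAX library, which compiles the same Python code, via the XLA compiler, to whichever accelerator is present. The sketch below is a minimal, hypothetical illustration of that portability; the function, shapes, and values are invented for this example and are not drawn from the article.

```python
# Minimal JAX sketch (illustrative only): the same code compiles to
# TPU, GPU, or CPU, depending on which backend JAX detects at runtime.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this function for the available accelerator
def dense_layer(x, w, b):
    # One dense layer: matrix multiply, add bias, apply ReLU.
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 128))  # a batch of 32 example inputs
w = jax.random.normal(key, (128, 64))  # hypothetical weight matrix
b = jnp.zeros(64)                      # bias vector

print(jax.devices())               # e.g. [TpuDevice(id=0, ...)] on a TPU host
print(dense_layer(x, w, b).shape)  # (32, 64)
```

The catch is that most existing AI code is written against Nvidia's CUDA tools rather than JAX and XLA, so portability in principle does not erase the cost of switching in practice.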
However, the challenge for Google is not merely selling a chip; it must also persuade developers to abandon their established practices. Most are already fluent in Nvidia's tools, so Google has to offer genuine incentives to encourage adoption. One of those incentives is cash: Google has pledged up to $3.2 billion as a financial backstop for Fluidstack's New York data-centre lease, a guarantee Fluidstack is using to secure funding for the facility's construction. The report states that although cloud firms and AI developers are increasingly keen to diversify beyond a single supplier, persuading developers to adopt TPUs will prove challenging. Google is the sole maker of TPUs, but it is not the only company developing specialized AI hardware.
Amazon has developed its Inferentia and Trainium chips; Microsoft has created its own Maia and Cobalt processors; Meta is advancing its MTIA chips; and Apple builds a Neural Engine into its devices. These efforts illustrate a wider race to lessen reliance on Nvidia, whose GPUs remain the foundation of current AI systems and, for now, the preferred option. Nvidia's chief executive, Jensen Huang, has dismissed rivals, stating, "Developers stick with Nvidia because of its versatility and software support."