OpenAI is reportedly preparing to use Google Cloud’s services to meet its growing computing demands, a relationship the two companies have apparently been exploring for several months.
The Mountain View IT giant and the San Francisco-based artificial intelligence (AI) company have reportedly been discussing the collaboration for several months. The arrangement would let OpenAI shift part of the compute load for Sora and ChatGPT to Google. Although the financial specifics of the transaction are unknown at this time, it is safe to assume the Gemini maker is receiving a fair amount of compensation.
A Reuters report, citing three unnamed sources, says the ChatGPT maker has reached an agreement to use Google Cloud’s servers to meet its processing demands. According to the report, OpenAI sought the alliance because its need for cloud capacity has grown sharply in recent months.
With this move, the San Francisco-based AI startup is likely to shift some of the server load for ChatGPT and Sora to Alphabet-owned Google Cloud.
The two companies reportedly negotiated the partnership for several months before finalizing it in May, but few details about the transaction are known, and its financial terms were not disclosed.
The alliance is noteworthy given that OpenAI and Google are currently vying for the same slice of the AI market. Both companies have been actively releasing large language models (LLMs), broadening their chatbots’ use cases, and finding new ways to reach consumers (OpenAI has been pursuing OEM partnerships, while Google is incorporating Gemini across its products).
Notably, OpenAI unveiled its o3-pro model less than a week after Google released its updated Gemini 2.5 Pro model. Similarly, the ChatGPT maker announced an 80 percent price reduction on its o3 model after Google touted lower application programming interface (API) prices for its latest AI models compared with OpenAI’s.
Despite the competition, OpenAI cannot overlook its urgent need for more compute capacity. CEO Sam Altman wrote that the company’s GPUs were “melting” following the launch of the GPT-4o-powered image generation feature. Separately, ChatGPT suffered a major outage on Tuesday, with its servers down for several hours and thousands of users affected.