OpenAI released new data Monday showing enterprise usage of its AI tools has surged dramatically over the past year, with ChatGPT message volume growing 8x since November 2024 and workers reporting they’re saving up to an hour daily. The findings arrive a week after CEO Sam Altman sent an internal “code red” memo about the competitive threat posed by Google.
The timing underscores OpenAI’s push to reframe its position as the enterprise AI leader, even as it faces mounting pressures. While close to 36% of U.S. businesses are ChatGPT Enterprise customers compared to 14.3% for Anthropic, per the Ramp AI Index, the majority of OpenAI’s revenue still comes from consumer subscriptions — a base that’s being threatened by Google’s Gemini. OpenAI also must compete for enterprise customers against rival AI firm Anthropic — whose revenue comes mainly from B2B sales — and, increasingly, against open-weight model providers.
The AI giant has made $1.4 trillion in infrastructure commitments over the next few years, making enterprise growth essential to its business model.
“If you think about it from an economic growth perspective, consumers really matter,” Ronnie Chatterji, OpenAI’s chief economist, said during a briefing. “But when you look at historically transformative technologies like the steam engine, it’s when firms adopt and scale these technologies that you really see the biggest economic benefits.”
OpenAI’s new findings suggest that adoption among larger enterprises is not only growing but becoming more integrated into workflows. Employees aren’t only sending more messages — organizations using OpenAI’s API (its developer interface) are consuming 320 times more “reasoning tokens” than they were a year ago, suggesting companies are using AI for more complex problem-solving. That, or they are experimenting heavily with the new tech and burning through tokens, without necessarily getting long-term value.
That increase in reasoning tokens, which correlates with increased energy usage, could be expensive for companies and therefore not sustainable in the long term. TechCrunch has asked OpenAI about enterprise budget allocation for AI and the sustainability of this growth rate.

Beyond raw usage metrics, OpenAI is also seeing changes in how companies deploy its tools. Use of custom GPTs — which companies use to codify institutional knowledge into assistants or automate workflows — jumped 19x this year, now accounting for 20% of enterprise messages, the report found. OpenAI pointed to digital bank customer BBVA, which it says regularly uses over 4,000 custom GPTs.
“It shows you how much people are really able to take this powerful technology and start to customize it to the things that are useful to them,” said Brad Lightcap, OpenAI’s chief operating officer, during the briefing.
These integrations have led to meaningful time savings, according to OpenAI. Participants reported saving 40 to 60 minutes per day with OpenAI’s enterprise products — though that may not include time spent learning the systems, prompting, or correcting AI output.
The report found that enterprise workers are also increasingly leveraging AI tools to expand their own capabilities. Three-quarters of those surveyed say AI enables them to do things they couldn’t do before, including technical tasks. OpenAI reported a 36% increase in coding-related messages outside of engineering, IT, and research teams.
While OpenAI drove home the idea that its technology is democratizing access to skills, it’s important to note that more vibe coding could lead to more security vulnerabilities and other flaws. When asked about this, Lightcap pointed to OpenAI’s recent release of its agentic security researcher Aardvark, which is in private beta, as a potential way to detect bugs, vulnerabilities, and exploits.

OpenAI’s report also found that even the most active ChatGPT Enterprise users aren’t using the most advanced tools available to them, like data analysis, reasoning, or search. During the briefing, Lightcap mused that this was because fully adopting AI systems requires a mindset shift and deeper integration with enterprise data and processes. Adoption of advanced features will take time, he said, as companies retool workflows to better understand what’s possible.
Lightcap and Chatterji also stressed a report finding that showed a “growing divide in AI adoption,” with some “frontier” workers using more tools more often to save more time than the “laggards.”
“There are firms that still very much see these systems as a piece of software, something I can buy and give to my teams and that’s kind of the end of it,” Lightcap said. “And then there are companies that are really starting to embrace it, almost more like an operating system. It’s basically a re-platforming of a lot of the company’s operations.”
OpenAI’s leadership — which certainly feels the pressure of the firm’s $1.4 trillion in infrastructure commitments — framed this as an opportunity for laggards to catch up. For workers training AI systems to replicate their work, “catching up” might feel more like a countdown.