My name is Aris Xenofontos and I am an investor at Seaya Ventures. This is the market update version of the Artificial Investor that covers the top AI developments of the previous month.
6.5 billion dollars for a “GPT wrapper” and a pre-product AI hardware company, Google admitting the shift in the Search user behaviour, the “AI job augmentation vs. job replacement” debate, further innovation in the AI software engineering space, two new revolutionary AI models in science and movie-making, and the 180-degree turn in US AI politics.
All this and much more in our AI market update for May 2025.
🚀 Jumping on the bandwagon
Last month’s largest rounds highlight continued investor appetite for verticalised AI (1.1 billion dollars for AI coding solution Cursor and LegalTech application Harvey), disruption of Search (500 million dollars for Perplexity), foundational models (300 million dollars for AI21 Labs) and AI infrastructure (450 million dollars for high-speed database ClickHouse and datacentre provider TensorWave).
In terms of exits, the biggest news came from OpenAI and its acquisitions of Windsurf (AI coding solution) for 3 billion dollars and io (AI hardware developer) for 6.5 billion dollars. We wrote a deep dive on what these acquisitions mean for GPT wrappers and OpenAI’s future strategy in “The Artificial Investor #52 - The revenge of the GPT wrappers”. Other M&A news includes CoreWeave’s acquisition of Weights & Biases (an AI developer platform to build AI agents, applications and models), Weave Communications’s acquisition of TrueLark (an AI-powered receptionist and front-desk automation platform) and Earnix’s acquisition of Zelros (a GenAI-driven insurance recommendation engine).
📈 On pink paper
🔍 Search wars and shifts in user behaviour
In December 2022, Perplexity launched an innovative AI-based search application that looks up the Internet and returns a summary of results with links to the underlying sources. Two and a half years later, in May 2025, Google announced during its I/O conference the launch of AI Mode for Search (for US users only), which is essentially the same product as Perplexity's. A note on The Innovator’s Dilemma: it took two and a half years for the incumbent to catch up with a startup, which in the meantime reached 22 million active users.
This is effectively Google accepting that there is a big shift in user behaviour: users clearly see the benefits of AI summaries for a large part of their Search use cases. A big challenge here is obviously monetisation, given that AI summaries lead to fewer clicks on links. Google has reportedly been testing ads in AI Mode, but for now the monetisation model seems weaker, which could affect Google's revenues in the future.
At the same time, it was reported that OpenAI's ChatGPT referral traffic to publishers has increased: about 300 million visits were generated across 250 publisher websites in April 2025 from ChatGPT redirects, roughly double the January 2025 figure, only three months earlier. In any case, despite the increase, publishers report that referral traffic from ChatGPT is not that impactful yet.
Despite Google launching AI Mode in Search, the stock was down last month. The reason is that Apple's Senior Vice President of Services, Eddy Cue, revealed in an interview that Safari searches have decreased for the first time in 22 years as more people use AI instead of Google (the default search engine for Safari). So, in one specific channel, the mobile Safari app, Google Search now has a lower market share versus the AI applications people use to search. On the back of this news, Google lost 120 billion dollars of market value, and the US hyperscaler is now trading at 15 times forward profits, down from the 21 times it was trading at a couple of months ago.
Google Search still has about 90 percent share of the broader Search market. The question is: “Is Google's AI Mode launch timely and effective enough, and will the monetisation model be strong enough, for Google to successfully navigate this market disruption in the long run?” It remains to be seen.
💳 AI monetisation is maturing
Speaking of AI monetisation, Google announced a revamp of its consumer pricing plans, grouping everything into two packages: Google AI Pro and Google AI Ultra.
Google AI Pro gives AI away almost for free. It comes at 25 dollars per month, together with Gmail, G Suite and Google Drive storage. In this plan, the AI products effectively cost users a couple of dollars, once you account for the price of the non-AI services and some price inflation. Notably, users have no way to deactivate AI and pay less. The package comes with access to older AI models and some usage limits. The Ultra plan comes with more storage and costs 250 dollars per month, which is effectively 200 dollars on top of the cost of the non-AI services, in return for (practically) unlimited access to cutting-edge AI models.
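The implied cost of the AI portion is simple bundle arithmetic: bundle price minus the standalone value of the non-AI services. A minimal sketch, where the non-AI bundle values are our illustrative assumptions rather than Google's published figures:

```python
def implied_ai_price(bundle_price: float, non_ai_value: float) -> float:
    """What the user effectively pays for AI: the bundle price minus
    the standalone value of the non-AI services (storage, Gmail, etc.)."""
    return bundle_price - non_ai_value

# Illustrative non-AI bundle values (our assumptions, not official figures)
pro_ai_cost = implied_ai_price(bundle_price=25.0, non_ai_value=23.0)     # a couple of dollars for AI
ultra_ai_cost = implied_ai_price(bundle_price=250.0, non_ai_value=50.0)  # roughly 200 dollars for AI
```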
This is a reflection of an emerging trend of the AI pricing structure in the broader market:
“Pay in data”. Users get AI for free to incentivise as much usage as possible to capture user data and use it to build better models and better products.
“Profitable model”. A model with a very high price to cover the still-high AI infrastructure costs. The idea is for this revenue model to be sustainable.
We have also seen this pricing structure with OpenAI's ChatGPT, as well as in the AI coding market. Devin launched a 200-dollar-per-month subscription under the profitable model. On the other end of the market, OpenAI launched Codex and Google launched Jules for free under a “Pay in data” model last month, while there are also solutions that are not free but are clearly heavily subsidised, such as Windsurf and Cursor, priced at 15 and 20 dollars per month, respectively.
🎱 AI ecosystem games
Last month, we also saw a couple of important partnerships evolving in the AI space. xAI and Telegram announced a big partnership, demonstrating that the convergence of Social and AI is a clear trend. It's a very smart move from xAI, given that Telegram is a social platform that doesn't have any AI capabilities in-house, opening up the opportunity for xAI to access its one billion users.
According to the announced terms, xAI will pay Telegram 300 million dollars in cash and equity; in return, Telegram will bring Grok to its users and will share the revenue from any new xAI subscriptions generated through the platform.
The other partnership that took up airtime in May’s headlines is the one between Microsoft and OpenAI. The two companies are renegotiating their partnership, bringing to the table equity stakes, access to OpenAI's IP and revenue shares for different products.
This is key for OpenAI's future, as it would pave the way for its transition to a for-profit organisation. That matters because 1) SoftBank’s recent investment came with a clause requiring the funds to be returned if OpenAI remained a non-profit organisation, and 2) the company could not IPO without that change.
These negotiations are also not that easy, given that there's been increasing competition between the two players. OpenAI launched Codex (see agent section below) and it acquired Windsurf last month; both compete directly with Microsoft's GitHub Copilot. On the other hand, Microsoft has launched its own LLMs, such as the Phi model family, which compete with OpenAI's GPT models.
⚔️ A double-edged sword
📢 US/China tension continues
We had indeed predicted that there would be a trade deal between the U.S. and China, probably this year, but found any celebration of the 90-day import tariff pause between the two countries premature. May’s news has proven that the tension continues, and two new dimensions were added.
We noted that the U.S. controls the AI software layer in our analysis of the AI value chain in The Artificial Investor #50: What AI tells us about the future of the US/China trade war. As such, it was strange to see the software layer initially excluded from import tariffs and export restrictions. Now it looks like the U.S. government has taken note: it has banned the sale and export of semiconductor design software and services to Chinese chip manufacturers. This market is known as EDA (electronic design automation) software, which, despite accounting for a small part of the semiconductor industry, plays a crucial role in chip development, and American companies like Synopsys, Cadence and Siemens EDA dominate China's market. We think this will be a big blow for Chinese chip manufacturers.
Furthermore, the U.S. government also opened up the dimension of fund flows. It was reported that U.S. regulators are investigating Benchmark's investment in Manus AI. Manus AI is a Chinese AI agent company and Benchmark is a leading U.S. venture capital fund. This is a very good example of how difficult it is to make technology regulation effective. First, Manus AI doesn't develop its own models (it’s what people call a “GPT wrapper” company), so what kind of innovation is the U.S. VC funding? Second, Manus AI’s founders are Chinese, but the company is registered in Singapore. Does this make it a Chinese company? In a digital era where anyone from anywhere can sell software online, what makes a business American or Chinese?
🐕🦺 AI augmentation vs. automation
Does AI augment humans? Is it a tool that helps us become more efficient and 10x better? Or does AI completely automate our jobs, meaning it replaces them? This is a debate we've spent a lot of time on this year, and it will surely continue to preoccupy us for the foreseeable future.
Now, our friend Devansh wrote a great piece demonstrating how the statements “AI is only an augmentation tool” and that “AI will not steal your job, but someone using AI will” are too generic and false:
Jobs change a lot throughout the years. Think of how different a journalist or a social media manager job is now versus 5 or 10 years ago.
Business models of companies change completely. Think about how important a head of logistics was for a company like Blockbuster and how this job disappeared in a company like Netflix.
Entire market segments disappear. What happened to the well-paid specialist job of typists when computers and word processors came out?
Nevertheless, there was recent news in favour of the AI augmentation case. Amazon reported that since increasing the number of robots in its warehouses, it has seen new job categories created around robot maintenance, and it has launched a big reskilling and upskilling programme for its workforce. Amazon has not disclosed, however, the net impact between jobs lost and jobs created.
Also, there was an interesting piece in the New York Times about how demand for radiologists has increased, as AI appears to be enhancing their job rather than replacing it. Radiology is a very interesting use case to monitor: it was one of the first, if not the first, use cases where AI was proven more capable than humans, thanks to the strides made in computer vision and the complexity of the task at hand.
On the other hand, we also had news in favour of the AI automation case: a report that graduate hiring in Tech is down 50% versus pre-pandemic levels. This is something very serious to consider, and something we are looking at in our own company as well. If AI does the job of an intern or a first-year analyst, then what would graduates do? They cannot simply do the job of an intern or a first-year analyst, as they would be competing with AI, which is faster and cheaper. It looks like graduates need to operate at the associate level from the moment they join a company. But at the same time, the question is: “Is education catching up fast enough to prepare graduates to be at the right level when they enter a company?”
It's true that AI is already being used in school curricula across the U.S., and, notably, the schools with the highest AI usage are the ones ranking in the top 2% of school rankings. The question is: will this be enough?
📝 The rules are still being written
A significant advancement in the area of copyright and AI came at the beginning of the month, when the US Copyright Office released part 3 of its report providing guidance on AI and copyrighted work.
The report states that “fair use” by AI developers (the notion that AI actually learns rather than copies, and then uses that knowledge to create something new) is not a universal assumption and needs to be evaluated on a case-by-case basis. The evaluation takes into account four dimensions: What is the purpose of the creation? What is the nature of the creation? What is its size and volume? And what is the potential impact on the market? The focus here is on potential market harm: can unlicensed AI damage creators' ability to monetise their work?
In the meantime, the market seems to have been functioning: the US Copyright Office reported that it has registered more than 1,000 AI-enhanced creative works. The topic that remains open is: what about work created entirely by AI, i.e. AI-developed end-to-end rather than simply AI-enhanced?
The legal and economic standards around AI, creator royalties and copyright are being established as we speak. A study found O'Reilly Media content in OpenAI's GPT-4o outputs, raising concerns that AI training data is opaque and that there is money left on the table for publishers. On the other hand, the New York Times announced an agreement with Amazon to monetise its content for the training of Amazon's AI models, indicating how copyright owners have been shifting from lawsuits to monetisation of their content.
In other AI risk-related news:
Artificial intelligence's energy consumption is surging, with projections indicating that U.S. data centers could consume as much electricity by 2027 as the entire state of California, largely driven by AI workloads.
➰ Uncertainty in AI governance remains
Finally for this section, here is what really happens when governments and regulation start interfering with technology: uncertainty.
We saw uncertainty on AI governance peak last month. On one hand, the Trump administration completely reversed the Biden administration's AI executive order, which structured AI model export bans in tiers: Tier 1 countries faced no restrictions, Tier 2 countries faced moderate restrictions, and Tier 3 countries, with China at the centre, faced very tight restrictions. The new U.S. government prefers bilateral deals, so it scrapped the executive order a week before it would have come into effect.
On the other hand, over the last 3-4 years various U.S. states have introduced different AI regulations. There is now a bill proposed in Congress that would ban any state AI regulation for 10 years, another reversal of the trend. In this case, despite the uncertainty, we think it does make sense to push back against fragmented regulation in the United States, as it leads to chaos, state arbitrage and a framework that effectively works against innovation. At the same time, it's a very interesting topic to watch, because it touches on the fundamental principles of a federation of states.
🤖 Running on autopilot
Two new big innovations in the AI coding agent space
OpenAI's Codex and Factory's Droids represent two cutting-edge AI agents redefining how software gets built - each with a unique approach. Codex, launched as a research preview by OpenAI, is a Cloud-based software engineering application that enables multiple agents to autonomously and simultaneously complete coding tasks, such as feature development and bug fixing, directly within a user’s repository. It’s powered by the Codex-1 model and operates in isolated Cloud sandboxes, offering traceable outputs and real-time progress tracking for every task. Meanwhile, Factory’s Droids go beyond just code; they’re purpose-built agents designed to automate the entire software development lifecycle: from writing production-ready features to managing on-call incidents, generating specs and reviewing pull requests. Droids also support organisational memory and context retention. Both tools point to a near future where engineers increasingly collaborate with AI, delegating not just code but entire workflows.
AI agents penetrate the Enterprise further
Enterprise adoption is accelerating. AI agents have moved from prototypes to production, with Microsoft reporting over 10,000 organisations using its agent development platform, Foundry, in just 4 months. As a lower-cost alternative, companies can use Mistral’s recently-launched Agents API. The developer community knows that the only way to reach mass adoption is agent interoperability, hence Microsoft announced support for the Agent2Agent (A2A) protocol in May. On the smaller business side, Claude has expanded its agentic capabilities to include autonomous research and integrations with tools like Jira and Zapier, while finance giants like Visa and PayPal are equipping agents with payment capabilities.
In other autonomous AI news:
US federal investigators dive into Tesla's Austin robotaxi plans amid safety concerns
Aurora Innovation has clocked over 1,000 miles with autonomous 18-wheelers in Austin
A mine in Inner Mongolia now operates the largest fleet of electric autonomous haul trucks (100+), powered by Huawei’s 5G network.
🧩 Laying the groundwork
🔢 Models
At its I/O conference, Google stole the show with two big model launches.
On the creative side, Google launched Veo 3, the third generation of its video-generation model, and the preview is outstanding. The model produces 4K video with cinematic realism, including physically accurate simulations (e.g. motion, environment interactions) and handling of complex visual phenomena, like light scattering through mud, fluid dynamics or fine object textures. Audio generation is also built-in, with synchronised sound effects, ambient sounds and character dialogues.
One of the biggest challenges of earlier video-generation models, consistency, is also managed very well. Users can input image references to maintain character appearance and animation across scenes. Some other cool features include: 1) first/last frame transitions (give it two images to generate a video with what happened in-between), 2) video manipulation by inserting or erasing elements while maintaining realism (shadows, reflections, scale), and 3) merging of videos and images to incorporate specific movements (a video with the user moving their head and an image of a panda, resulting in the panda moving in the same way).
Google evaluated Veo 3 on MovieGenBench benchmark datasets released by Meta, consisting of 1,003 prompts for video and 527 prompts for video+audio, vs. Meta’s MovieGen, Kling 2.0, Minimax and Sora Turbo. Veo 3 performed best on overall preference, particularly for its capability to follow prompts accurately. All videos are watermarked with SynthID to indicate AI generation and the model is available through Google’s professional video editor software, Flow.
On the reasoning side, Google announced AlphaEvolve, a Gemini-powered coding agent for designing advanced algorithms. The model is a response to GenAI critics who have argued that AI simply recycles concepts from its training data. AlphaEvolve was presented with 50 open mathematical problems: in 75% of cases it rediscovered the state-of-the-art human solutions, and in 25% of cases it improved on the existing best results. Most notably, it discovered a novel algorithm that multiplies 4×4 complex matrices using only 48 scalar multiplications, beating the best approach known since Strassen's work in 1969.
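To put that result in context: Strassen's 1969 algorithm multiplies 2×2 matrices with 7 scalar multiplications instead of the naive 8, and applying it recursively to 4×4 matrices gives 7 × 7 = 49 multiplications, the record AlphaEvolve beat with 48. A minimal sketch of the 2×2 building block (variable names are ours):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven Strassen products
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the result matrix using only additions
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Applied recursively, with each "scalar" standing for a 2×2 block, this yields the 49-multiplication baseline for 4×4 matrices that held until AlphaEvolve's discovery.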
AlphaEvolve combines Gemini LLMs (Flash and Pro) for creative code generation, automated evaluators for correctness and performance, and an evolutionary framework that iteratively improves candidate solutions. Google also published some impressive contributions the model has made to its internal AI operations:
It recovered 0.7% of global compute capacity, translating to massive energy and cost savings.
It proposed a hardware optimisation (written in Verilog) that was integrated into Google's next-generation AI chips (TPUs).
It optimised matrix multiplication kernels critical to LLM training, speeding up a key Gemini kernel by 23% and reducing overall training time by 1%.
In the near future, the model will be applied in material science, drug discovery, climate modeling and operations research.
Many AI labs launched new software engineering models last month to capitalise on the most popular Gen AI use case. The fourth child of the king of coding models was born: Claude 4 by Anthropic. OpenAI launched Codex-1, the model finetuned for coding and reasoning that sits behind its new agentic product. OpenAI’s recent acquisition, Windsurf, introduced the SWE-1 family of models in an attempt to benefit from the usage knowledge graph it has created since the launch of its AI agent. Mistral’s Devstral is the open-source community’s attempt to catch up with private coding models. Finally, at a step before software programming, Google announced Stitch to bridge the gap between design ideas and development.
Creative generation models remain popular, and, in addition to Google’s Veo 3, we had a number of releases in May. New video models were introduced by Skywork AI (SkyReels-V2 for infinite-length video generation) and Alibaba (Wan2.1), while Black Forest Labs released FLUX.1 Kontext for in-context image generation and editing. On the audio side, real-time voice cloning went open source with Chatterbox, and Stability AI collaborated with ARM to release Stable Audio Open Small, a text-to-audio model for mobile phones.
Other model-related news include:
Intelligent Internet launched medical AI model II-Medical-8B
Amazon announced Nova Premier for complex workflows
Sakana introduced Continuous Thought Machines
Microsoft launched 4th generation of Phi model family
Godela helps advance robot learning with physics-aware simulation engine
DeepSeek released DeepSeek-Prover-V2, a formal mathematical reasoning model
Google launched small version of Gemma 3 for mobile AI
Alibaba launched ZeroSearch, an AI model that “googles” itself
🧱 Infrastructure
↪️ A 180-degree turn
The recent Gulf tour by President Trump has catalysed a wave of AI megadeals, unlocking over 600 billion dollars in U.S.-Middle East partnerships spanning AI chips, Cloud infrastructure and sovereign AI investments. This is a 180-degree turn for American foreign policy, as the Biden administration had classified some Middle Eastern countries as medium-risk importers, which would have resulted last month in AI chip export restrictions.
UAE and Saudi Arabia were in the AI spotlight. At a government level, the U.S. and UAE signed a new AI technology framework, clearing regulatory hurdles on chip exports and formalising commitments to route compute power through U.S.-approved hyperscalers. On the private sector side, Nvidia and AMD secured landmark agreements with Saudi Arabia’s Humain to supply hundreds of thousands of advanced AI chips, deploying 500 megawatts of data center power. The American 500-billion-dollar AI initiative, Stargate, which is spearheaded by OpenAI, Oracle and SoftBank, seems not to be limited to local infrastructure and will be expanding via friendshoring in Abu Dhabi. Is this compatible with the Trump administration’s “America first” dogma?
Amazon also announced a 5-billion-dollar strategic partnership with Humain to build an “AI Zone” in Saudi Arabia, including the development of Arabic large language models and the upskilling of 100,000 citizens in GenAI and Cloud. In case you are wondering who Humain is, it’s a new AI venture launched by the Saudi Crown Prince with the backing of Saudi Arabia’s Public Investment Fund (PIF).
In other hardware-related news:
Apple to release smart glasses in 2026 and scraps plans for camera-enabled watch
Nvidia expands Cloud presence with new GPU marketplace solution
OpenAI's 11.6 billion-dollar funding boost for Texas data center construction
🍽️ Fun things to impress at the dinner table
Don’t do this at home. An impressive video of challenges generated with Google Veo 3.
One global language. Google demonstrated the new real-time speech translation with Google Beam, which indicates we are not far from everyone doing a video conference in their own language.
Inside the Mind of ChatGPT. A chatbot user’s sympathetic prompt turns them into the AI’s therapist.
Autonomous army of humans. The story of Builder.ai, the AI coding app that was once a unicorn and is now filing for bankruptcy.
The society of the future. A study shows that AI agents can autonomously develop social conventions.
See you next time for more AI insights.