ChatGPT: the wild card that almost always fits
ChatGPT remains the go-to tool for most people, and it’s not just because of habit. It’s because it works reasonably well for almost everything.
You can use it for writing, studying, coding, analysing documents, preparing for a meeting, organising your thoughts or generating images. OpenAI has been adding cross-session memory, connectors to external tools, deep reasoning modes and agents that can act on your behalf.
As of March 2026, ChatGPT has over 769 million monthly active users. No competitor can match that ecosystem of integrations, plugins and tools built on top of it.
Its weakness is a by-product of that very success. Because it is good at everything, people sometimes use it for tasks where another tool would clearly be better: for long, nuanced writing, Claude outperforms it; for research grounded in sources, Perplexity outperforms it; for working within Office, Copilot outperforms it.
Who it’s for: anyone who wants a single AI for everyday use and doesn’t want to overcomplicate things. The most natural entry point into the world of AI.
Gemini: the most misunderstood AI on the market
Gemini is probably the most underrated AI outside the technical sphere.
Google isn’t just building a chatbot to compete with ChatGPT. It’s integrating AI across its entire ecosystem: Gmail, Google Docs, Sheets, Slides, Drive, Calendar, Android, Search and Cloud. That integration, when it works well, offers real value that a standalone chatbot cannot replicate.
Furthermore, Gemini has a very specific technical advantage: the ability to process massive amounts of context natively. Being able to feed it two hours of video, a complete code repository or a large set of documents and have it process it all together makes a real difference for certain use cases.
The problem with Gemini is that many users try it expecting ‘another ChatGPT’ and do not immediately perceive its unique advantage, because that advantage is more evident in integrated workflows than in isolated conversations.
Who it’s for: individuals and teams already working intensively within the Google ecosystem. There, its value is clear and hard to match.
Claude: the quiet favourite of writers and analysts
Claude, from Anthropic, has a very solid reputation in professional circles, although it is less well known outside them.
Its great strength is well documented: it writes very well, maintains very long contexts and structures arguments with a clarity that many users find superior. It is therefore no surprise that in the enterprise segment it has become the benchmark for document analysis, contract review and technical writing. Anthropic currently controls between 32% and 40% of the enterprise language model market.
Its main weakness is not technical: it’s the ecosystem. It doesn’t have a productivity suite as ubiquitous as Microsoft’s or Google’s behind it. But Claude Cowork, launched in 2026, is starting to change that.
Who it’s for: writers, analysts, consultants, researchers. Anyone who works extensively with long-form text and needs precision and elegance.
Microsoft 365 Copilot: the best AI if your office runs on Microsoft
Copilot is a fascinating case because many people judge it by comparing it to ChatGPT as if they were the same type of product. They are not.
Copilot doesn’t compete on ‘who answers open-ended questions best’. It competes on ‘who helps you best whilst you’re working in Word, Excel, PowerPoint, Outlook and Teams’. And in that arena, it wins.
In Excel, it generates complex formulas in natural language, detects trends and summarises tables.
In PowerPoint, it turns documents into presentations with a narrative structure. In Outlook, it prioritises and summarises email threads. All without leaving the work environment.
But we must be honest about its limitations: the initial result almost always requires review. And if the company has poorly organised data or a mess of internal permissions, Copilot inherits that mess. It doesn’t fix it.
Who it’s for: teams and individuals already using Microsoft 365. There, the value is undeniable.
Perplexity: when you need to know something right now, backed by sources
Perplexity has a very simple yet powerful proposition: you search for something and, instead of a list of links, you receive a direct answer built on current sources that you can verify.
For anyone who needs to research recent topics, prepare for a meeting or cross-check data, it is extraordinarily convenient. Not because it is smarter than ChatGPT, but because it was built with the mindset that the traceability of information matters.
There is an interesting phenomenon: many people in professional settings already use it as their primary daily search engine, ahead of Google, for queries requiring an immediate, substantiated answer.
Who it’s for: journalists, analysts, researchers, executives. Anyone who needs up-to-date and verifiable information quickly.
NotebookLM: probably the quietest and most powerful change
Google’s NotebookLM does something different from everything else, which is why it’s a bit tricky to pin down.
Its original concept was to work exclusively with the documents you provide: you upload your reports, PDFs, presentations and notes, and from there you can ask it anything, ask it to cross-reference information, identify contradictions, or explain concepts using only your own sources. Without inventing anything external. Without straying from the corpus you have defined.
But that has changed. Google has added the ability to connect it to the internet as well, so that it can now combine your own documents with real-time searches of external sources. This makes it something of a hybrid: part classic NotebookLM, part Perplexity. You can anchor the answer in your internal documentation whilst simultaneously enriching it with up-to-date information from the web. For research that blends your own sources with external context, this combination is very powerful.
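The underlying idea of "answering only from your sources" can be sketched in a few lines: rank the user's own documents by relevance to the question and refuse to answer when nothing in the corpus is relevant. This is a deliberately toy stand-in (word-overlap scoring, invented example documents), not how NotebookLM actually retrieves:

```python
# Toy sketch of grounded answering: score each document in the user's own
# corpus against the question, and stay silent rather than invent an answer
# when nothing relevant is found.
def relevance(question, doc):
    q, d = set(question.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)  # fraction of question words found in the doc

def grounded_answer(question, corpus, threshold=0.3):
    best = max(corpus, key=lambda doc: relevance(question, doc))
    if relevance(question, best) < threshold:
        return None  # refuse: the answer is not in the user's sources
    return best      # a real system would synthesise an answer from this passage

corpus = ["Q3 revenue grew 12% driven by the EMEA region",
          "The audit flagged two contract clauses for review"]
ans = grounded_answer("which region drove revenue growth", corpus)
```

A real implementation would use embedding-based retrieval rather than word overlap, but the refusal step is the essence of what makes the answers verifiable.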
It also has a feature that many are unaware of: it can generate a summary in podcast format, with two voices, which narrates the content of your documents as if it were a radio programme. Strange at first, but incredibly useful once you try it.
Who it’s for: researchers, students, consultants, teams working with a lot of their own documentation. And now also for anyone who needs to cross-reference that documentation with what’s happening on the internet in real time.
Genspark: the fastest-growing one that few people know about
Genspark is last year’s surprise hit. It started as a search engine that, instead of showing you links, generated an organised page with all the information on a topic. But in April 2025 it launched Super Agent, a system that can take action: make phone calls, generate presentations, create videos, book restaurants, plan entire trips.
The most striking figure: it reached an annualised revenue run-rate of $36 million within just 45 days of Super Agent's launch. Technically, it coordinates more than 80 specialised tools and several language models simultaneously, selecting at any given moment which is the most suitable for each part of the task.
It is one of the clearest examples of what the future of AI will look like: not a model that responds, but a system that orchestrates.
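At its core, that kind of orchestration is a routing loop: break the request into sub-tasks, pick the specialised tool best suited to each one, and chain the results. A deliberately simplified sketch, where the tool names and the keyword-based routing are hypothetical (Genspark's real router would use a model, not keywords):

```python
# Hypothetical, minimal orchestration sketch: a registry of specialised
# "tools" and a router that picks one per sub-task.
def search_web(task):      return f"search results for: {task}"
def make_slides(task):     return f"slide deck for: {task}"
def book_restaurant(task): return f"booking confirmed for: {task}"

TOOLS = {"research": search_web,
         "presentation": make_slides,
         "booking": book_restaurant}

def route(task):
    """Pick the most suitable tool for a sub-task.
    Keyword heuristic here; a real agent would classify with a model."""
    if "slide" in task or "deck" in task:
        return TOOLS["presentation"]
    if "book" in task or "reserve" in task:
        return TOOLS["booking"]
    return TOOLS["research"]

def run_plan(subtasks):
    return [route(t)(t) for t in subtasks]

results = run_plan(["find top ramen spots in Tokyo",
                    "book a table for Friday",
                    "make a slide deck of the trip plan"])
```

The point of the pattern is that no single model does everything: the intelligence lives in the routing and sequencing, which is exactly the "system that orchestrates" idea.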
Who it’s for: people researching broad topics, who want to move from question to deliverable quickly, and those who want to explore now what it means to have an agent that takes action.
Monica: an aggregator, not a model
Monica deserves an important clarification that many people overlook: it is not an AI in itself. It is an aggregator. An access layer that brings together models from other providers in a single interface: ChatGPT, Claude, Gemini and others, depending on the plan.
Its aim is not to be the smartest, but to be the most convenient for certain light everyday tasks. It lives mainly in the browser and allows you to translate a page, summarise an article or rewrite a paragraph without switching between applications. The friction of switching tools matters more than it seems when these are tasks you do twenty times a day.
That said, it’s worth bearing in mind that when you use Monica, you’re using other people’s models. What you’re paying for is convenience, not its own capabilities.
Who it’s for: users who prioritise speed and convenience for light, frequent tasks, and who are confident that the power of the bundled models will be enough for their needs.
Mistral AI: Europe’s most serious contender
Mistral AI, founded in Paris in 2023 by researchers from Google DeepMind and Meta, represents something significant beyond its technical capabilities: it is the genuine European alternative.
Its models are open-source: they can be run on your own infrastructure without data leaving the corporate environment. For regulated sectors such as telecommunications, banking, healthcare or public administration, the question is not always ‘which AI performs best’, but ‘which can I deploy with the guarantee that my data stays where it should be’. Here, Mistral clearly wins out.
Furthermore, its Mixture of Experts architecture is highly efficient: it activates only a fraction of the model’s parameters for each query, reducing computational cost without sacrificing performance.
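The mechanism can be illustrated in a few lines: a small gating network scores every expert for each input, and only the top-k actually run. This is a minimal NumPy sketch of the general technique, not Mistral's implementation; the expert count and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D_IN, D_OUT, TOP_K = 8, 16, 16, 2  # illustrative sizes

# Each "expert" is just a small linear layer here.
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
gate = rng.normal(size=(D_IN, N_EXPERTS))     # gating network weights

def moe_forward(x):
    """Route input x through only the top-k experts by gate score."""
    scores = x @ gate                          # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over selected experts only
    # Only TOP_K of the N_EXPERTS experts do any computation for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=D_IN))
```

Here only 2 of the 8 experts compute anything per input, which is where the efficiency claim comes from: total parameters stay large, but per-query compute scales with k, not with the expert count.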
Who it’s for: organisations with strict data privacy requirements, European companies in regulated sectors, anyone seeking total control.
Qwen: the most underrated competitor
Qwen, from Alibaba Cloud, rarely features in general discussions about AI. And that is a mistake.
Its benchmarks in programming, mathematical reasoning and long-context tasks are very impressive. Qwen 2.5 and Qwen 3 offer performance comparable to much larger models, with remarkable parameter efficiency. It is trained on data covering over 119 languages and can run on relatively modest hardware.
The obstacle to its adoption in the West is not technical: it is contextual. It comes from China, from a group operating under a different regulatory environment, and this raises legitimate questions about data governance that must be assessed on a case-by-case basis.
Who it’s for: technical professionals, researchers, laboratories, and companies with the capacity to assess the regulatory framework for each variant.
Image, video and creativity: the other major family
So far we have discussed tools that generate text, search for information or automate tasks. But there is another family of AIs that is transforming visual and audiovisual creation, and which deserves its own section.
In image generation, Midjourney remains the benchmark for artistic aesthetic quality: it produces visually stunning results, although it operates via Discord and does not have such a mature enterprise API. Adobe Firefly is the commercially safest option, integrated into Creative Cloud and trained on licensed content, making it the natural choice for marketing and corporate communications departments. DALL-E, integrated within ChatGPT, is the most accessible option for those who want to generate images without leaving their usual conversation flow.
Reve Image 1.0 is the latest surprise in this space: a Californian start-up that has quickly gained a reputation for its accuracy in following the prompt and for the aesthetic quality of its results, particularly in portraits and text-in-image. It is a very serious alternative to Midjourney that many people are not yet aware of.
In video, the market remains wide open.
Runway has been around longer and has a more mature product for professional use. OpenAI’s Sora and Google’s Veo represent the big tech firms’ offerings. HeyGen has established itself in the very specific niche of dubbing and lip-syncing: it allows a speaker to speak dozens of languages using their own voice and gestures, something particularly valuable in international corporate communication.
And for generative music, Suno and Udio allow you to create complete songs from a text description, complete with instrumentation and vocals, which offers real value for creative prototypes or branded content.
A practical note: in image and video generation more than in any other field, it is advisable to check the terms of use before using the output for commercial purposes. Not all models have the same licences.
Llama: the foundation of the open ecosystem
Meta launched Llama as a family of open-source models that anyone can download, run and modify. It is not primarily intended as an end-user assistant, but as infrastructure: the foundation upon which companies, researchers and developers can build their own AI solutions without relying on external services.
This makes it different from all the others. It doesn’t compete on ‘who writes best’: it competes on ‘who gives you more control’. A company deploying Llama on its own infrastructure knows exactly where its data is and can customise the model’s behaviour for its specific use case.
Who it’s for: organisations seeking total control, researchers, and technical teams needing a solid foundation to build their own solutions without vendor lock-in.
An important note that applies to all: invisible bias
There is a key concept that needs to be explained clearly before we continue, as it affects all the tools we have looked at.
An AI does not form opinions. Nor does it respond from a position of absolute neutrality. It responds based on the vast volumes of information it was trained on. And that data contains patterns: dominant language, dominant culture, the frequency of certain ideas, and prevailing argumentative styles.
The practical result is that most of these tools still perform better in English than in other languages. And more importantly, they reason from a logic heavily influenced by the Anglo-Saxon cultural context: tone, nuance, humour, examples, contextual sensitivity. For someone working in Spain or Latin America, that matters. An AI can produce a technically correct text that sounds completely out of place to its actual audience.
Bias also appears in more subtle ways: the examples it chooses, the implicit associations it makes, how it summarises a situation with multiple perspectives. That is why the most important rule is not to choose the right tool, but to review what it produces. Especially when that output is going to reach real people.
Managing the use of AI effectively is not just about choosing the right tool. It is about understanding which tasks each tool handles with the least risk, and then keeping a close eye on the result.