What's in a Name?

6/13/2025 • 5 minute read

Author: Atanas Tonchev

What's in a name? That which we call a rose by any other name would smell as sweet.

This quote from Shakespeare's Romeo and Juliet reminds us that a name is empty: things smell, feel, and look the way they do no matter what we call them. So it shouldn't matter whether we call a transformer an Artificial Intelligence or a Large Language Model, right? The math is the same.

Still, a name changes how something is perceived. It can clarify, but it can also mystify. And in the case of modern transformer-based models (such as GPT, Claude, Gemini, etc.), the two names they go by carry different meanings.

AI: Hype, Scare, and Market Share

Some four years ago, AI was a term confined to game development and science fiction. I still remember my first prompts in GPT-3's developer playground, just a month or two before it was released as ChatGPT. It felt like something out of an Asimov novel. My first reaction was to frantically start thinking of ways to examine my students in the presence of generative models. The mixture of fear and excitement these advanced models stirred in each of us when we first "communicated" with them is best captured by the term Artificial Intelligence: a machine capable of being a partner to human intelligence.

And this feeling has been well exploited: OpenAI has been receiving record funding since 2023, while the IT industry has been laying off its junior developers en masse. After all, if something can articulate itself as well as a human, surely it must be our equal in more trivial tasks, right?

...right?

Nomen Omen, or Why Shakespeare Is Wrong

A name is a sign, or so the Romans said. I would say it is even more than a sign: it is a relationship. The way I name something shapes the way I will relate to it. So, if I name it "intelligence", I will expect something similar to myself: something with its own opinions, biases, and strengths. Hence the phrase I have heard so many times: "I asked *insert LLM* what it thinks about *topic*..." And this is a problem. ChatGPT generates words, not opinions. If we keep forcing LLMs into something they are not, we will keep getting disappointed.

What Now? It's a Map, Not a Brain

Contrary to its marketing, an LLM does not function like a brain. A closer analogy is a map: an LLM is a massive, multi-dimensional map of language itself, or rather of "languageness" (in the sense of the French 'langage'). Yes, it can use this map to give you the next likely word. But it can also do so much more. Naming LLMs for what they are (language models, not intelligences) will let us develop and use them as what they are, not as what we would like to imagine.
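
To make the "map" concrete, here is a minimal sketch of how a language model answers: it does not form an opinion, it reads off a probability distribution over possible next words. The sketch assumes the small open GPT-2 model via Hugging Face's transformers library and PyTorch (my choice purely for illustration, not how any commercial model is served), but the principle is the same for its larger cousins.

```python
# A minimal sketch: ask a small open model (GPT-2, chosen purely for
# illustration) what the most likely next words are. The model returns
# a probability map over its vocabulary, nothing more.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "A rose by any other name would smell as"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Probabilities for the very next token, i.e. one point on the "map".
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10} {prob.item():.3f}")
```

Whatever comes out (hopefully "sweet" near the top) is not a belief about roses; it is a location on the map of how English speakers tend to continue that sentence.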
