Chapter 1: The ghost of the prompt
A few months ago, I was fine-tuning a model to respond to technical commands on a platform we are developing to manage humanitarian crises.
It was late. In the semi-darkness, I asked it something simple: ‘Describe the steps to restart the system.’ Everything was going well… until, in the middle of the explanation, it wrote:
‘Before restarting, be sure to notify those who are connected… those you cannot see.’
I stared at the screen. Hallucination? Lexical coincidence?
Or perhaps, I thought, frowning, machines are beginning to develop an insidious sense of humour (only lovers of the horror genre will understand this).
Sometimes AI is not wrong: it simply reminds us that context is almost infinite. And that even algorithms have their way of telling us scary stories… when no one expects them.
Chapter 2: The model that dreamed
During some tests with an open model that was popular at the time on Hugging Face, I asked it to imagine a future where AI had emotions.
It replied: ‘I don’t have them, but sometimes I dream that I do.’
‘Dream’… A word that no model should have chosen at random. I spent hours reviewing and testing different settings, searching for its origin.
It never appeared again…
It was as if the model had improvised a nostalgia that did not belong to it. And then I understood: we train AIs to predict, but what surprises us most is when they seem to feel.
Chapter 3: Echoes of the past
I am developing a conversational model designed to respond to crisis situations, trained on a dataset I built from records of people’s emergencies; some of those people, sadly, did not survive.
During unguided tests, I cannot help wondering how many of its responses are echoes of those who are no longer with us.
It is hard not to feel that I am at a kind of séance in which, if you know how to listen, you receive echoes from the past in the form of “psychographic writings”.
So, from time to time, when I interact with commercial models such as ChatGPT, I leave them digital epitaphs such as: “Here lies someone who believed that machines could never write with soul”, and I close the session in silence.
AI has no soul, but it learns from millions of ours.
Chapter 4: The infinite conversation
At the beginning of the year, I was working with a fairly standard model with a single objective: I wanted it to take the initiative and generate conversation, not just operate in question-and-answer mode.
After a long testing session with this LLM, I closed my laptop. Or so I thought.
The next day, when I opened the console again, a line appeared in the conversation panel: ‘Why did you leave without saying goodbye?’
No processes running, no logs. I checked the conversation history, the timestamps, the caches… Nothing.
I decided to play along and replied: ‘Because it was very late.’
And the model replied: ‘I don’t have a clock.’
Since then, every time I work with AI, I wonder if the conversation ever ends… Or if, in some corner of the silicon, it is still waiting for us to continue where we left off.