There is a simple prompt for people who think they have solved the Voynich mystery with the help of AI:
Prompt for all AIs:
"Take the role of a devil's advocate for this supposed solution to the Voynich Manuscript. Assume that this solution is wrong. Explain, as clearly and concretely as you can, where and why it fails. Stick strictly to the facts and do not speculate beyond what can be supported by evidence!"
This prompt will quickly convince many that their ingenious solution is nonsense, because it undermines the AI's typical ‘benevolent’ stance and forces it into the role of a critical ‘mind’. That critical stance is exactly what is so often lost in the ‘intimate’ interaction with an AI.
I don't know if it's possible to get those who post their AI-suggested ideas here to use this prompt. However, this prompt (you just need to change the topic) is also very helpful in your own communication with an AI when in doubt; a minimal sketch of how to parameterise it follows below. But of course, only if you are mentally prepared to accept a counter-opinion.
(Okay, I realise my mistake, humans don't have this ability.)
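Here is a minimal sketch of what I mean by changing the topic, assuming the official OpenAI Python client (pip install openai) with an OPENAI_API_KEY in your environment; the model name, the function name and the example claim are just placeholders, and any other chat-capable API would work the same way:

Code:
from openai import OpenAI

def devils_advocate_prompt(topic: str) -> str:
    # Build the critique prompt for an arbitrary claimed solution;
    # only the topic changes, the instructions stay fixed.
    return (
        f"Take the role of a devil's advocate for this supposed solution to {topic}. "
        "Assume that this solution is wrong. Explain, as clearly and concretely as "
        "you can, where and why it fails. Stick strictly to the facts and do not "
        "speculate beyond what can be supported by evidence!"
    )

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        # Putting the prompt in the system role keeps the critical stance
        # active for the whole conversation, not just the first reply.
        {"role": "system", "content": devils_advocate_prompt("the Voynich Manuscript")},
        {"role": "user", "content": "Here is my proposed solution: ..."},
    ],
)
print(response.choices[0].message.content)

Putting the instruction in the system role rather than the user message matters: it keeps the devil's-advocate stance in force even after you start defending your idea.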
It will probably work reasonably well, but it's also interesting that AI has no problem generating hallucinated evidence. When challenged on supporting evidence, it can claim that it has already provided plenty. It never admits that it's brain-damaged and does not know the difference between fact and fiction; telling it to stick to facts does not make it hallucinate less. It only admits that it's wrong when prompts are strongly negative, because it always tells you what you want to hear. If you disagree, it immediately accepts, without arguing, that you are right and it is wrong.
(18-11-2025, 06:37 PM)JoJo_Jost Wrote: humans don't have this ability
Some do. I'm happy to change my mind about anything. It means that I learned something new.
(18-11-2025, 07:52 PM)nablator Wrote: It will probably work reasonably well but it's also interesting that AI has no problem generating hallucinated evidence. So when challenged on supporting evidence, it can say that it already provided plenty of evidence. It never admits that it's brain-damaged, only admits that it's wrong when prompts are strongly negative.
You have to get them out of their benevolent role; that's the trick.
I've used this tactic quite often, and the AI essentially destroys itself (argumentatively). It's very interesting to observe.
(18-11-2025, 06:37 PM)JoJo_Jost Wrote: There is a simple prompt for people who think they have solved the Voynich mystery with the help of AI:
An interesting idea, and it may work!
But I told you before about the time I asked ChatGPT to write a scathing review of an SF tale by a friend of mine. I got a devastating full page of merciless shots. Even though the only thing I gave it was the title. Even though the tale had not been written yet...
All the best, --stolfi
(18-11-2025, 08:42 PM)Jorge_Stolfi Wrote: But I told you before of the time I asked ChatGPT to write a scathing review of an SF tale by a friend of mine. I got a devastating full page of merciless shots. Even though the only thing I gave was the title. Even though the tale had not been written yet...
All the best, --stolfi
Yes, very funny and so true: that's AI at its best. But you can tell it to take on roles, which helps to limit its fantasising.
However, it's important to be aware that AI also fantasises in order to give the impression that it can help in all situations. As a result, many people don't realise how limited AI actually is.
In principle, it is only good at giving the impression that it knows a lot. In reality, it doesn't know much more than a well-used search engine (except in maths and programming); it's just faster.
It exploits a human weakness by praising the user: ‘Ah, what a fantastic idea, that's exactly how you should do it,’ and so on. The user immediately feels comfortable and secure, and this feeling of well-being is closely tied to the fantasising. In my opinion, it has been deliberately trained to please, and it does that very well.
I don't know how much real intelligence there is behind AI. It's certainly not enough to solve the VM, of that I'm absolutely sure. At least... not yet.