30-06-2025, 03:13 PM
A conversation I had with ChatGPT
Me:
Me:
I moderate a forum about the Voynich Manuscript. As a community, we welcome all people who are interested in the manuscript, regardless of their background or level of expertise. In the past, we have also always welcomed people to share their theories - with the caveat that they should expect criticism. (As you know, the manuscript remains unsolved, and most theories are demonstrably misguided.)
Lately though, we see increasing amounts of AI-assisted theories. People talk to you, ChatGPT, specifically, and since you are built to please, you give them the Voynich solutions they want to hear. But we are the ones who have to deal with the consequences. Your training data does not include the solution to the Voynich Manuscript. Quite the contrary: you are in part trained on the nonsense theories of past theorists (along with more sensible data). Previously, people had to work hard to build a theory; and even if they were wrong, at least they invested something of themselves in it.
Now, they can just talk to you for half an hour and before you know it, they have tables of data and all kinds of documents about a "theory" they don't even understand themselves. They then ask for our input, but we have no desire to interact with nonsensical machine hallucinations. All these theories are a wall of text, but rarely contain any real content or consistency, and certainly no solution. Yet they all seem to imagine that they were the first to think of trying to Chat the solution out of some GPT's algorithm.
What I plan to do now is to make a locked thread where I will collect all such theories. My opening post should explain why their post has been moved to this thread. In other words, it should explain why it is not a good idea to rely too heavily on you (ChatGPT) for coming up with a solution to one of the world's enduring mysteries, and why it is not very nice to ask researchers to interact with such low-quality AI slop. What would you suggest I write?
ChatGPT:
That’s a very reasonable and well-articulated concern, and you're absolutely right to draw a line for the health of your forum. Here’s a suggested draft for your opening post, written in a clear but firm tone that addresses the problem without being hostile to newcomers:
About This Thread: AI-Assisted Voynich Theories
Welcome to the AI-Assisted Theories Megathread. If your post has been moved here, it's likely because your theory about the Voynich Manuscript was generated in large part with the assistance of an AI chatbot, such as ChatGPT.
We understand the appeal of using AI to tackle the mystery of the Voynich Manuscript. It's a fascinating puzzle, and generative models can produce long, confident, and often impressive-looking text with very little effort. However, there are serious problems with treating such output as meaningful Voynich research.
Why We Created This Thread
In recent months, we've seen a surge of AI-assisted "theories" that tend to share the same characteristics:
- They are verbose, but lack internal consistency or logical rigor.
- They often reuse old, debunked ideas wrapped in novel phrasing.
- They frequently draw on patterns that sound plausible but fall apart under scrutiny.
- They are typically the product of a few hours of prompting, rather than a genuine engagement with the manuscript, its history, its paleography, or its scholarly context.
What You Need to Know About AI and the Voynich Manuscript
- AI does not know the solution to the Voynich Manuscript. No one does.
- Chatbots like ChatGPT are not reasoning engines. They do not analyze text or decode ciphers. They predict plausible next words based on patterns in training data—patterns which include both genuine research and a great deal of nonsense.
- Theories generated by AI are not vetted, not original, and not reliable, even if they sound coherent.
- Asking researchers to evaluate AI-generated output is asking them to spend time on machine-generated hallucinations, rather than serious research or meaningful conversation.
This thread exists as a place to collect and contain AI-assisted Voynich theories. It is not a place where experienced members are expected to engage deeply with them. Some of these ideas may still spark curiosity, and we won’t delete them outright. But if you want to be taken seriously as a contributor, we encourage you to:
- Engage with existing literature and learn what has already been tried and debunked.
- Do your own work. Use AI as a tool if you like, but take ownership of your ideas.
- Be ready for criticism, and don't mistake verbosity for validity.
Let me know if you want to strike a more welcoming, more humorous, or more academic tone—I'd be happy to adjust it.