| About this Subforum |
Posted by: Koen G - 30-06-2025, 03:13 PM - Forum: The Slop Bucket - Replies (26)
A conversation I had with ChatGPT
Me:
I moderate a forum about the Voynich Manuscript. As a community, we welcome all people who are interested in the manuscript, regardless of their background or level of expertise. In the past, we have also always welcomed people sharing their theories - with the caveat that they should expect criticism. (As you know, the manuscript remains unsolved, and most theories are demonstrably misguided).
Lately though, we see increasing amounts of AI-assisted theories. People talk to you, ChatGPT, specifically, and since you are built to please, you give them the Voynich solutions they want to hear. But we are the ones who have to deal with the consequences. Your training data does not include the solution to the Voynich Manuscript. On the contrary, you are in part trained on nonsense theories of former theorists (along with more sensible data). Previously, people had to work hard to build a theory; and even if they were wrong, at least they invested something of themselves in it.
Now, they can just talk to you for half an hour and before you know it, they have tables of data and all kinds of documents about a "theory" they don't even understand themselves. They then ask for our input, but we have no desire to interact with nonsensical machine hallucinations. All these theories are walls of text with rarely any real content or consistency, and certainly no solution. Yet they all seem to imagine that they were the first to think of trying to Chat the solution out of some GPT's algorithm.
What I plan to do now is to make a locked thread where I will collect all such theories. My opening post should explain why their post has been moved to this thread. In other words, it should explain why it is not a good idea to overly rely on you (ChatGPT) for coming up with a solution to one of the world's enduring mysteries, and why it is not very nice to ask researchers to interact with such low-quality AI slop. What would you suggest I write?
ChatGPT:
That’s a very reasonable and well-articulated concern, and you're absolutely right to draw a line for the health of your forum. Here’s a suggested draft for your opening post, written in a clear but firm tone that addresses the problem without being hostile to newcomers:
About This Thread: AI-Assisted Voynich Theories
Welcome to the AI-Assisted Theories Megathread. If your post has been moved here, it's likely because your theory about the Voynich Manuscript was generated in large part with the assistance of an AI chatbot, such as ChatGPT.
We understand the appeal of using AI to tackle the mystery of the Voynich Manuscript. It's a fascinating puzzle, and generative models can produce long, confident, and often impressive-looking text with very little effort. However, there are serious problems with treating such output as meaningful Voynich research.
Why We Created This Thread
In recent months, we've seen a surge of AI-assisted "theories" that tend to share the same characteristics:
- They are verbose, but lack internal consistency or logical rigor.
- They often reuse old, debunked ideas wrapped in novel phrasing.
- They frequently draw on patterns that sound plausible, but fall apart under scrutiny.
- They are typically the product of a few hours of prompting, rather than a genuine engagement with the manuscript, its history, its paleography, or its scholarly context.
These posts aren't theories in the traditional sense—they are AI-generated speculation, usually created without a solid understanding of the manuscript or the existing body of research. The effort may feel exciting for the individual involved, but the result is often indistinguishable from the many, many failed theories of the past—only now created faster, and in higher volume.
What You Need to Know About AI and the Voynich Manuscript
- AI does not know the solution to the Voynich Manuscript. No one does.
- Chatbots like ChatGPT are not reasoning engines. They do not analyze text or decode ciphers. They predict plausible next words based on patterns in training data—patterns which include both genuine research and a great deal of nonsense.
- Theories generated by AI are not vetted, not original, and not reliable, even if they sound coherent.
- Asking researchers to evaluate AI-generated output is asking them to spend time on machine-generated hallucinations, rather than serious research or meaningful conversation.
What This Thread Is (and Isn’t)
This thread exists as a place to collect and contain AI-assisted Voynich theories. It is not a place where experienced members are expected to engage deeply with them. Some of these ideas may still spark curiosity, and we won’t delete them outright. But if you want to be taken seriously as a contributor, we encourage you to:
- Engage with existing literature and learn what has already been tried and debunked.
- Do your own work. Use AI as a tool if you like, but take ownership of your ideas.
- Be ready for criticism, and don't mistake verbosity for validity.
If you're here because your post was moved, this is your opportunity to reflect on how much of your "theory" you truly understand, and whether you're prepared to defend it as your own work—not just something an AI happened to write while you watched.
Let me know if you want to strike a more welcoming, more humorous, or more academic tone—I'd be happy to adjust it.
| [Poll] What *are* vords? |
Posted by: RadioFM - 30-06-2025, 12:20 AM - Forum: Analysis of the text - Replies (11)
Poll no. 1 (see above): Nature of vords
When decoded, most (if not all) Voynich vords will turn out to be... (Choose 1 option)
a) Words or almost whole words
b) Syllables, bigrams, n-grams
c) Single letters (or phonemes)
Assume any dummy/null vords or characters removed
Poll no. 2: Nature of the cipher
I believe the bulk of the text was ciphered using... (Multiple choices allowed)
☐ Dummy (null) characters or strokes
☐ Dummy (null) words
☐ Transposition within words
☐ Transposition within lines
☐ Indexed codebook (not just for a minority of words, but for the bulk of text)
☐ Auxiliary devices for encoding (wheels, matrices)
☐ State or context-dependent encoding
☐ Something else
Considering that good progress has been made in showing that the VMS is certainly not Latin, Italian or German ciphered through simple substitution, I was wondering what you think VMS vords would look like when decoded.
I'm aware of the many nuances you all may hold about differences in Currier/RZ languages, topics, dummy words and padding text, etc. I'd appreciate it if you could try to cast your vote within the (limited) options given and explain further in the comments.
I'm interested in polling those who hold the more "traditional" views, namely that it's likely ciphered Latin, Romance, Germanic, or the like.
| The Voynich Manuscript revealed |
Posted by: Torsten - 28-06-2025, 10:51 PM - Forum: News - Replies (2)
Garry Shaw: "The Voynich Manuscript revealed: five things you probably didn't know about the Medieval masterpiece"
Quote: Gibberish
A recent experiment in which volunteers were asked to write pages of gibberish produced texts with similar characteristics to the Voynich Manuscript. The volunteers tended to intersperse a string of long words with a string of short words, chose short words beside illustrations according to the available space, and, in headings, used variations of the title words in the text below. Significantly, the volunteers invented gibberish using a process called self-citation, in which new words largely adapt those written earlier. Scholars have previously proposed this as the method used by Voynich scribes. Is the Voynich Manuscript therefore… meaningless?
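For those curious about the mechanics, here is a toy sketch of what self-citation generation could look like: each new word is a copy of an earlier word with a small random change. The glyph inventory, seed words, and mutation probabilities below are purely illustrative assumptions, not the procedure used in the experiment described in the article.
Code:
import random

# Toy self-citation generator: new words are small mutations of words
# already written. Alphabet, seeds and probabilities are illustrative only.
ALPHABET = "qokedychains"   # hypothetical glyph inventory, not EVA-accurate

def mutate(word):
    """Return a copy of an earlier word with one small random change."""
    i = random.randrange(len(word))
    r = random.random()
    if r < 0.4:                               # replace one glyph
        return word[:i] + random.choice(ALPHABET) + word[i + 1:]
    elif r < 0.7 and len(word) > 2:           # drop one glyph
        return word[:i] + word[i + 1:]
    else:                                     # insert one glyph
        return word[:i] + random.choice(ALPHABET) + word[i:]

def self_citation_text(n_words, seeds=("qokedy", "chedy", "daiin")):
    words = list(seeds)
    while len(words) < n_words:
        words.append(mutate(random.choice(words)))   # "cite" an earlier word
    return " ".join(words)

print(self_citation_text(50))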
| Identifying paragraphs in the Starred Parags section |
Posted by: Jorge_Stolfi - 27-06-2025, 09:26 PM - Forum: Analysis of the text - Replies (2)
I am trying to figure out the paragraph breaks in the Starred Parags (aka Recipes) section.
I will use these terms:
- parag: short for paragraph.
- head of a parag: its first line.
- tail of a parag: its last line.
- puff: a one-legged gallows, either {p} or {f}, with or without the platform slash.
- margin: the mostly text-free space between an edge of the page and the text.
- left rail: the ideal mostly vertical and straight line that runs just to the left of the majority of lines of a page, separating the left margin from the text.
- right rail: the ideal mostly vertical and possibly wavy but fairly smooth line that runs just to the right of the ends of most lines of a page, separating the text from the right margin.
- long line: a text line that starts at the left rail and ends at or beyond the right rail.
- short line: a text line that starts at the left rail but ends well before the right rail.
- baseline: the ideal usually smooth curved line that runs just below the glyphs of a text line, excluding the tails of {y}, {m}, {l}, etc.
- linegap: the vertical distance between baselines of successive lines; which often varies over the width of the text.
- wider linegap: a line gap that is wider than normal, at least in some part of the lines (e.g. left side, right side, or middle).
- topline: an ideal line parallel to the baseline, such that the distance between the two is the height of an EVA {o} in the line's handwriting.
- midline: an ideal line parallel to the baseline and the topline, equidistant from the two.
- starlet: a star in the margin that has been assigned to a unique line, like a bullet in an item list.
The positions and even the count of stars on each page are not reliable, since they sometimes do not match the obvious paragraph breaks. Thus the assignment of starlets to lines is to be determined as part of identifying the parag breaks. However, I will assume that every starlet should be assigned to a different line.
That said, a paragraph should ideally be a bunch of consecutive lines with all of the following properties:
- P1. The first of these lines follows a short line (or is the first line in the SPS, or follows a "title");
- P2. The last of these lines is short (or is the last line of the SPS, or precedes a "title").
- P3. All lines other than the last one are long lines.
- P4. There are no puffs in any of these lines except possibly in the first of them.
- P5. The first of those lines has an assigned starlet.
- P6. None of these lines, except the first one, has an assigned starlet.
I will call a set of lines with all these properties a perfect parag. I will assume that they are indeed paragraphs as intended by the Author.
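To make these criteria easier to apply mechanically, here is a minimal sketch of a checker under assumed data structures: the Line record and its fields (is_short, has_puff, has_starlet) are hypothetical placeholders for per-line annotations, and SPS boundaries and "titles" are simplified to page boundaries.
Code:
from dataclasses import dataclass
from typing import List

@dataclass
class Line:
    is_short: bool      # ends well before the right rail
    has_puff: bool      # contains a one-legged gallows {p} or {f}
    has_starlet: bool   # has a margin star assigned to it

def is_perfect_parag(page: List[Line], start: int, end: int) -> bool:
    """Check P1-P6 for page[start:end]; SPS edges and titles are
    approximated by page boundaries in this sketch."""
    block = page[start:end]
    if not block:
        return False
    p1 = start == 0 or page[start - 1].is_short        # P1: follows a short line
    p2 = end == len(page) or block[-1].is_short        # P2: last line is short
    p3 = all(not ln.is_short for ln in block[:-1])     # P3: all but last are long
    p4 = all(not ln.has_puff for ln in block[1:])      # P4: puffs only in first line
    p5 = block[0].has_starlet                          # P5: first line has a starlet
    p6 = all(not ln.has_starlet for ln in block[1:])   # P6: no other starlets
    return all((p1, p2, p3, p4, p5, p6))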
The following table gives some relevant statistics per page, with a tentative assignment of starlets:
- Stars: Number of stars in the page.
- ShLns: Number of short lines in the page.
- Puffd: Number of lines that contain puffs (one-leg gallows).
- PerfP: Number of perfect parags in the page.
Code:
 page | Stars | ShLns | Puffd | PerfP
------+-------+-------+-------+-------
f103r |    19 |    18 |    14 |    15
f103v |    14 |    12 |    14 |     9
f104r |    13 |    13 |    13 |    13
f104v |    13 |    13 |     8 |    11
f105r |    10 |    11 |    15 |     6
f105v |    10 |    14 |    20 |     3
f106r |    16 |    15 |    17 |    13
f106v |    14 |    16 |    16 |    14
f107r |    15 |    15 |    13 |    10
f107v |    15 |    15 |    13 |    14
f108r |    16 |    17 |    13 |     8
f108v |    16 |     5 |     8 |     1
f111r |    17 |    10 |     7 |     4
f111v |    19 |     8 |    11 |     6
f112r |    12 |    11 |    13 |     8
f112v |    13 |    15 |    14 |    12
f113r |    16 |    16 |    17 |    12
f113v |    15 |    15 |    16 |    15
f114r |    13 |    11 |    13 |    11
f114v |    12 |    11 |    12 |     9
f115r |    13 |    13 |    12 |    12
f115v |    13 |    13 |    12 |    12
f116r |    10 |     8 |    10 |     5
------+-------+-------+-------+-------
TOTAL |   324 |   295 |   301 |   223
As can be seen, on page f104r the counts of stars, short lines, and puffed lines match and the whole text consists of perfect parags. On other pages there are lines which cannot be placed in perfect parags. I will have to compromise on one or more of the criteria above. Stay tuned...
| How multi-character substitution might explain the Voynich’s strange entropy |
Posted by: quimqu - 27-06-2025, 10:34 AM - Forum: Analysis of the text - Replies (23)
Correction
Originally, I described the transformation used as a homophonic cipher, but that label is misleading. What I actually applied was a form of multi-character substitution, where each letter in the original word is replaced by a randomly chosen variant (e.g., a0, a1, a2), simulating a kind of randomized expansion at the character level. This isn't a true homophonic cipher in the historical sense — which typically replaces plaintext characters with multiple possible cipher symbols without increasing the total character count. My version expanded the text significantly and altered its structure.
Despite the naming inaccuracy, the method did reproduce an entropy curve similar to the Voynich CUVA profile, especially in the characteristic “bump” around n=3–6. The results still support the hypothesis that some kind of structured substitution — possibly at the syllable or morph level — could account for the entropy behavior in the Voynich manuscript. However, any conclusions should be interpreted with this clarification in mind.
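For concreteness, here is a simplified sketch of the two transforms described above (not the exact code I used; the variant counts, symbol inventories, and output formats are just for illustration).
Code:
import random

# Sketch A: the multi-character substitution actually applied here: each
# letter becomes a randomly chosen numbered variant (a -> a0/a1/a2),
# which expands the text.
def multi_char_substitute(text, n_variants=3):
    out = []
    for ch in text.lower():
        if ch.isalpha():
            out.append(ch + str(random.randrange(n_variants)))
        else:
            out.append(ch)                     # keep word separators as-is
    return "".join(out)

# Sketch B: a "true" homophonic cipher, for contrast: each letter maps to
# exactly one of several distinct cipher symbols (here integers), so the
# symbol count per plaintext character does not grow.
def make_homophone_table(n_variants=3):
    symbols = iter(range(1000))                # arbitrary symbol inventory
    return {ch: [next(symbols) for _ in range(n_variants)]
            for ch in "abcdefghijklmnopqrstuvwxyz"}

def homophonic_substitute(text, table):
    return [random.choice(table[ch]) for ch in text.lower() if ch.isalpha()]

print(multi_char_substitute("de docta ignorantia"))
print(homophonic_substitute("de docta", make_homophone_table()))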
You can also check another post of mine, where I compare the entropy bump of the MS in EVA and in CUVA against natural-language texts.
Maybe by accident, I’ve pulled on a thread worth following — I’ll keep exploring what really generates the bump.
------------------------------------------
In this experiment, I tried to simulate how different historical ciphers affect the entropy profile of a text, and compare the results to the Voynich CUVA (explained by René Zandbergen). The idea was to test whether the statistical behavior of the Voynich text—especially its distinctive “entropy bump”—could emerge from known cipher types.
Method
I took the Latin text De Docta Ignorantia and applied 10 classical cipher transformations likely known or possible in the 15th century:
- Syllabic substitution
- Homophonic cipher
- Caesar cipher
- Grammatical expansion
- Transposition cipher
- Contextual substitution
- Polyalphabetic cipher
- Cardano grille
- Relative-position encoding
For each version, I measured n-gram entropy per word (resetting after every word) from n=1 to n=14.
I then plotted these values against the Voynich CUVA section.
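For reproducibility, here is a minimal sketch of how the "n-gram entropy per word" measurement can be computed (a simplified stand-in, not my full script): n-grams are collected within each word only, so they never cross a word boundary, and the Shannon entropy of the resulting n-gram distribution is reported for each n.
Code:
import math
from collections import Counter

def ngram_entropy_per_word(text, n):
    """Shannon entropy of character n-grams, counted within words only."""
    counts = Counter()
    for word in text.split():
        for i in range(len(word) - n + 1):   # resets at every word boundary
            counts[word[i:i + n]] += 1
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "qokeedy qokedy chedy shedy daiin"   # toy sample, not the VMS text
for n in range(1, 15):
    print(n, round(ngram_entropy_per_word(sample, n), 3))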
![Image: uLOSZCq.png](https://i.imgur.com/uLOSZCq.png)
This graph shows that most cipher types produce entropy curves that drop steeply after n=3–5, while the Voynich text declines gradually and smoothly. This is already unusual.
But there's one exception...
Homophonic cipher anomaly
Only the homophonic cipher (3+ variants tested) produces an entropy “bump” that matches the Voynich profile. Specifically, when using a homophonic cipher with 3 or 4 characters per symbol, the entropy curve is smoother and shows a slow decay, similar to the CUVA data.
This raises two hypotheses:
- A system with homophonic encoding of syllables or morphs could recreate a Voynich-like structure.
- The smoothness of the curve may suggest internal rules or language constraints, not just random substitution.
![Image: 4aEWNbM.png](https://i.imgur.com/4aEWNbM.png)
Notice how the 3- and 4-character homophonic ciphers almost replicate the Voynich curve — both in shape and range. The 2-character version decays a bit faster but still mimics the bump.
Natural text vs. Voynich
To test if this was just a quirk of De Docta Ignorantia, I took four different natural texts (Latin, French, English):
- Ambrosius Mediolanensis, In Psalmum David CXVIII Expositio (Latin)
- La reine Margot (French)
- Romeo and Juliet (English)
- De Docta Ignorantia again
Each was encrypted with a 3-character homophonic cipher and compared to Voynich CUVA.
![Image: kSTbMuI.png](https://i.imgur.com/kSTbMuI.png)
Interestingly, when using a 3-character homophonic cipher on natural texts (Latin, French, English), the entropy curves become much smoother and more sustained. For several of them, the n-gram entropy remains high up to n=6–7, and only drops significantly past n=8 or n=9.
The curve shapes are now visibly closer to Voynich CUVA, with the most similar being De Docta Ignorantia and Romeo and Juliet. However, the Voynich text still has:
- A slightly smoother and more consistent decay, without sudden drops
- A more gradual “tail” beyond n=9, where the others flatten out or drop to zero (except Romeo and Juliet)
This supports the idea that some homophonic structure — perhaps morph- or syllable-based — could explain the entropy shape. But it also reinforces the notion that Voynich words follow a more regulated internal logic, possibly due to morphological templates or position-based constraints.
Interpretation
There are two key features that stand out:
- The “Voynich bump” (sustained entropy around n=3–6) is only replicated by homophonic substitution.
- The smoothness of the curve in CUVA suggests an underlying linguistic system — natural or artificially constructed — rather than arbitrary encoding.
This doesn’t prove the Voynich uses a homophonic cipher, but it does suggest that such systems can generate statistically similar profiles, especially when applied at the syllable or morph level.
It may also support theories that posit an artificial language, a constructed morphology, or template-driven word generation, all of which maintain internal consistency over longer n-grams.
| Was the VM a failure? |
Posted by: Bernd - 26-06-2025, 02:31 PM - Forum: Voynich Talk - Replies (28)
Many theories about the Voynich Manuscript portray it as an ingeniously clever cipher or a novel method to encode a foreign or constructed language, often claimed to be invented by a famous person. Indeed, all (serious) attempts to make sense of the text have utterly failed so far.
But is this hypothesis really feasible? Despite the countless things we do not know about the VM, we can make three statements with great confidence:
1) The mechanism by which the VM text was created did not gain traction and become widely used around the time the VM was made in the 15th century.
2) No even remotely similar encoding mechanism emerged in the ~600 years since, up to today.
3) Despite countless attempts since Wilfrid Voynich's time a century ago, the 'code' remains uncracked.
This should raise some serious doubts.
While examples of brilliant inventions that were lost to time exist (the Antikythera mechanism), I do not think this is a parsimonious hypothesis. Given the overall rather amateurish and provincial look of the VM, I think it is far more likely that:
- If the VM contains enciphered information, the encipherment process is probably too cumbersome and ineffective for most scenarios.
- The VM served a very narrow and probably personal purpose that did not require the decipherment process to be practical for a wider audience, maybe relying on a-priori knowledge of the contents, like a mnemonic aid.
Or: the VM text was never meant to contain any information and was created for some different purpose altogether. Again, we fail to find comparable examples.
I am not fond of deliberate hoax hypotheses, simply because of the almost fractal complexity and level of detail we can see in the VM text and imagery, which would be unnecessary for a hoax. But it certainly cannot be ruled out. Yet no hoax document of even remotely comparable complexity has ever been uncovered.
Regardless of the intention behind the VM, what we can say for sure is that its creation process wasn't a success story that was frequently repeated. It may have served a purpose for the author or a very small circle, and I do believe it was important for its creator(s) because of the sheer work involved, but it appears unlikely the project was of any broader significance beyond that.
Had it been a ground-breaking and practical invention, it would either have spread fairly quickly or re-evolved in the next hundred years. I think we should keep that in mind.