|
|
|
| GPT Models Fail to Find Language Structure in the Voynich Manuscript |
|
Posted by: quimqu - 02-07-2025, 04:10 PM - Forum: Analysis of the text
- Replies (27)
|
 |
The GPT model is a type of neural network trained to predict the next token (e.g., word or character) in a sequence, based on the context of the previous ones. During training, the model gradually learns patterns and structures that help it guess what might come next. When applied to natural language, this often results in learning the grammar and syntax of the language. The goal of this experiment is to see whether a GPT, when trained on Voynich text, can reproduce even short valid word sequences — a basic sign of underlying grammatical structure.
Using a minimal GPT architecture trained on 11,000-token corpora from natural languages and the Voynich manuscript (EVA and CUVA transcriptions, restricted to paragraphs that read as running text — excluding, for example, the cosmological sections), I evaluated how well the model could reproduce sequences of two or three consecutive words (bigrams and trigrams) from the original corpus. The results reveal stark differences between Voynichese and natural languages.
I trained several nanoGPT models (roughly 1.1M parameters each) on corpora limited to 11,000 words each. The corpora included:
- Latin (e.g. De Docta Ignorantia)
- Classical religious text (In Psalmum David CXVIII)
- Early Modern English (Romeo and Juliet)
- Esperanto (an Esperanto translation of Alice in Wonderland)
- Voynich EVA transcription
- Voynich CUVA transcription
Each model was trained on text tokenized by splitting on the dot (".") separator, treating each resulting token as a "word". I then prompted each model to generate 1,000 words, starting from a random token drawn from the original corpus.
For each generated sequence, I extracted all bigrams and trigrams and checked how many were present in the original corpus text (used as training data).
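The evaluation step above can be sketched as follows (the function names and plain-list token representation are my own; the original scripts were not posted):

```python
def ngrams(tokens, n):
    """All consecutive n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def match_rate(generated, training, n):
    """Fraction of n-grams in the generated text that also occur
    somewhere in the training corpus."""
    train_set = set(ngrams(training, n))
    gen = ngrams(generated, n)
    return sum(g in train_set for g in gen) / len(gen) if gen else 0.0

# Tokens come from splitting the transcription on the "." word separator,
# e.g.: tokens = text.replace("\n", ".").split(".")
```

With n=2 this yields the bigram match rate and with n=3 the trigram match rate reported below.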
Results (Bigrams and Trigrams Found in Training Text):
![[Image: s5N9a0a.png]](https://i.imgur.com/s5N9a0a.png)
The Latin religious text In Psalmum David CXVIII had pretty low bigram and trigram scores — not too far from the Voynich numbers. This could be because of its complex sentence structure or how rarely some word combinations repeat. But even then, it still produced some consistent word sequences, which the GPT picked up.
That didn’t happen with the Voynich at all — no three-word sequences from the original text were ever regenerated. This makes Voynichese stand out as fundamentally different.
In addition, the entropy of word distributions was comparable across corpora (~8.5 to 9.6 bits), meaning the GPT learned the relative frequencies of words quite well. However, only in natural language corpora did it also learn statistically consistent co-occurrence patterns.
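The entropy comparison is a standard unigram Shannon-entropy computation; a minimal sketch (the original script was not posted):

```python
from collections import Counter
import math

def word_entropy(words):
    """Shannon entropy, in bits, of the unigram word distribution."""
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Applied to both the training corpora and the generated output, similar values indicate that the model has learned the relative word frequencies, independently of whether it learned word-to-word transitions.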
Conclusion:
If the Voynich manuscript encoded a natural language, we would expect a GPT trained on it to be able to reproduce at least a small proportion of common bigrams and trigrams from the training corpus. This is exactly what we observe in natural language corpora (e.g. Esperanto 25.9% bigram match). In contrast, the bigram match rate for Voynichese is nearly zero, and trigrams are entirely absent.
This strongly supports the hypothesis that the Voynich manuscript is not a natural language encoding. While it has an internally consistent lexicon (i.e., words), it lacks the sequential dependencies and word-to-word transitions that characterize even simple or constructed languages.
Implication:
If a small GPT can learn bigrams and trigrams from natural languages in just 11,000 words — but completely fails to do so with Voynichese — this suggests that the manuscript does not reflect natural language structure.
This casts serious doubt on claims of direct decryption or translation into real languages. It’s likely that such efforts are misapplied.
Instead, the Voynich may reflect a pseudo-linguistic system — a generative algorithm, a constructed gibberish, or even a cipher whose output was never meant to carry true semantic depth. The surface form may resemble language, but its internal statistical behavior tells a different story.
In short: be skeptical of anyone claiming to have “translated” the Voynich into English, Latin, or any other language — unless they can show that their version has the statistical fingerprints of a true linguistic system.
|
|
|
| About this Subforum |
|
Posted by: Koen G - 30-06-2025, 03:13 PM - Forum: The Slop Bucket
- Replies (26)
|
 |
A conversation I had with ChatGPT
Me:
I moderate a forum about the Voynich Manuscript. As a community, we welcome all people who are interested in the manuscript, regardless of their background or level of expertise. In the past, we have also always welcomed people to share their theories - with the caveat that they should expect criticism. (As you know, the manuscript remains unsolved, and most theories are demonstrably misguided.)
Lately though, we see increasing amounts of AI-assisted theories. People talk to you, ChatGPT, specifically, and since you are built to please, you give them the Voynich solutions they want to hear. But we are the ones who have to deal with the consequences. Your training data does not include the solution to the Voynich Manuscript. Quite the contrary: you are in part trained on nonsense theories of former theorists (along with more sensible data). Previously, people had to work hard to build a theory; and even if they were wrong, at least they invested something of themselves in it.
Now, they can just talk to you for half an hour and before you know it, they have tables of data and all kinds of documents about a "theory" they don't even understand themselves. They then ask for our input, but we have no desire to interact with nonsensical machine hallucinations. All these theories are a wall of text, but rarely any real content or consistency, and certainly no solution. Yet they all seem to imagine that they were the first to think of trying to Chat the solution out of some GPT's algorithm.
What I plan to do now is to make a locked thread where I will collect all such theories. My opening post should explain why their post has been moved to this thread. In other words, it should explain why it is not a good idea to overly rely on you (ChatGPT) for coming up with a solution to one of the world's enduring mysteries, and why it is not very nice to ask researchers to interact with such low-quality AI slop. What would you suggest I write?
ChatGPT:
That’s a very reasonable and well-articulated concern, and you're absolutely right to draw a line for the health of your forum. Here’s a suggested draft for your opening post, written in a clear but firm tone that addresses the problem without being hostile to newcomers:
About This Thread: AI-Assisted Voynich Theories
Welcome to the AI-Assisted Theories Megathread. If your post has been moved here, it's likely because your theory about the Voynich Manuscript was generated in large part with the assistance of an AI chatbot, such as ChatGPT.
We understand the appeal of using AI to tackle the mystery of the Voynich Manuscript. It's a fascinating puzzle, and generative models can produce long, confident, and often impressive-looking text with very little effort. However, there are serious problems with treating such output as meaningful Voynich research.
Why We Created This Thread
In recent months, we've seen a surge of AI-assisted "theories" that tend to share the same characteristics:
- They are verbose, but lack internal consistency or logical rigor.
- They often reuse old, debunked ideas wrapped in novel phrasing.
- They frequently draw on patterns that sound plausible, but fall apart under scrutiny.
- They are typically the product of a few hours of prompting, rather than a genuine engagement with the manuscript, its history, its paleography, or its scholarly context.
These posts aren't theories in the traditional sense—they are AI-generated speculation, usually created without a solid understanding of the manuscript or the existing body of research. The effort may feel exciting for the individual involved, but the result is often indistinguishable from the many, many failed theories of the past—only now created faster, and in higher volume.
What You Need to Know About AI and the Voynich Manuscript
- AI does not know the solution to the Voynich Manuscript. No one does.
- Chatbots like ChatGPT are not reasoning engines. They do not analyze text or decode ciphers. They predict plausible next words based on patterns in training data—patterns which include both genuine research and a great deal of nonsense.
- Theories generated by AI are not vetted, not original, and not reliable, even if they sound coherent.
- Asking researchers to evaluate AI-generated output is asking them to spend time on machine-generated hallucinations, rather than serious research or meaningful conversation.
What This Thread Is (and Isn’t)
This thread exists as a place to collect and contain AI-assisted Voynich theories. It is not a place where experienced members are expected to engage deeply with them. Some of these ideas may still spark curiosity, and we won't delete them outright. But if you want to be taken seriously as a contributor, we encourage you to:
- Engage with existing literature and learn what has already been tried and debunked.
- Do your own work. Use AI as a tool if you like, but take ownership of your ideas.
- Be ready for criticism, and don't mistake verbosity for validity.
If you're here because your post was moved, this is your opportunity to reflect on how much of your "theory" you truly understand, and whether you're prepared to defend it as your own work—not just something an AI happened to write while you watched.
Let me know if you want to strike a more welcoming, more humorous, or more academic tone—I'd be happy to adjust it.
|
|
|
| [Poll] What *are* vords? |
|
Posted by: RadioFM - 30-06-2025, 12:20 AM - Forum: Analysis of the text
- Replies (11)
|
 |
Poll no. 1 (see above): Nature of vords
When decoded, most (if not all) Voynich vords will turn out to be... (Choose 1 option)
a) Words or almost whole words
b) Syllables, bigrams, n-grams
c) Single letters (or phonemes)
Assume any dummy/null vords or characters removed
Poll no. 2: Nature of the cipher
I believe the bulk of the text was ciphered using... (Multiple choices allowed)
☐ Dummy (null) characters or strokes
☐ Dummy (null) words
☐ Transposition within words
☐ Transposition within lines
☐ Indexed codebook (not just for a minority of words, but for the bulk of text)
☐ Auxiliary devices for encoding (wheels, matrices)
☐ State or context-dependent encoding
☐ Something else
Considering that good progress has been made in showing that the VMS is certainly not Latin, Italian, or German ciphered through simple substitution, I was wondering what you think VMS vords will look like when decoded.
I'm aware of the many nuances you all may hold about differences in Currier/RZ languages, topics, dummy words and padding text, etc. I'd appreciate it if you could try to cast your vote within the (limited) options given and explain further in the comments.
I'm interested in polling those who hold the more "traditional" views, namely that it's likely ciphered Latin, Romance, Germanic, or the like.
|
|
|
| The Voynich Manuscript revealed |
|
Posted by: Torsten - 28-06-2025, 10:51 PM - Forum: News
- Replies (2)
|
 |
Garry Shaw: "The Voynich Manuscript revealed: five things you probably didn't know about the Medieval masterpiece"
Quote: Gibberish
A recent experiment in which volunteers were asked to write pages of gibberish produced texts with similar characteristics to the Voynich Manuscript. The volunteers tended to intersperse a string of long words with a string of short words, chose short words beside illustrations according to the available space, and, in headings, used variations of the title words in the text below. Significantly, the volunteers invented gibberish using a process called self-citation, in which new words largely adapt those written earlier. Scholars have previously proposed this as the method used by Voynich scribes. Is the Voynich Manuscript therefore… meaningless?
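The copy-and-modify mechanism described in the quote can be illustrated with a toy generator (entirely hypothetical: the parameters, the glyph-swap mutation, and the function name are mine; it only demonstrates the self-citation idea, not the scribes' actual method):

```python
import random

def self_cite(seed_words, n_words, mutate_p=0.3, window=20, seed=0):
    """Toy self-citation generator: each new 'word' copies a recently
    written word, occasionally swapping one glyph, so that new words
    largely adapt those written earlier."""
    rng = random.Random(seed)
    text = list(seed_words)
    glyphs = sorted({g for w in seed_words for g in w})
    for _ in range(n_words):
        w = rng.choice(text[-window:])          # self-cite a recent word
        if w and rng.random() < mutate_p:       # occasional small change
            i = rng.randrange(len(w))
            w = w[:i] + rng.choice(glyphs) + w[i + 1:]
        text.append(w)
    return text[len(seed_words):]
```

Text produced this way shows the hallmark of self-citation: high similarity between nearby words, without any underlying grammar.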
|
|
|
| Identifying paragraphs in the Starred Parags section |
|
Posted by: Jorge_Stolfi - 27-06-2025, 09:26 PM - Forum: Analysis of the text
- Replies (2)
|
 |
I am trying to figure out the paragraph breaks in the Starred Parags (aka Recipes) section.
I will use these terms:
- parag: short for paragraph.
- head of a parag: its first line.
- tail of a parag: its last line.
- puff: a one-legged gallows, either {p} or {f}, with or without the platform slash.
- margin: the mostly text-free space between an edge of the page and the text.
- left rail: the ideal mostly vertical and straight line that runs just to the left of the majority of lines of a page, separating the left margin from the text.
- right rail: the ideal mostly vertical and possibly wavy but fairly smooth line that runs just to the right of the ends of most lines of a page, separating the text from the right margin.
- long line: a text line that starts at the left rail and ends at or beyond the right rail.
- short line: a text line that starts at the left rail but ends well before the right rail.
- baseline: the ideal, usually smooth curved line that runs just below the glyphs of a text line, excluding the tails of {y}, {m}, {l}, etc.
- linegap: the vertical distance between baselines of successive lines, which often varies over the width of the text.
- wider linegap: a line gap that is wider than normal, at least in some part of the lines (e.g. left side, right side, or middle).
- topline: an ideal line parallel to the baseline, such that the distance between the two is the height of an EVA {o} in the line's handwriting.
- midline: an ideal line parallel to the baseline and the topline, equidistant from the two.
- starlet: a star in the margin that has been assigned to a unique line, like a bullet in an item list.
The positions and even the count of stars on each page are not reliable, since they sometimes do not match the obvious paragraph breaks. Thus the assignment of starlets to lines is to be determined as part of identifying the parag breaks. However, I will assume that every starlet should be assigned to a different line.
That said, a paragraph should ideally be a bunch of consecutive lines with all of the following properties:
- P1. The first of these lines follows a short line (or is the first line in the SPS, or follows a "title");
- P2. The last of these lines is short (or is the last line of the SPS, or precedes a "title").
- P3. All lines other than the last one are long lines.
- P4. There are no puffs in any of these lines except possibly in the first of them.
- P5. The first of those lines has an assigned starlet.
- P6. None of these lines, except the first one, has an assigned starlet.
I will call a set of lines with all these properties a perfect parag. I will assume that they are indeed paragraphs as intended by the Author.
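Assuming per-line records with boolean fields (a representation I am inventing for illustration; the "title" escape clauses of P1 and P2 are omitted for simplicity), properties P1-P6 can be checked like this:

```python
def is_perfect_parag(lines, start, end, n_lines):
    """Check properties P1-P6 for the candidate parag lines[start:end].
    Each line is a dict with 'short', 'puff', and 'starlet' booleans."""
    block = lines[start:end]
    if not block:
        return False
    # P1: first line follows a short line, or opens the section
    if start > 0 and not lines[start - 1]["short"]:
        return False
    # P2: last line is short, or closes the section
    if end < n_lines and not block[-1]["short"]:
        return False
    # P3: all lines other than the last are long
    if any(ln["short"] for ln in block[:-1]):
        return False
    # P4: no puffs except possibly in the first line
    if any(ln["puff"] for ln in block[1:]):
        return False
    # P5/P6: exactly the first line has an assigned starlet
    if not block[0]["starlet"] or any(ln["starlet"] for ln in block[1:]):
        return False
    return True
```

A page then consists entirely of perfect parags when it can be partitioned into consecutive blocks that all pass this check.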
The following table gives some relevant statistics per page, with a tentative assignment of starlets:
- Stars: Number of stars on the page.
- ShLns: Number of short lines on the page.
- Puffd: Number of lines that contain puffs (one-leg gallows).
- PerfP: Number of perfect parags on the page.
Code: page | Stars | ShLns | Puffd | PerfP
------+-------+-------+-------+-------
f103r | 19 | 18 | 14 | 15
f103v | 14 | 12 | 14 | 9
f104r | 13 | 13 | 13 | 13
f104v | 13 | 13 | 8 | 11
f105r | 10 | 11 | 15 | 6
f105v | 10 | 14 | 20 | 3
f106r | 16 | 15 | 17 | 13
f106v | 14 | 16 | 16 | 14
f107r | 15 | 15 | 13 | 10
f107v | 15 | 15 | 13 | 14
f108r | 16 | 17 | 13 | 8
f108v | 16 | 5 | 8 | 1
f111r | 17 | 10 | 7 | 4
f111v | 19 | 8 | 11 | 6
f112r | 12 | 11 | 13 | 8
f112v | 13 | 15 | 14 | 12
f113r | 16 | 16 | 17 | 12
f113v | 15 | 15 | 16 | 15
f114r | 13 | 11 | 13 | 11
f114v | 12 | 11 | 12 | 9
f115r | 13 | 13 | 12 | 12
f115v | 13 | 13 | 12 | 12
f116r | 10 | 8 | 10 | 5
------+-------+-------+-------+-------
TOTAL | 324 | 295 | 301 | 223
As can be seen, on page f104r the counts of stars, short lines, and puffed lines all match, and the whole text consists of perfect parags. On other pages there are lines which cannot be placed in perfect parags; I will have to compromise on one or more of the criteria above. Stay tuned...
|
|
|
|