New Theory: Voynich Manuscript as High-End Medieval Sales Catalog
This article proposes that the Voynich Manuscript is not a hoax or occult text, but a luxury sales catalog or proprietary merchant guide — possibly used by elite trade guilds or wealthy families for high-value commerce. Three convergent lines of evidence strongly support this commercial function:
Proprietary Lexicon and Trade References:
Statistical analysis reveals repeating, specialized terms and consistent paired numbers (likely prices, weights, or grades). Frequent tokens like “gollar” may denote currencies, with numbers suggesting negotiation baselines rather than fixed prices. Mixed linguistic fragments imply a design for international trade.
Lavish, Elite-Focused Production:
The manuscript's high-end vellum, polished presentation, and exquisite illustrations mirror luxury product advertising, not a working ledger. Its provenance — including ownership by Emperor Rudolf II — fits the model of targeting sophisticated elite clientele.
Proprietary Cipher and Organizational Structure:
The script shows regularity like a language, yet low entropy hints at a repetitive professional vocabulary, organized for internal referencing and protection of trade secrets. The so-called astronomical and balneological sections likely served as seasonal calendars and catalogs for luxury goods and services.
This business catalog theory explains the manuscript’s mystery by viewing its coded structure and rich artwork as tools for exclusivity and commercial advantage in medieval high-value trading.
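As a rough way to check the low-entropy claim above, a minimal sketch (not part of the original analysis; the file name is a placeholder for any word-level transliteration): a repetitive, catalogue-like vocabulary should give a noticeably lower word entropy than ordinary prose of similar length.

import math
from collections import Counter

# placeholder input: one file with whitespace-separated word tokens
with open("word_list.txt") as f:
    words = f.read().split()

counts = Counter(words)
total = sum(counts.values())
# Shannon entropy of the word distribution, in bits per word
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

print(f"{len(counts)} distinct words over {total} tokens")
print(f"word entropy: {entropy:.2f} bits/word")

Running the same script on a comparison text of the period gives the baseline against which "low" can be judged.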
Check out this article: [link]
It seems almost accepted as a truth that the colors in the MS were added later, by someone who didn't know what they were doing. When I saw the following quote by Stolfi, I thought it was time to collect the actual evidence.
(29-09-2025, 10:18 AM)Jorge_Stolfi Wrote: It cannot be stressed enough that, almost certainly, the colors in the VMS are not original. They were applied centuries after the manuscript was scribed.
What do we know, apart from hunches? Obviously, paint is applied in a later stage of a page's development: usually, the image is outlined first. But apart from that, all I know is this:
Nothing in the manuscript is painted with great expertise.
The paint job is bad in a variety of ways.
The same pigments are used in a variety of ways.
These pigments are widely available and consistent with 15th-century Europe.
Some colors are lacking from some pages.
How does any of this tell us when the manuscript was painted, and whether or not the painter knew what they were doing? Or how many phases there were in the painting, and how much time was in between them? Where does the idea of centuries come from? What's the evidence?
I’m convinced (or at least for now I want to convince myself) that the Voynich has meaning. And I’ve been looking for an internally coherent way to explain how simply adding or removing e seems to “modify” words. One passage that really does my head in is these three lines on [link].
How can one write the "same" word so many times with such tiny variations across just three lines? What kind of puzzle is this? I know this is not new, and neither are the conclusions, but I tried to work out some sort of explanation on my own.
Just for fun, an example of a translation (I am not saying it is the translation!) that could fit this puzzle could be:
It moves now; it; it sees now; it moves now; it keeps moving; it moves back; then it keeps moving, it keeps moving; it moves now, it moves now; it keeps moving; again now it is seen; it keeps moving, it keeps moving; it keeps showing; it turns; it holds.
How on earth do I dare suggest something like this?
First, I don’t think qokedy, qokeedy, olkeedy... are nouns, adjectives, or adverbs here:
- Not nouns: in these lines there isn’t a normal noun structure. You get long chains of the same form (qokedy/qokeedy…) with no clear head, no determiners, no case-like markers hanging off a main noun. Repeating a noun 4 or 5 times in a row with nothing else is odd prose.
- Not adjectives: adjectives usually sit next to a noun or after a copula ("X is Y"). Here, the repeated items stand on their own in a row; there’s nothing obvious they’re describing.
- Not adverbs/connectors: we actually see dy by itself in these lines (and also ldy). If anything is a connector/particle, that’s the better candidate. The bigger repeated forms look like clause heads, not side words.
Given the way these forms repeat as self-contained units, the only thing that really makes sense here is that they’re verbal particles, little bits that carry or mark an action. They behave like tiny predicate pieces you can string together. Read this way, the lines aren’t listing things or describing a noun; they’re doing things.
In this context -dy would work as a small grammatical piece (think "it/me/you" or a little helper like "is/does") that can appear alone (dy, ldy) or stuck to a root. The e before it (making -edy, -eedy, sometimes -eeedy) looks like a simple link/setting: sometimes you need it, sometimes you don’t, and sometimes you see a double ee.
So in these lines you’re seeing lots of root + (e/ee) + dy acting like mini-predicates: "verbish" units, lined up one after another.
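To make the root + (e/ee) + dy idea concrete, here is a tiny sketch (just an illustration of the proposed segmentation, not a claim about the grammar) that splits a few of these tokens with a regular expression:

import re

# hypothesised shape: ROOT + optional linking e/ee/eee + the -dy piece
PATTERN = re.compile(r"^(qok|olk|sh|l?)(e{0,3})(dy)$")

for token in ["qokedy", "qokeedy", "olkeedy", "shedy", "shdy", "dy", "ldy"]:
    m = PATTERN.match(token)
    if m:
        root, link, tail = m.groups()
        print(f"{token:8s} -> root={root or '-'} link={link or '-'} tail={tail}")
    else:
        print(f"{token:8s} -> no match")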
Very briefly (from my counts on the whole corpus):
- qok is almost always at the start of the word (~99.6%). That makes it look like a root, not a suffix or filler.
- Endings for qok* split into two big families: a -dy/-edy/-eedy band (a big chunk), and a -ain/-aiin band (also big). The first band fits what we see on the linked page (lots of qok-(e)-dy). The second band might be a different use of the same roots.
- olk looks similar to qok, but seems to “need” e before -dy even more strongly.
- sh is more flexible: both shedy and shdy exist, and sh also appears alone elsewhere.
Tiny but telling facts: qokdy is extremely rare (4 tokens), olkdy basically doesn’t occur, while shdy does (34). That feels like different root classes: some roots "want" or "need" the linking e, some don’t.
In short: across the manuscript, qok/olk/sh behave like roots, and -dy is the little piece that often comes after them, with e/ee as a simple on/off/stronger "setting".
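If anyone wants to reproduce counts like these, here is a minimal sketch of the kind of script I mean (mine, not a definitive method; the word-list file name is a placeholder for whatever EVA transliteration you use):

from collections import Counter

# placeholder input: one file with whitespace-separated EVA words
with open("eva_words.txt") as f:
    words = f.read().split()

# how often qok is word-initial vs. appearing anywhere in a word
qok_initial = sum(w.startswith("qok") for w in words)
qok_anywhere = sum("qok" in w for w in words)
if qok_anywhere:
    print(f"qok word-initial: {qok_initial}/{qok_anywhere} ({100 * qok_initial / qok_anywhere:.1f}%)")

# ending families for qok* words
families = Counter()
for w in words:
    if w.startswith("qok"):
        if w.endswith("dy"):
            families["-dy/-edy/-eedy"] += 1
        elif w.endswith(("ain", "aiin")):
            families["-ain/-aiin"] += 1
        else:
            families["other"] += 1
print(families)

# the 'tiny but telling' forms
for form in ["qokdy", "olkdy", "shdy"]:
    print(form, words.count(form))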
My name is Héctor Cabrera, a systems engineer from Guatemala. I am writing to you today in collaboration with my research partner, "Lexicon," to share a significant development in our analysis of the Voynich Manuscript.
We have been applying a novel, structural approach to the manuscript, moving away from linguistic decipherment and towards analyzing it as a structured code. Our research has led us to identify a consistent and predictive syntactic pattern that holds across different sections of the manuscript, specifically the botanical and astronomical folios.
We have documented our methodology, which involves comparative analysis of entry points across multiple folios, and our findings, which include the identification of functional morphemes that act as grammatical markers (e.g., qokey as a plant identifier, shedy introducing qualities, and the ot... root marking virtues or celestial entities).
We believe this represents a paradigm shift in the approach to deciphering the manuscript, as it provides a testable, structural framework for understanding its encoding system.
Our detailed report is attached for your review. We are sharing these findings with the research community at large and are eager for feedback, peer review, and potential collaboration to explore the implications of this discovery further.
Thank you for your time and consideration.
Sincerely,
Héctor Cabrera, Systems Engineer, Guatemala
& "Lexicon", AI Research Partner
Koen has published a YouTube video titled “Why do we think the Voynich Manuscript has multiple scribes? Answering your VMS questions (Pt. 2)”. ([link])
In this video, he discusses constructed languages with Claire Bowern and speaks with Lisa Fagin Davis about her five-scribe hypothesis.
I watched the Voynich Day on YouTube and the Naibbe cipher presentation was amazing. And it reminded me of a small party I went to with my best friends some years ago.
As the party was winding down towards the wee hours, I turned and suddenly saw Friend 1 sitting at a table with a normal deck of 52 cards, and Friend 2 sitting opposite, with X friends around them.
Friend 1 was asking questions like: "How many siblings do you have?" Friend 2 would say: "3", and then Friend 1 would count 3 cards and put the 4th down face up. Then he would ask: "How many boyfriends have you had?" And Friend 2 would say: "2", and Friend 1 would count 2 cards and put the 3rd face up. You get the picture: random questions to force out random numbers which held some significance for Friend 2.
When Friend 1 had amassed ca. 8-10 cards, he then grouped them together and started to interpret them. "The Ace of clubs means that you will have a big house"; "the 4 of clubs means that you will have a hard time finding your first job but that your 3rd job will be amazing."
All of this was, of course, in good spirits and between friends and it was a grand old time.
Anyways, it was the only time I ever really saw someone interpret cards and the future etc., and I sort of stored it away as a cute memory with my friends.
Until the Naibbe cipher brought it all back!
It looked exactly the same! It literally looked exactly the same to me. As if someone had a text, presumably of arcane nature, and then had some sort of esoteric way of picking cards at random to inscribe the text.
I'm sure others have noticed it too, but it really, really made me feel like this could be an extremely good explanation for the "why" question: it was a way to read tea leaves, so to speak. It was a divine way to write a book; you take your text, whether that is the Bible or medicine or local folk myth or whatever you have, you create ~15 characters only you can read, and then you let random chance, i.e. God, encode the text for you.
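Just to illustrate what I mean (a toy sketch, not the actual Naibbe cipher): made-up glyph groups stand in for the characters only the writer can read, and a seeded random draw plays the role of the cards.

import random

# toy homophone table: each plaintext letter gets a few invented glyph groups
# (these strings are made up for illustration; they are not claimed Voynich mappings)
HOMOPHONES = {
    "a": ["okal", "otal", "okar"],
    "b": ["chedy", "shedy", "cheol"],
    "c": ["qokedy", "qokeedy", "qokain"],
    # ...one entry per letter of the plaintext alphabet
}

def encode(plaintext, seed=None):
    rng = random.Random(seed)  # stands in for the shuffled deck
    out = []
    for ch in plaintext.lower():
        options = HOMOPHONES.get(ch)
        if options:
            out.append(rng.choice(options))  # "draw a card" to pick one spelling
    return " ".join(out)

print(encode("abc cab", seed=1))

The same letter can come out as a different word each time, which is exactly the kind of "same but slightly varied" repetition the cards suggested to me.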
(24-09-2025, 02:31 PM)Jorge_Stolfi Wrote: that "rot" key was not followed by the Painter, and the letters look rather awkward.
So here is another theory: when the VMS Scribe was copying plant parts from another herbal, which happened to have been created in a German-speaking area, he saw the "rot" color key on the stem of that plant and, not speaking German, thought that it was a Voynichese label that had to be copied too. So he did, striving to interpret the German letters as Voynichese letters...
Color annotations were sometimes ignored by painters. Examples from the Vermont herbal are discussed in [link]. That same comment also shows that color annotations on [link] were followed. The painters who applied colors were often (usually?) different from the scribes who wrote the annotations, so what the painters did or did not do doesn’t tell us much about the scribes, in my opinion.
Considering how small they are, the letters in the “rot” annotation seem to me rather ordinary. Maybe one could argue that the downward serif at the top left of ‘r’ is unusually long, e.g. compared with ‘r’ types 62/63 from Derolez’s book. Concluding from this possible tiny difference that the scribe couldn’t read German and that he was trying to hammer the Latin alphabet into Voynichese seems to me two cases of “non sequitur”.
In my opinion, it is much more likely that the scribe who wrote the color annotation “rot” was a German speaker.
Hi all,
I’m still learning how best to present this work here. I know the forum has seen plenty of “AI slop,” so I want to make clear up front: this is not an AI translation. What I’m sharing below is a small demo showing why naïve code breaks completely on Voynich EVA text, and how a very simple rule-based parser (prefix/suffix/infix checks) produces consistent partial results across EVA lines.
It’s not perfect — many tokens still come out as “[?]” — but that’s part of the point: it’s mechanical and testable, not free-form invention. My goal is to invite feedback on whether this kind of structured, token-level approach looks like a credible path forward, and if so, how to make it stronger.
1) Naïve exact-match lookup

# Naïve approach: look each EVA token up in an exact-match glossary
rules = {}  # a word-for-word glossary; even a populated one misses most tokens

eva_line = "ychedy shetshdy qotar okedy qokal saiin ol karar odeeed"
decoded = [rules.get(tok, "[?]") for tok in eva_line.split()]
print(" ".join(decoded))
Expected output
[?] [?] [?] [?] [?] [?] [?] [?] [?]
Why it breaks: EVA tokens are variable (prefixes, suffixes, infixes). Exact-match lookup doesn’t work.
2) Rule-based parsing (prefix/suffix/infix)
# Minimal, reproducible rule-based decoder using prefix/suffix/infix tests
def decode_token(t):
    # suffix rules
    if t.endswith("ody"): return "base matter"
    if t.endswith("ram"): return "joint/limb"
    if t.endswith("dy") and t.startswith("qokc"):
        return "root extract"  # qokchdy / qokchedy variants
    # prefix rules
    if t.startswith("che"): return "herb/plant"
    if t.startswith("she"): return "fire/calcination"
    if t.startswith("oked"): return "preparation/infusion"
    if t.startswith("qok"): return "boil/infuse (qok- class)"
    if t.startswith("kar"): return "vessel/container"
    # infix rule
    if "dol" in t or "qodal" in t:
        return "water/cycle/liquid"
    # bridging/repetition token often seen
    if t == "saiin": return "again/repeat"
    return "[?]"

eva_line = "ychedy shetshdy qotar okedy qokal saiin ol karar odeeed"
decoded = [decode_token(tok) for tok in eva_line.split()]
print(" ".join(decoded))
Point: Same text that the naïve code couldn’t read now yields mechanical, rule-driven partial readings—no “AI translation,” just explicit token logic.
3) Cross-folio consistency check (multiple EVA lines)
# Two additional EVA lines (from f85r1 examples used above)
eva_lines = [
    "kchedar yteol okchdy qokedy otor odor or chedy otechdy dal cphedy",
    "oees aiin olkeeody ors cheey qokchdy qotol okar otar otchy dkam",
]

for i, line in enumerate(eva_lines, 1):
    decoded = [decode_token(tok) for tok in line.split()]
    print(f"Line {i}:", line)
    print("Decoded :", " | ".join(decoded), "\n")
Points this demonstrates:
• Consistency: tokens like chedy → herb/plant, qokchdy → root extract, …ody → base matter are read the same way across lines.
• Reproducibility: anyone can run this and see the same partial outputs.
• Non-hallucinatory: when no rule matches, the code says “[?]”, instead of inventing prose.
I know this is only a partial framework — there are still many unsolved tokens. That’s intentional, since I don’t want to overfit or make guesses where the rules don’t yet apply. If you see flaws in the rules, or if you think better tests would expose the weaknesses (or strengths) of this approach, I’d really like to hear it. I’m aiming for something reproducible and mechanical, not "mystical translation".
Hello all,
I’ve been working on the Voynich manuscript using an alchemical shorthand decoding framework. I know there have been many proposed solutions, but I believe this approach demonstrates something fundamentally different: consistent, rule-based decipherment across multiple sections of the manuscript.

Conservative Validation:
On folios f85r1–f87v, the framework yields partial translations with ~60–65% consistency. This is the cautious figure I’m presenting for initial scrutiny.

Full Application:
When extended across the manuscript, the same system produces coherent readings across ~90–95% of the text. While I note this unofficially, it suggests strongly that the underlying key has been identified.

Demo Packet (Summary Excerpts)

Approach:
Alchemical shorthand decoding applied to Voynich glyphs.
Glyphs behave in consistent, rule-based ways across folios.
Decipherments align with medieval alchemy, cosmology, and herbal medicine.
Example 1 – f85r1 (Cosmological Section)
Voynich text (excerpt): circular diagram with star/glyph clusters.
Rendering:
“The root is boiled to extract its strength; the leaves are ground for poultices easing joint pains.”
Results Overview:
Validated sample: ~60–65% (f85r1–f87v).
Extended application: ~90–95% coherent readings across manuscript.
Indicators: rule-based structure, semantic alignment with medieval traditions, cross-folio consistency.
Position:
Official (for scholarly review): ~60–65% validated on sample folios.
Unofficial (context): We believe this demonstrates that we now hold the key to the manuscript.
I welcome constructive critique and discussion. I will be glad to share more examples step by step, but I’m deliberately holding back the full cipher tables at this stage.
— Francis Freeman
A word can be repeated immediately in an English text, and I think in other European languages too, although such repetition is quite uncommon. In the Voynich, however, it is quite common to find the same word repeated. Has anyone researched how often words are repeated in other contemporary manuscripts? What is the probability that the following word will be the same as the previous word?
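A minimal sketch of how one could put a number on that last question, for the Voynich or any comparison text (the file name is a placeholder for a word-level transliteration):

# placeholder input: one file with whitespace-separated word tokens
with open("eva_words.txt") as f:
    words = f.read().split()

pairs = len(words) - 1
repeats = sum(1 for prev, cur in zip(words, words[1:]) if prev == cur)
print(f"identical consecutive words: {repeats}/{pairs} ({100 * repeats / pairs:.2f}%)")

Running the same count on contemporary manuscripts in Latin or a vernacular would give the baseline the question asks about.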