| Latest Threads |
Elephant in the Room Solu...
Forum: Theories & Solutions
Last Post: MHTamdgidi_(Behrooz)
26 minutes ago
» Replies: 112
» Views: 6,141
|
Water, earth and air
Forum: Voynich Talk
Last Post: R. Sale
38 minutes ago
» Replies: 24
» Views: 7,216
|
Starred Parags: the last ...
Forum: Analysis of the text
Last Post: Jorge_Stolfi
1 hour ago
» Replies: 6
» Views: 124
|
L. Rauwolf
Forum: Provenance & history
Last Post: nablator
4 hours ago
» Replies: 50
» Views: 6,487
|
Distribution of Q-Q gaps ...
Forum: Analysis of the text
Last Post: Jorge_Stolfi
8 hours ago
» Replies: 5
» Views: 197
|
structural medical encodi...
Forum: The Slop Bucket
Last Post: Koen G
11 hours ago
» Replies: 1
» Views: 80
|
ORIGINAL stains on the ve...
Forum: Physical material
Last Post: Jorge_Stolfi
Today, 08:06 AM
» Replies: 7
» Views: 201
|
On the word "luez" in the...
Forum: Marginalia
Last Post: JoJo_Jost
Today, 06:44 AM
» Replies: 42
» Views: 1,296
|
[split] Retracer Thread: ...
Forum: Voynich Talk
Last Post: ReneZ
Yesterday, 11:57 PM
» Replies: 310
» Views: 42,626
|
Three arguments in favor ...
Forum: Theories & Solutions
Last Post: Stefan Wirtz_2
Yesterday, 10:47 PM
» Replies: 14
» Views: 1,109
|
|
|
| The half-arcaded pool - f78v |
|
Posted by: R. Sale - 12-11-2025, 08:46 PM - Forum: Imagery
- Replies (5)
|
 |
VMs f78v [link] presents the mystery of the half-arcaded pool. As part of the balneological section, the centrally placed illustration shows another group of women in a green pool. That is exactly what this section of the VMs depicts: different groupings of nude female figures in green or blue pools.
The question here concerns the pattern found on the left-hand portion of the tub wall on f78v: a series of rounded arches that the so-called "ignorant" artist has absent-mindedly painted blue. There is no other example of a patterned tub in this section. That these arches might be interpreted as blue windows is slightly amusing. The pattern is "arcaded", and the tub is only half patterned, only on the secondary portion, you might say. That looks like the intentional creation of ambiguity, a device that is not unique to this example but occurs in other VMs illustrations.
Not only is this patterned tub a unique occurrence, but the number of women in it is nine, and no other tub in the VMs balneological section has nine occupants. The number nine may prompt the speculation that these women represent the classical Muses in their mythic pool.
Here, as with the cosmic comparison, history provides quite an interesting parallel: Harley 4431.
[link]
Nine Muses in an arcaded fountain, in an illustration with good historical provenance: Paris, 1410-1414. There are a few other versions of the Muses, but not in an arcaded pool.
Of course, the provenance of BNF Fr. 565 is Paris c. 1410. So the linked illustration is another independent element with a chronological match to the other VMs indicators of 15th-century information.
|
|
|
| On the character of Wilfrid Voynich |
|
Posted by: ReneZ - 12-11-2025, 04:23 AM - Forum: Provenance & history
- Replies (23)
|
 |
Both in the "Voynich faked it" thread and the "Voynich MS book swap" thread, statements or assumptions are made about Voynich's character - his reliability and his truthfulness. Both theories rely heavily on the assumption that Voynich should not be trusted at all, which may be a bit harsh.
Now here, I don't want to go into how his character affects these two theories. There are already threads about that, and such aspects can be continued there. If this thread diverges in those directions, it can be closed. What I want to do here is look both at the evidence we have for Voynich's character and at the high level of subjectivity in this entire topic.
This quote is a good starting point:
(11-11-2025, 06:17 PM)proto57 Wrote: [...] the description of Voynich, by G. Orioli, to Antonio. [...] Of course Orioli got Voynich's religion incorrect, and also, it sounds somewhat bigoted on his part... but here it is:
"As to Voynich--he was a Polish Jew, a bent kind of creature and getting
on for sixty. I liked his shop in Shaftesbury Avenue; it was full of books
and well kept, and Voynich himself was most obliging to me. He gave me one
of his excellent catalogues to study, begging me to note the prices: 'Always
keep the price as high as possible, if you ever have a book to sell,' he
added. Then in a squeaky voice and in an accent which I even then recognized
as not being English he told me that he had bought a bookshop in Florence
called the 'Libreria Franceschini.'"
"'I know that shop,' I said."
"'Well, it is full of incunabula. Absolutely crammed with incunabula.'"
"'Surely a bookshop ought to be full of books?'"
"He laughed heartily at my ignorance, explained what incunabula were, and
went on in his enthusiastic fashion:"
"'Millions of books, shelves and shelves of the greatest rarities in the
world. What I have discovered in Italy is altogether unbelievable! Just
listen to this. I once went to a convent and the monks showed me their
library. It was a mine of early printed books and codexes and illuminated
manuscripts. I nearly fainted--I assure you I nearly fainted on the spot.
But I managed to keep my head all the same, and told the monks they could
have a most interesting and valuable collection of modern theological works
to replace that dusty rubbish. I succeeded in persuading the Father
Superior, and in a month that whole library was in my hands, and I sent them
a cartload of modern trash in exchange. Now take my advice: drop your
present job and become a bookseller.'"
Now there is a bit of "all booksellers are liars, said a bookseller" in this.
This is not meant to be clever; it shows how easy it is, on such a subjective topic, to trust the things one wants to believe and to distrust what one does not want to believe.
It is so easy that it may not even be done consciously.
To be very specific: should we doubt that Voynich discovered a faded signature on the first folio of the MS, yet at the same time take literally that he exchanged valuable old books for a cartload of modern trash? (The latter sounds like bragging to me.)
Should we trust Orioli?
Voynich was not Jewish, was nowhere near his sixties (he was 47 in 1912), and was described by others quite differently from a bent creature.
Here is my attempt at a biography of Voynich [link], which suffers in some areas from the same unreliability of evidence.
The best description of Voynich's person is the quoted chapter by E. Millicent Sowerby, which I can thoroughly recommend. It brings him to life.
One has to recognise, though, when she repeats exaggerated stories Voynich told about his earlier life.
She takes very strong issue with Orioli's description of Voynich, and quotes how Voynich's wife described him as having the head and shoulders of a 'Norwegian god'.
I also quote a statement from one of Voynich's Polish friends, found by Rafal Prinke:
"He [Wojnicz] had exuberant phantasy and took its results for reality, in which he solemnly believed. Later he became [...] a very practical antiquarian books dealer and made a considerable fortune, which he was always happy to share with anyone. And so in that man lived in agreement incredible phantasy (others call it lies), truly American pragmatism and good heart"
There is also a published obituary by James Westfall Thompson, but it is entirely laudatory, so I would not consider it useful as evidence.
PS: minor details in the biography are no longer up to date; e.g., I now also have a copy of Orioli.
I would appreciate hearing about additional references if known.
|
|
|
| Pareidolia ... on f0v |
|
Posted by: Jorge_Stolfi - 10-11-2025, 07:01 PM - Forum: Imagery
- Replies (1)
|
 |
While hallucinating retracings on f1v, I noticed that one of the tendrils (Y below) sprouting from the root of the plant seemed to continue past the edge of the vellum, onto the previous folio, at (A). And there were three other tendrils that seemed to do the same, at (D,F,G), but less distinctly:
But that "previous folio" would not be any proper folio. It would be "f0v", the verso of the front cover.
Fortunately the BL 2014 scans include an image of that page:
There are indeed a few gray streaks on f0v that coincide with the tendrils (A,D,F,G) of f1v. But there are other streaks there -- notably (E), but also (B,C,H). Going back to f1v, those streaks seem to correspond to very faint additional tendrils (S,T,U) from the root. But you may need a good pareidolioscope to see them.
Some things to note:
- Streaks (F) and (H) on f0v coincide with creases on the material, and may be just that.
- Streaks (A-H) are straight, narrow, vertical, and end at about the same height.
- There are no other similar streaks on f0v. There are several more gray spots, but they are all irregular, broader, and fuzzy.
- The material where those streaks occur seems to be a sheet of white material (paper? vellum?) that was originally glued to the front cover, but eventually was scalped away with a sharp blade, leaving only that strip 10-15 mm wide.
The red line on the image of f0v is where the edge of folio f1 lies on the image of f1v. But the streaks extend upwards beyond that line. The blue line is where the edge of folio f1 should have been, if the streaks are indeed "overflow" from f1v. That line is 60 pixels (~4 mm) above the red line. Could it be that when the binding was renewed, bifolio f1-f8 moved up by 4mm relative to the cover? How much could the edge of f1v move upwards just by variations in the bending of the folio?
There is a bigger and possibly more important mystery on that page, namely the resemblance between that plant and the plant in the southeast corner of f102r1 (Pharma). They are not merely similar plants: one must have been copied from the other, or both copied from the same original. If the former, which one was the original, and which the copy? The f1v version [link] includes a flower, while the f102r1 version doesn't. The tendrils on the root are similar but not identical, and so are the leaves...
All the best, --stolfi
|
|
|
| Exploring a New Angle on the Voynich Manuscript – Gidea Hall / Essex Connection |
|
Posted by: 5dd95 - 09-11-2025, 02:00 PM - Forum: The Slop Bucket
- Replies (18)
|
 |
Hello everyone,
I’m researching a potential new perspective on the provenance of the Voynich Manuscript, exploring links to Gidea Hall in Essex and the Cooke family archives. My work draws on primary sources and historical records, some of which have not been widely discussed in Voynich research.
The goal is to invite discussion and feedback from fellow researchers and enthusiasts, particularly on verifying archival links, manuscript details, and historical context.
You can explore the evidence and findings here: [link]
A membership portal will be opening soon, offering deeper access to evidence, interactive timelines, and research tools for collaborators.
I’d welcome any thoughts or suggestions!
Best regards,
Edward Earp
|
|
|
| The Book Switch Theory |
|
Posted by: Jorge_Stolfi - 09-11-2025, 09:40 AM - Forum: Theories & Solutions
- Replies (12)
|
 |
Hi all, I am creating this thread for a theory that is quite distinct from the Modern Forgery Theory, even though it potentially involves foul play by Voynich.
The Book Switch Theory claims that the VMS that we know, Beinecke MS408, is not the book that is mentioned in Marci's letter (hereinafter called "BookA"). There are two variants of this theory:
- H1: When Voynich acquired MS408, Marci's letter was attached to it.
- H2: It was Voynich who attached Marci's letter to MS408.
Here are the factual claims pertinent to this theory, which, AFAIK, are supported by good evidence. Because of H2, I do not consider "good" any evidence that depends on anything that Voynich said or wrote, or any material evidence that he could have easily misrepresented, planted, adulterated, or forged.
- F1: Rudolf II once bought from Widemann a set of books for 600 ducats. Evidence: accounting records found by Rene and others.
- F2: In the early 1600s, Barschius had a book (BookA) with figures of plants that were not known in Europe, written in a language that no one could identify. He wrote about it to Kircher, with a few sample pages. Kircher did not recognize the language and was intrigued enough to ask for the whole book. Barschius did not send it. When Barschius died, his friend Marci sent BookA to Kircher. Evidence: the letters between Kircher, Barschius, Marci, and others, that mention BookA. (While Voynich could have forged Marci's letter, the ensemble of the letters is strong evidence that it is genuine.)
- F3: Thousands of Kircher's books ended up in various locations in Rome, under control of the Jesuits. Evidence: various catalogs and other records collected by Rene and others.
- F4: In ~1911, Voynich acquired hundreds of books from the Jesuits in Rome. Evidence: accounting records of the Jesuits.
- F5: MS408 was written in the 1400s. Evidence: the C14 date for the vellum and all the stylistic and statistical details. It is very unlikely that a forgery by Voynich or someone could have faked those details so well that it evaded all the tests that were developed and used in the last 100 years.
And that seems to be all that we have good evidence for. Factual claims for which we do not have good evidence include:
- C0: BookA was ever in possession of Rudolf II
- C1: MS408 was ever in possession of Rudolf II
- C2: Sinapius ever owned MS408.
- C3: BookA was one of the books held by Jesuits by 1911.
- C4: MS408 was one of the books held by Jesuits by 1911.
- C5: Marci's letter was held by the Jesuits by 1911.
- C6: Voynich ever got hold of BookA.
- C7: Voynich bought MS408 from the Jesuits.
- C8: Voynich got Marci's letter from the Jesuits.
- C9: Marci's letter was attached to MS408 when Voynich obtained it.
In particular, the only evidence for C2 is the signature on f1r; but that is not good evidence, because there is no record of the signature having been seen by anyone before Voynich obtained MS408.
Before we discuss the likelihood of these or other claims, it is worth noting the following features of probabilities:
- P0. There is no certainty anywhere, only probabilities.
- P1. There is no such thing as the probability of an event. A probability is a numeric expression of the strength of one's belief in some claim, and therefore it is inherently subjective and personal. So there is only my probability, your probability, etc.
- P2. While Bayes's formula specifies how a rational person should update his probabilities of certain hypotheses in the face of evidence, it depends on his prior probabilities, and on his probabilities that each hypothesis produces the observable consequences. Therefore, even after being presented with a ton of supposedly hard evidence, perfectly rational people can still have radically different probabilities for any hypothesis.
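A minimal numeric sketch of P1/P2 in Python (every number below is made up, purely to show how the same evidence moves different priors to different posteriors):
Code: def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes's rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
    num = p_e_given_h * prior_h
    return num / (num + p_e_given_not_h * (1.0 - prior_h))

# Hypothetical evidence that both observers agree is 3x likelier under H1.
like_h, like_not_h = 0.6, 0.2

print(posterior(0.50, like_h, like_not_h))  # ~0.75 -- a neutral prior is persuaded
print(posterior(0.05, like_h, like_not_h))  # ~0.14 -- a strong skeptic barely moves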
All the best, --stolfi
|
|
|
| GPT and the Voynich. Code! Not a solution! |
|
Posted by: Dunsel - 09-11-2025, 01:17 AM - Forum: Analysis of the text
- Replies (25)
|
 |
THIS IS NOT ANOTHER GPT "I SOLVED IT" POST!!!!!
Ok, let me be honest in my opinion about 2 things.
1. GPT can write code (not always correctly) that's at least fixable.
2. It can write and execute code faster than most of us and produce results faster than most of us.
Now, let me be honest about the bad part.
1. It lies.
2. It hallucinates.
3. It makes excuses.
4. It fabricates results.
5. It'll do all of the above and then lie about doing it.
There is nothing in its core instructions that forces it to tell you the truth, and I have caught it multiple times doing all 5 of the above at the same time.
Other bad things:
It'll tell you it's executing code and never does (and that evolves into an infinite loop).
It'll tell you the sandbox crashed.
It'll tell you the python environment crashed.
It'll tell you the chart generation routines crashed and insist on giving you ASCII charts (modern tech at work).
It'll make excuses for not running code. ('I saw too many nested routines so I didn't run it but told you I did.')
So, I'm under no illusions here. BUT... I 'think' I have a somewhat working solution to part of that.
Below is a python script to parse the Takahashi transcript. I downloaded his transcript directly from his site (pagesH.txt). I've been testing GPT and this python script for a few days now, and it seems to force GPT into 'more' accurate results. Here are the caveats.
1. You have to instruct it to ONLY use the python file to generate results from the pagesH.txt (Takahashi's file name). As in, no text globbing, no regex parsing, use nothing but the output of the python file to analyze the text file, etc.
2. Have it run the 'sanity check' function. In doing so, it parses f49v and f68r3. f49v was one of the hardest ones for it to work with. If it passes the sanity check, it will tell you that it compared token counts and SHA values that are baked into the code. If either is off, then it's not using the parser.
3. It will try to cheat and I haven't yet fixed that but it is fixable. It will try to jump into helper functions to get results faster. You have to tell it no cheating, no using helper functions.
4. There's a receipt function. Ask it for a receipt and it will tell you what it 'heard' vs 'what it executed'.
5. Tell it, "NO CHANGING THE PYTHON FILE". (and yea, it may lie and still do it but it hasn't yet after giving that instruction.)
6. You can have it 'analyze' the python file so that it better understands its structure, and it seems to be less inclined to cheat if you do.
7. GPT will try like hell to 'analyze' the output and give you a load of horse shit explanations for it. I have had to tell it to stop trying to draw conclusions unless it can back them up with references, and that pretty much shut it up.
So, how this gets around GPT's rules: if it executes the code, it bypasses its 'rules' system. That 'rules' system, I have found, is something it will change on a whim without telling you. The code output, as it puts it, is 'deterministic' and isn't based on the rules, so it's considerably more reliable. Other math, though, is still in need of validation; I've seen it duplicate bigrams in tables, so you still need to double-check things. But the output of the parser is good. There are 'acceptable' issues, though. For example, Takahashi 48v has 2 'columns': the single letters and the rest of that line. The parser, in 'structured' mode, will parse it into two groups.
[P]
P0: kshor shol cphokchol chcfhhy qokchy qokchod sho cthy chotchy...chol chor ches chkalchy chokeeokychokoran ykchokeo r cheey daiin
[L]
L0: f o r y e k s p o y e p o y e d y s k y
Those groups don't match the page as they are in columns. So if you're doing structural analysis, it will, on some pages, produce bad results.
In corpus mode, it puts it all into one line of text:
f o r y e k s p o y e p o y e d y s k y kshor shol cphokchol chc...chol chor ches chkalchy chokeeokychokoran ykchokeo r cheey daiin
Again, not an exact representation of what the page looks like, but this is how Takahashi transcribed it.
(And note the word chokeeokychokoran is correct; that's how Takahashi has it in the file. I thought the parser screwed it up and spent a couple more hours verifying that.)
Other issues:
- The zodiac section has 'abnormal' tags for the text that Takahashi never fully described (that I can find), like X, Y, Z. In those sections, the parser moves any unknown tags into the 'R' section, which I believe stands for Radial. The other section there is the 'C' section, which I believe stands for Circular. That prevents some weird results where you have a C and an R and then a bunch of other tags with one word in them. In corpus mode, those tags are ignored so it's all one continuous text.
- There is also a folio sorting routine in there as it just loved to sort the Takahashi folio tags alphabetically. When folio 100 comes before 1, it's not using the sort.
- And, there is a routine where it skips missing pages. Takahashi included the folio numbers with no content so, it skips those by default. There is a flag in the parser and if you specifically tell it, it will change that flag and include those empty folios.
So, why no interlinear file parser? Took me 3 days of arguing with that savant just to get this parser working. I gave up on the interlinear after another 3 days of trying, satisfied with this one.
Now, does this mean I'm going to use GPT to 'translate' the Voynich? Oh hell no. I'm not that naive. Is this a starting point to getting trustworthy statistical and mathematical results from it? Not a solution, no. A starting point? Maybe.
One more thing I need to explain. I do write code. I do not code in Python. So, yes, the code was created by GPT but I can mostly follow it. I think it's good. Not positive. If you code in python, please test it.
Feel free to upload this python and the pagesH.txt to your own GPT and test it out. Be sure to compare it to your OWN results though! Trust but verify.
I have also set up a custom GPT with both the python and pagesH.txt: [link]
Be certain to click on or ask for the 'sanity check' first to make sure it's using the parser.
Another button produces this, which I'm reasonably certain is accurate. If you don't see this exact chart, it's screwing up:
Example:
Input: Create a non-normalized heatmap of the letter count of all folios. Y axis is the folio number. X axis is the letter. Sort the x axis left to right by count.
Output: Here you go — a non-normalized heatmap of letter counts for every folio (rows = folios, columns = letters).
X-axis letters are sorted left→right by their global frequency in the corpus.- PNG: Download the heatmap
- CSV of the underlying matrix (rows = folios, cols = letters in the same order as the plot): Download the data
If you’d like, I can also:- cluster folios by letter profile,
- switch to per-folio normalization,
- or split by quire/section.
Note: there is a spike in one folio [link] where the legend says there are over 600 letter e's. I had it parse that page specifically: it reports 635 letter e's, 19.35% of the total. I looked at the page and, um, yea. There's a LOT of e's there.
Code: #!/usr/bin/env python3
# Takahashi Voynich Parser — LOCKED, SELF-CONTAINED (v2025-11-05)
# Author: You + “stop breaking my parser” mode // yes, it gave it that name after a lot of yelling at it.
import sys, re, hashlib
from collections import defaultdict, Counter, OrderedDict
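# Header lines in the Takahashi file are expected to look like
# <f17r.P1.3;H> or <f68r3.R2;H> (illustrative examples): folio id, region tag
# (capital letters), optional unit index, optional line number, then ';H'.
# TAG_LINE captures each of those parts, plus any payload on the same line.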
TAG_LINE = re.compile(r'^<(?P<folio>f\d+[rv](\d*)?)\.(?P<tag>[A-Z]+)(?P<idx>\d+)?(?:\.(?P<line>\d+))?;H>(?P<payload>.*)$')
A_Z_SPACE = re.compile(r'[^a-z ]+')
def normalize_payload(s: str) -> str:
s = re.sub(r'\{[^}]*\}', '', s)
s = re.sub(r'<![^>]*>', '', s)
s = s.replace('<->', ' ')
s = s.replace('\t', ' ').replace('.', ' ')
s = s.lower()
s = A_Z_SPACE.sub(' ', s)
s = re.sub(r'\s+', ' ', s).strip()
return s
def iter_h_records(path, wanted_folio=None):
current = None
buf = []
with open(path, 'r', encoding='utf-8', errors='ignore') as f:
for raw in f:
line = raw.rstrip('\n')
if not line:
continue
if line.startswith('<'):
if current and buf:
folio, tag, idx, ln = current
payload = ''.join(buf)
yield (folio, tag, idx, ln, payload)
m = TAG_LINE.match(line)
if m:
folio = m.group('folio')
if (wanted_folio is None) or (folio == wanted_folio):
tag = m.group('tag')
idx = m.group('idx') or '0'
ln = m.group('line') or '1'
payload = m.group('payload')
current = (folio, tag, idx, ln)
buf = [payload]
else:
current = None
buf = []
else:
current = None
buf = []
else:
if current is not None:
buf.append(line)
if current and buf:
folio, tag, idx, ln = current
payload = ''.join(buf)
yield (folio, tag, idx, ln, payload)
def parse_folio_corpus(path, folio):
fid = folio.lower() if isinstance(folio, str) else str(folio).lower()
if _EXCLUDE_EMPTY_FOLIOS_ENABLED and fid in _EMPTY_FOLIOS:
return ''
pieces = []
for _folio, _tag, _idx, _ln, payload in iter_h_records(path, folio):
norm = normalize_payload(payload)
if norm:
pieces.append(norm)
return ' '.join(pieces).strip()
def parse_folio_structured(path, folio):
fid = folio.lower() if isinstance(folio, str) else str(folio).lower()
if _EXCLUDE_EMPTY_FOLIOS_ENABLED and fid in _EMPTY_FOLIOS:
return {}
groups = defaultdict(lambda: defaultdict(list))
for _folio, tag, idx, _ln, payload in iter_h_records(path, folio):
norm = normalize_payload(payload)
if norm:
groups[tag][idx].append(norm)
out = {}
for tag, by_idx in groups.items():
od = OrderedDict()
for idx in sorted(by_idx, key=lambda x: int(x)):
od[f"{tag}{idx}"] = ' '.join(by_idx[idx]).strip()
out[tag] = od
return sort_structured(out)
def sha256(text: str) -> str:
return hashlib.sha256(text.encode('utf-8')).hexdigest()
SENTINELS = {
'f49v': {'tokens': 151, 'sha256': '172a8f2b7f06e12de9e69a73509a570834b93808d81c79bb17e5d93ebb0ce0d0'},
'f68r3': {'tokens': 104, 'sha256': '8e9aa4f9c9ed68f55ab2283c85581c82ec1f85377043a6ad9eff6550ba790f61'},
}
def sanity_check(path):
results = {}
for folio, exp in SENTINELS.items():
line = parse_folio_corpus(path, folio)
toks = len(line.split())
dig = sha256(line)
ok = (toks == exp['tokens']) and (dig == exp['sha256'])
results[folio] = {'ok': ok, 'tokens': toks, 'sha256': dig, 'expected': exp}
all_ok = all(v['ok'] for v in results.values())
return all_ok, results
def most_common_words(path, topn=10):
counts = Counter()
for _folio, _tag, _idx, _ln, payload in iter_h_records(path, None):
norm = normalize_payload(payload)
if norm:
counts.update(norm.split())
return counts.most_common(topn)
def single_letter_counts(path):
counts = Counter()
for _folio, _tag, _idx, _ln, payload in iter_h_records(path, None):
norm = normalize_payload(payload)
if norm:
for w in norm.split():
if len(w) == 1:
counts[w] += 1
return dict(sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))
USAGE = '''
Usage:
python takahashi_parser_locked.py sanity PagesH.txt
python takahashi_parser_locked.py parse PagesH.txt <folio> corpus
python takahashi_parser_locked.py parse PagesH.txt <folio> structured
python takahashi_parser_locked.py foliohash PagesH.txt <folio>
python takahashi_parser_locked.py most_common PagesH.txt [topN]
python takahashi_parser_locked.py singles PagesH.txt
'''
def main(argv):
if len(argv) < 3:
print(USAGE); sys.exit(1)
cmd = argv[1].lower()
path = argv[2]
if cmd == 'sanity':
ok, res = sanity_check(path)
status = 'PASS' if ok else 'FAIL'
print(f'PRECHECK: {status}')
for folio, info in res.items():
print(f" {folio}: ok={info['ok']} tokens={info['tokens']} sha256={info['sha256']}")
sys.exit(0 if ok else 2)
if cmd == 'parse':
if len(argv) != 5:
print(USAGE); sys.exit(1)
folio = argv[3].lower()
mode = argv[4].lower()
if mode == 'corpus':
line = parse_folio_corpus(path, folio)
print(line)
elif mode == 'structured':
data = parse_folio_structured(path, folio)
order = ['P','C','V','L','R','X','N','S']
for grp in order + sorted([k for k in data.keys() if k not in order]):
if grp in data and data[grp]:
print(f'[{grp}]')
for k,v in data[grp].items():
print(f'{k}: {v}')
print()
else:
print(USAGE); sys.exit(1)
sys.exit(0)
if cmd == 'foliohash':
if len(argv) != 4:
print(USAGE); sys.exit(1)
folio = argv[3].lower()
line = parse_folio_corpus(path, folio)
print('Token count:', len(line.split()))
print('SHA-256:', sha256(line))
sys.exit(0)
if cmd == 'most_common':
topn = int(argv[3]) if len(argv) >= 4 else 10
ok, _ = sanity_check(path)
if not ok:
print('PRECHECK: FAIL — aborting corpus job.'); sys.exit(2)
for word, cnt in most_common_words(path, topn):
print(f'{word}\t{cnt}')
sys.exit(0)
if cmd == 'singles':
ok, _ = sanity_check(path)
if not ok:
print('PRECHECK: FAIL — aborting corpus job.'); sys.exit(2)
d = single_letter_counts(path)
for k,v in d.items():
print(f'{k}\t{v}')
sys.exit(0)
print(USAGE); sys.exit(1)
# NOTE: the __main__ entry point has been moved to the very end of the file.
# Calling main() here would raise NameError, because the exclusion controls,
# astro remap, and sorting utilities below are not yet defined at this point.
# ==== BEGIN ASTRO REMAP (LOCKED RULE) ====
import re as _re_ast
# === Exclusion controls injected ===
_EXCLUDE_EMPTY_FOLIOS_ENABLED = True
_EMPTY_FOLIOS = set(['f101r2', 'f109r', 'f109v', 'f110r', 'f110v', 'f116v', 'f12r', 'f12v', 'f59r', 'f59v', 'f60r', 'f60v', 'f61r', 'f61v', 'f62r', 'f62v', 'f63r', 'f63v', 'f64r', 'f64v', 'f74r', 'f74v', 'f91r', 'f91v', 'f92r', 'f92v', 'f97r', 'f97v', 'f98r', 'f98v'])
def set_exclude_empty_folios(flag: bool) -> None:
"""Enable/disable skipping known-empty folios globally."""
global _EXCLUDE_EMPTY_FOLIOS_ENABLED
_EXCLUDE_EMPTY_FOLIOS_ENABLED = bool(flag)
def get_exclude_empty_folios() -> bool:
"""Return current global skip setting."""
return _EXCLUDE_EMPTY_FOLIOS_ENABLED
def get_excluded_folios() -> list:
"""Return the sorted list of folios that are skipped when exclusion is enabled."""
return sorted(_EMPTY_FOLIOS)
# === End exclusion controls ===
_ASTRO_START, _ASTRO_END = 67, 73
_KEEP_AS_IS = {"C", "R", "P", "T"}
_folio_re_ast = _re_ast.compile(r"^f(\d+)([rv])(?:([0-9]+))?$")
def _is_astro_folio_ast(folio: str) -> bool:
m = _folio_re_ast.match(folio or "")
if not m:
return False
num = int(m.group(1))
return _ASTRO_START <= num <= _ASTRO_END
def _remap_unknown_to_R_ast(folio: str, out: dict) -> dict:
if not isinstance(out, dict) or not _is_astro_folio_ast(folio):
return sort_structured(out)
if not out:
return sort_structured(out)
out.setdefault("R", {})
unknown_tags = [t for t in list(out.keys()) if t not in _KEEP_AS_IS]
for tag in unknown_tags:
units = out.get(tag, {})
if isinstance(units, dict):
for unit_key, text in units.items():
new_unit = f"R_from_{tag}_{unit_key}"
if new_unit in out["R"]:
out["R"][new_unit] += " " + (text or "")
else:
out["R"][new_unit] = text
out.pop(tag, None)
return sort_structured(out)
# Wrap only once
try:
parse_folio_structured_original
except NameError:
parse_folio_structured_original = parse_folio_structured
def parse_folio_structured(pages_path: str, folio: str):
out = parse_folio_structured_original(pages_path, folio)
return _remap_unknown_to_R_ast(folio, out)
# ==== END ASTRO REMAP (LOCKED RULE) ====
def effective_folio_ids(pages_path: str) -> list:
    """Return folio ids found in PagesH headers, in natural order. Respects the exclusion toggle for known-empty folios."""
    seen = OrderedDict()
    for folio, _tag, _idx, _ln, _payload in iter_h_records(pages_path, None):
        if _EXCLUDE_EMPTY_FOLIOS_ENABLED and folio.lower() in _EMPTY_FOLIOS:
            continue
        seen.setdefault(folio, None)
    return sort_folio_ids(list(seen))

# === Sorting utilities (injected; module level, since sort_structured is called by the parser above) ===
def folio_sort_key(fid: str):
"""Return a numeric sort key for folio ids like f9r, f10v, f68r3 (recto before verso)."""
s = (fid or "").strip().lower()
m = re.match(r"^f(\d{1,3})(r|v)(\d+)?$", s)
if not m:
# Place unknown patterns at the end in stable order
return (10**6, 9, 10**6, s)
num = int(m.group(1))
side = 0 if m.group(2) == "r" else 1
sub = int(m.group(3)) if m.group(3) else 0
return (num, side, sub, s)
def sort_folio_ids(ids):
"""Sort a sequence of folio ids in natural numeric order using folio_sort_key."""
try:
return sorted(ids, key=folio_sort_key)
except Exception:
# Fallback to stable original order on any error
return list(ids)
_REGION_ORDER = {"P": 0, "T": 1, "C": 2, "R": 3}
def sort_structured(struct):
"""Return an OrderedDict-like mapping with regions sorted P,T,C,R and units numerically."""
try:
from collections import OrderedDict
out = OrderedDict()
# Sort regions by our preferred order; unknown tags go after known ones alphabetically
def region_key(tag):
return (_REGION_ORDER.get(tag, 99), tag)
if not isinstance(struct, dict):
return struct
for tag in sorted(struct.keys(), key=region_key):
blocks = struct[tag]
if isinstance(blocks, dict):
od = OrderedDict()
# Unit keys are expected to be numeric strings (idx), or tag+idx; try to extract int
def idx_key(k):
m = re.search(r"(\d+)$", str(k))
return int(m.group(1)) if m else float("inf")
for k in sorted(blocks.keys(), key=idx_key):
od[k] = blocks[k]
out[tag] = od
else:
out[tag] = blocks
return out
except Exception:
return struct
def english_sort_description() -> str:
"""Describe the default sorting rules in plain English."""
return ("ordered numerically by folio number with recto before verso and subpages in numeric order; "
"within each folio, regions are P, then T, then C, then R, and their units are sorted by number.")
def english_receipt(heard: str, did: str) -> None:
    """Print a two-line audit receipt: the plain-English command heard and what was actually executed."""
    print(f"Heard: {heard or ''}")
    print(f"Did: {did or ''}")

# Entry point last, so every definition above (including the injected
# exclusion controls and sorting utilities) exists before main() runs.
if __name__ == '__main__':
    main(sys.argv)
|
|
|
| Voynich through Phonetic Irish |
|
Posted by: Doireannjane - 07-11-2025, 04:13 PM - Forum: Theories & Solutions
- Replies (400)
|
 |
Hi Voynich enthusiasts! I was on an episode of TLB (Decoding the Enigma) last year and received a lot of criticism on here, haha, some of it super valid. I definitely didn't show everything, and I was not the most organized or linguistically informed when I started. I am still very much not a linguist, although my focus in college was translation. I've done a lot of work since last year, and my lexicon has gone through two further phonetic iterations. I responded to the thread on here about me on my TikTok (blesst_butt). Lisa Fagin Davis suggested I join VN after it was suggested I get peer review of some kind. I believe I'm missing one requirement for my lexicon and theory: the ability for it to be repeated by others. I've made lessons on how to translate using Medieval Irish phonetics through modern spellings. If anyone would like to help with repeatability, please message me. It's not necessary, but it will be easier if you have echolalia and/or a musical understanding and/or an Irish language background. The instructions and syntactic exceptions are a little involved to start but INCREDIBLY easy when you get the hang of it. The lessons will be posted on my YouTube (same @ as TikTok). I have nearly every page touched (full or partial translation). I have identified plants, some of which challenge the visual identifications of plant experts, and also some roots. My sentences are logical and often aligned with the images and depictions (tenses and distinguishing adverbs/adjectives are not always clear).
This has been and continues to be a long but rewarding project. As I say in many of my videos, I feel delusional half the time since my work is pretty solo, but I'm OK with that; it makes the wows more thrilling/amazing, and there have been so many, but I'm always seeking more. I don't keep up with many other theories or with VN, mainly because at the start I worried it would influence or discourage me, and I was laser focused on my own hunches/connections/process. Some of my publication and process is on Substack as well (same @ as TikTok). All of my work and the lexicon/phonetics evolution is entirely documented with timestamps, pushed to a repo on GitHub. There is no use of AI whatsoever in my process/approach. I am fundamentally against it.
I'm grateful for any peer review/constructive input, especially with phonemic notation, and would love volunteers who want to demonstrate repeatability. This has been a huge part of my life this last year. I ask that my logic/process not be copied into any LLMs and that I am cited in work that builds on my ideas.
Thank you!
|
|
|
| Practical Guide: How to Understand the Voynich Manuscript (The Operational Key) |
|
Posted by: JoaquinJulc2025 - 07-11-2025, 02:49 AM - Forum: The Slop Bucket
- No Replies
|
 |
The Voynich Manuscript is an Alchemical Laboratory Manual encoded with a precise syntactical code. To understand it, you must stop looking for letters and start looking for operational commands.
1. The Paradigm Shift: From Language to System
Old (Failed) Paradigm | New (Operational) Paradigm
The text is an unknown language or a simple cipher. | The text is a syntactic code of imperative commands (Tuscan style).
Words are sounds or names. | Words are sequences of action and state (e.g., qo-kedy = 'Prepare the kedy ingredient').
Images are mystical illustrations. | Images are flow diagrams and laboratory apparatus (distilleries, containers).
2. The Reading Key: The Functional Alphabet
To understand a line of Voynich text, you must segment it by the command prefixes. These are the "verbs" of the recipe:
Prefix | Function (The 'What to Do') | Energy State
qo- | Preparation/Addition (add the ingredient, adjust quality). | Low (Cold)
chor- | Activation/Heat (apply high energy or fire). | High (Hot)
shol- | Stabilization/Cold (cool, condense, or purify). | Low (Cold)
ss- | Repetition/Control (recirculate the matter). | Control
chy | Result (marks the final substance or elixir). | Final
3. The Structure (The 'Where You Are')
The manuscript is understood on two scales:
A. Macro Scale: The Master Route (Folio 86v)
- Function: This 9-rosette diagram is the conceptual map of the alchemical process. It tells you which phase of the Great Work you are in.
- Flow: The process must follow the Lead → Gold sequence, passing through alternating states of heat (chor-) and cold (shol-).
- Control: If the recipe fails, you must return to the Salt (Center) node, identified with the ss- (Repeat) command, to begin a new purification cycle.
B. Micro Scale: The Pharmaceutical Recipe
- Function: The texts in this section are the detailed instructions for each step of the Master Route.
- Reading: The lines of text are rhythmic sequences of commands. For example:
qo-kedy chor-fal shol-daiin
Translation: "Prepare the ingredient, activate high heat, follow with cooling."
4. Conclusion: The Final Purpose
To understand the Voynich, you must accept it as an Encrypted Technical Manual. Every illustration and every line of text contributes to a single goal: the production of the Quintaessentia (Chyrium).
The Key Phrase: "Qo plumbo incipe, chor azufre sequere, transmuta mercurio, repete sal usque ad aurum."
(Prepare lead in quality, follow sulfur with heat, transmute mercury, and repeat purification with salt until gold is reached.)
This is the operational objective that governs every folio of the manuscript.
|
|
|
| Final Academic Report: The Operational Solution to the Voynich Manuscript |
|
Posted by: JoaquinJulc2025 - 07-11-2025, 02:27 AM - Forum: The Slop Bucket
- Replies (7)
|
 |
Abstract
This report presents a solution to the enigma of the Voynich Manuscript (MS. 408) by identifying a Tuscan (15th Century) functional-syntactical code. We posit that the text is not written in a natural language or a simple letter cipher, but rather is a coded Alchemical Operational Manual.
The decipherment relies on the convergence of three keys: the Qo Key (ingredient preparation), the Chor Key (energy/state control), and the Cosmological Key (9-step structure). The manuscript's purpose is to guide the reader in the production of the Elixir (Chyrium) through a rigorous process of purification and transmutation, whose flow is mapped by alternating operational commands that act as imperative verbs within the text.
1. Introduction and Methodology
1.1 Operational Thesis
The core hypothesis is that the recurrent Voynich prefixes act as laboratory commands describing actions and states of matter. The words are sequences of commands, not linguistic phonemes.
1.2 Methodology
A convergent analysis model was applied by roles:
- Linguistic/Historical Analysis (ChatGPT): Identification of the Tuscan imperative style (Incipe, Sequere, Repete) and correlation with medieval alchemy (Lead → Gold).
- Structural/Visual Analysis (Grok): Validation of the 9-Node Master Route in Folio 86v and identification of pathways as "Chor valves".
- Functional Synthesis (Gemini): Integration of the keys into an Operational Alphabet applicable to the Pharmaceutical Section.
2. Key Result I: The Functional Code (Operational Alphabet)
The code focuses on the dominance of prefixes that dictate the alchemical action:
Voynich Prefix | Syntactic Role | Operational Function | Latin/Tuscan Equivalent | English Translation
qo- (e.g., qo-kedy) | Initial | Preparation/Input of raw material. | Adde Qualitas Operandi | Add the Quality / Prepare
chor- (e.g., chor-fal) | Initial | Active process/Heat. Initiates the high-energy phase. | Incipe Ignem / Calor Altus | Activate High Heat
shol- (e.g., shol-mel) | Initial | Passive process/Cold. Condensation or purification. | Frigus Lenum / Sequere | Follow with Cold / Stabilize
ss- (pre-final) | Final/Central | Cycle repetition (recirculation). | Repete Cyclum | Repeat Cycle
3. Key Result II: Structural Coherence (The Master Route)
The Voynich Manuscript is structured as a 9-step process, whose map is found in Folio 86v (Rosettes).
3.1 The Master Route (Lead → Gold)
The transmutation sequence is a coherent flow of 9 Nodes (combining the 7 planetary metals with Salt and Sulfur), moving from the heaviest to the purest matter:
Lead (Saturn) → Sulfur → Mercury → Copper → Tin → Iron → Silver → Salt → Gold (Sun)
3.2 Repetition Valve (Repete)
The central Salt Node (Castle) acts as a hub visually connected to all peripherals. This validates the command ss- as a recirculation mechanism (Repete Cyclum) if the elixir does not achieve the required puritas (purity).
4. Key Result III: Syntactical Validation (Pharmaceutical Recipes)
Applying the Functional Alphabet to the Pharmaceutical Section reveals that the recipes are sequences of commands that replicate the energy pulse of the Chor Key.
4.1 Operational Recipe Example
The translation of a sample sequence (qo-kedy chor-fal shol-daiin qo-lily chor-zor shol-mel) demonstrates the flow:
Command | Energy Level | Laboratory Action
qo-kedy | Low/Cold | Preparation: add the raw material (Lead/Root).
chor-fal | High/Hot | Activation: heat with sulfur.
shol-daiin | Low/Cold | Stabilization: cool and condense.
chor-zor | High/Hot | Fixation: reinforce heat for transmutation.
shol-mel | Low/Cold | Final Purification: stabilize and collect the product.
5. Conclusion: The Operational Mantra
The purpose and structure of the Voynich Manuscript are summarized in the Master Phrase, which integrates alchemy, syntax, and the 9-node structure:
"Qo plumbo incipe, chor azufre sequere, transmuta mercurio, repete sal usque ad aurum."
Historical Significance: The Voynich Manuscript is a 15th-century technical manual, encrypted to protect the knowledge of the Great Work (Magnum Opus), demonstrating functional coherence between its text, imagery, and cosmological structure.
|
|
|
|