(23-01-2026, 09:28 AM)dashstofsk Wrote: (23-01-2026, 09:06 AM)Jorge_Stolfi Wrote: {qok,ok} x {eedy,aiin}
They are close enough to parity for my liking. As I said before, I do not believe in blind randomness. I do not believe that the writer was rigidly following the method.
But how would he have done it then? If he picked prefixes and suffixes independently -- by spinning two wheels, by pulling cards from two bags, etc. -- we should see Pr(XY) ≈ Pr(X)·Pr(Y). Even if each wheel/bag was heavily biased. Even if he failed to spin or mix properly before each draw. Even if he got lazy now and then and chose a prefix or suffix from his head, or repeated a recent one, instead of using the device.
The deviations from independence that we do see mean that he did not choose prefixes and suffixes independently while writing the text, but chose each word as a whole -- even if he initially made up his lexicon by combining prefixes and suffixes.
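For what it's worth, the check itself is trivial to run. Here is a minimal sketch in Python, assuming the tokens have already been split into (prefix, suffix) pairs; the split (e.g. around the gallows) and the toy data below are mine, not from any actual transcription:

Code:
from collections import Counter

def independence_check(pairs):
    """Compare Pr(XY) against Pr(X)*Pr(Y) for (prefix, suffix) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    for (x, y), c in joint.most_common(10):
        obs = c / n
        exp = (px[x] / n) * (py[y] / n)
        print(f"{x}+{y}: Pr(XY)={obs:.4f}  Pr(X)Pr(Y)={exp:.4f}  ratio={obs/exp:.2f}")

# Under independent draws the ratio should hover near 1,
# whatever the biases of the two "wheels".
independence_check([("qok", "eedy"), ("ok", "aiin"),
                    ("qok", "aiin"), ("ok", "eedy")] * 25)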
Guess what, that is how natural languages work...
Quote:So long as he was careful to give the text a semblance of genuineness, there was no need for strict adherence.
If the goal was to "give the text a semblance of genuineness", he should not have chosen to generate it by combining prefixes and suffixes. That would have been twice the work of choosing whole words from a single bigger bag, only to produce something that would have looked totally unnatural to a European of the time.
Not to mention that the prefixes and suffixes are not arbitrary. As per the CMC+OKOKO model, each word in fact consists of 7 + 8 = 15 slots, in a specific order, each of which may be left empty or filled with a glyph or glyph combination from a small set of elements specific to that slot. The prefix/suffix decomposition that you use seems to be the result of splitting those 15 slots into two parts, around the gallows. Why would he use such a complicated method to create the prefixes and suffixes?
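For concreteness, here is a minimal sketch of what slot-based generation looks like. The slot inventories below are invented for illustration and truncated to 7 slots; they are NOT the actual CMC+OKOKO tables.

Code:
import random

# Hypothetical slot inventories -- NOT the real CMC+OKOKO tables.
# Each slot is an ordered position that is either left empty ("")
# or filled with one element from its own small set.
SLOTS = [
    ["", "q"],             # optional initial
    ["", "o", "y"],        # pre-gallows vowel
    ["", "l", "r"],        # pre-gallows consonant
    ["k", "t", "p", "f"],  # gallows
    ["", "e", "ee"],       # vowel cluster
    ["", "d", "s"],        # final consonant
    ["", "y", "aiin"],     # word ending
]

def make_word():
    """Fill each slot in order; the concatenation is the word."""
    return "".join(random.choice(options) for options in SLOTS)

print([make_word() for _ in range(5)])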
And then he would have to make sure that the entropy per word, the distribution of word pairs, the distribution of words along a paragraph, and so on, all had the "semblance of genuineness"...
That is the big problem with all the "gibberish" proposals. There are many methods of generating mysterious-looking text with a much higher "semblance of genuineness" that would have been much easier to devise and execute, and that would have been perfectly adequate for the time. If the method generated 20 bits per word, or a Zipf plot with three flat steps, or a factorable token-pair distribution -- no one at the time would have noticed.
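Those statistics are cheap to state and check today. A rough sketch of two of them (bits per token, and the points of a Zipf plot), run here on a made-up token list:

Code:
import math
from collections import Counter

def word_entropy(tokens):
    """Empirical entropy of the token distribution, in bits per token."""
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in Counter(tokens).values())

def zipf_points(tokens):
    """(log rank, log frequency) points. A staircase of flat steps here,
    instead of a roughly straight line, would betray a mechanical
    generation method -- to a modern reader, not a medieval one."""
    counts = sorted(Counter(tokens).values(), reverse=True)
    return [(math.log(r), math.log(c)) for r, c in enumerate(counts, start=1)]

tokens = "daiin ol chedy qokeedy daiin shedy ol daiin".split()
print(f"{word_entropy(tokens):.2f} bits/token")
print(zipf_points(tokens))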
And one could also simply scan the Bible back to front, copying every third word with a simple encoding. Or anything like that, without even being too careful about it. That "ciphertext" would be mysterious and impossible to decipher, but would look as "genuine" as the Bible, and would even have the right Zipf plot, entropy, etc.
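A minimal sketch of that recipe; the Caesar shift stands in for the unspecified "simple encoding" and is my own choice:

Code:
def fake_ciphertext(plaintext, step=3, shift=5):
    """Scan the source back to front, keep every `step`-th word,
    and apply a trivial letter substitution (here a Caesar shift)."""
    words = plaintext.split()[::-1][::step]
    enc = lambda w: "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.isalpha() else c
        for c in w.lower())
    return " ".join(enc(w) for w in words)

print(fake_ciphertext("In the beginning God created the heaven and the earth"))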
Kelley made up an "Enochian language" complete with an "Enochian alphabet", and filled a whole book with it, without any device like wheels or bags of cards. Yet the result was good enough to fool mathematician John Dee, all the way to his grave.
I still haven't seen any solid evidence that Voynichese is not natural language in the plain. But I have seen a lot of hints that it is...
All the best, --stolfi