The Voynich Ninja

Full Version: How to recombine glyphs to increase character entropy?
That is certainly better than what I managed, Rene. Now I'm curious.
(18-04-2022, 12:55 AM)ReneZ Wrote: The transformation works in both directions

This is a great property! I guess the Italian_conv dot shows the result of the inverse (encoding) transformation. It will be interesting to see the results for other languages as well.
Did anyone try to analyze Venetian?
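
On the "works in both directions" point: just to illustrate what a reversible glyph recombination can look like, here is a toy Python sketch. The digraph groupings below are placeholders of my own, not Rene's actual scheme.

Code:
# Toy illustration of a reversible glyph-recombination step.
# The digraph choices are placeholders, not the scheme discussed above.
PAIRS = {"ch": "C", "sh": "S", "ee": "E", "ii": "I"}
INVERSE = {v: k for k, v in PAIRS.items()}

def encode(text):
    """Replace each listed digraph with a single stand-in symbol."""
    out, i = [], 0
    while i < len(text):
        if text[i:i + 2] in PAIRS:
            out.append(PAIRS[text[i:i + 2]])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

def decode(text):
    """Expand every stand-in symbol back to its digraph."""
    return "".join(INVERSE.get(c, c) for c in text)

sample = "qokeedy shedy chol daiin"
assert decode(encode(sample)) == sample  # the mapping is invertible

Entropy would then be measured on the encoded string, where the frequent digraphs count as single characters.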
(18-04-2022, 09:56 AM)Searcher Wrote: Did anyone try to analyze Venetian?

These statistics are so broad at this stage that the selected dialect does not make any difference. This is something to be considered at a much later stage, in my opinion.
I don't understand much about the peculiarities of this kind of statistical analysis, and I don't know how to make such beautiful tables and graphs, but I would still like to know more about it, so I'll ask one more question. Would an analysis of a relatively small amount of text, say a few pages, be representative? In particular, I am interested in the text entropy of f85r, f85v and f86v. It seems to me that the entropy of the text on these pages will be much higher. What worries me is that the Voynich text is usually compared to texts of a perhaps completely different nature. Has anyone had the opportunity to analyze the entropy of medieval recipes, spells, prayers or herbals? Would they give the same result (counting and not counting spaces)? I think that the text of f85r-v and f86v is closer to average texts; perhaps this is just an impression, so I'd like to make sure with a numerical indicator. I think the same applies to the text of folios f58r-v, and possibly some others.

Forgive me if I missed something; if the entropies of various sections of the manuscript have already been discussed and compared, tell me where to read about it. I'd appreciate that.
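
For the "counting and not counting spaces" comparison, this is roughly how it is usually done on a transcription snippet; a rough sketch, where the sample string and the '.' word-break convention are placeholders rather than a real folio.

Code:
# Conditional character entropy h2 = H(bigrams) - H(first characters),
# computed once with word breaks kept and once with them removed.
import math
from collections import Counter

def cond_entropy(text):
    n = len(text) - 1
    bigrams = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    h_bi = -sum(c / n * math.log2(c / n) for c in bigrams.values())
    h_first = -sum(c / n * math.log2(c / n) for c in firsts.values())
    return h_bi - h_first

page = "qokeedy.shedy.qokain.chol.daiin"  # placeholder for a real folio transcription
print("with spaces:   ", cond_entropy(page))
print("without spaces:", cond_entropy(page.replace(".", "")))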
Hi Searcher, this is the 'classic' paper on Voynich entropy; it mentions spells, Hawaiian, parts of the VMS, etc.:
Understanding the Second-Order Entropies of Voynich Text, Dennis J. Stallings, 1998

voynich.nu has several pages on entropy; one of them gives scores for some parts of the VMS.

Koen G has a blog post, 'entropy-hunting-bigger-and-better', with multiple values from various VMS sections.

Text length should also be considered.


P.S. If you want, put the folios you want numbers for in another thread, preferably using voynich.nu folio numbering (f86 is effectively 4 pages), and I'll have a go. Though it will be in EVA, using the TT transcription.
A quick check (keeping spaces) on f86v4 + f86v6 + f86v5 + f86v3 gives: h0 = 4.3923, h1 = 3.8821, h2 = 2.0308.
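
For anyone wanting to reproduce numbers like these, here is a minimal sketch of the usual formulas (not the exact script used for the figures above): h0 from the alphabet size, h1 from single-character frequencies, and h2 as the conditional entropy of a character given the one before it. The sample string is a placeholder.

Code:
# h0 = log2(alphabet size), h1 = single-character entropy,
# h2 = H(next char | previous char) = H(bigrams) - H(first characters).
import math
from collections import Counter

def h0(text):
    return math.log2(len(set(text)))

def h1(text):
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in Counter(text).values())

def h2(text):
    n = len(text) - 1
    bigrams = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    h_bi = -sum(c / n * math.log2(c / n) for c in bigrams.values())
    h_first = -sum(c / n * math.log2(c / n) for c in firsts.values())
    return h_bi - h_first

text = "qokeedy.shedy.qokain.chol.daiin"  # placeholder; use the real folio text here
print(h0(text), h1(text), h2(text))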
I am currently on vacation so I can't check anything. But I would even dare to put forward the hypothesis that it should be incredibly difficult to write a Voynichese text (in EVA) that has "normal" entropy levels. I'd think that the system is inherently low entropy, independent of the page. I might be wrong though, because there are certainly internal differences within the MS. The most notable difference may be that Q13 has an even lower entropy than the rest.
(18-04-2022, 05:15 PM)RobGea Wrote: Understanding the Second-Order Entropies of Voynich Text, Dennis J. Stallings, 1998
This was perhaps the first time I paid attention to the part of Stallings' article concerning schizophrenic language. I don't know whether our author was schizophrenic, or used a telegraphic style, consciously omitting prepositions etc. to make summaries of the articles he was copying.
How would the entropy vary if we replaced the double e, for example, with u? Even if the change were not significant, I would like to understand in which direction it varies when we add more letters to our alphabet.
Usually it would increase entropy. However, in the VM this is never certain, because its predictability works on various levels. So if you introduce this new "u", chances are that this "u" itself becomes very predictable (often preceded by "ch", followed by "d", and so on).
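
To make the experiment concrete, here is a rough sketch: merge every "ee" into a single new symbol and recompute h1 and h2 before and after. The sample string is a placeholder, and using "u" as the new symbol assumes it does not already occur in the transcription.

Code:
# Merge the digraph "ee" into one new symbol and compare entropies.
import math
from collections import Counter

def h1(text):
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in Counter(text).values())

def h2(text):
    n = len(text) - 1
    bigrams = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    h_bi = -sum(c / n * math.log2(c / n) for c in bigrams.values())
    h_first = -sum(c / n * math.log2(c / n) for c in firsts.values())
    return h_bi - h_first

original = "qokeedy.shedy.cheedy.qokeey.daiin"  # placeholder text
merged = original.replace("ee", "u")            # assumes "u" is otherwise unused
print("h1:", h1(original), "->", h1(merged))
print("h2:", h2(original), "->", h2(merged))

Whether h2 actually goes up then depends on how predictable the new symbol's neighbours turn out to be, which is exactly the point above.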