The Voynich Ninja

Full Version: Discussion of "A possible generating algorithm of the Voynich manuscript"
(17-09-2020, 06:18 PM)Alin_J Wrote: Perhaps a few modifications could make it work better. So because of that I am sorry that he decided to leave.

Exactly. Well said.
(17-09-2020, 06:18 PM)Alin_J Wrote: Perhaps a few modifications could make it work better. So because of that I am sorry that he decided to leave. As it is now, our opinions on the matter will likely remain varied. But if more facts get put into the theory and modify it a little, maybe it would fit the observations better and be more convincing.

It would be interesting to generalize Timm's algorithmic rules as a point in a searchable space of encoding steps.

While it may be true that Timm's rules don't adequately capture all of the statistical qualities of Voynichese, perhaps some other point in that space would.  

That space can be explored using a search method that manipulates the text-production rules and measures how close the result comes to the desired qualities of Voynichese.  Repeat until the result is as close as possible without producing overly complex rules (preferably something a scribe could implement).  It would be an interesting demonstration of feasibility.
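To make that concrete, here is a rough Python sketch of the kind of search loop I mean: a hill-climb over a couple of rule parameters of a throwaway toy generator, scored by the distance between the generated text's statistics and some target values. The generator, the parameters and the target numbers are all placeholders of mine, not Timm's rules and not measured Voynichese values.

```python
# A toy sketch, not Timm's algorithm: hill-climb over two 'rule' parameters
# of a throwaway generator, scoring candidates by the distance between the
# generated text's statistics and target values (the numbers below are
# placeholders, not measured Voynichese values).
import random

TARGETS = {"mean_word_len": 5.0, "repeat_rate": 0.08}   # placeholder targets

def generate(params, n_words=2000, seed=0):
    """Toy generator: average word length and repetition odds are the 'rules'."""
    rng = random.Random(seed)
    alphabet = "odaiychekls"
    words, prev = [], None
    for _ in range(n_words):
        if prev is not None and rng.random() < params["p_repeat"]:
            words.append(prev)                 # copy the previous word unchanged
        else:
            length = max(1, int(round(rng.gauss(params["len_mu"], 1.5))))
            prev = "".join(rng.choice(alphabet) for _ in range(length))
            words.append(prev)
    return words

def score(words):
    """How far the text's statistics are from the targets (lower is better)."""
    mean_len = sum(map(len, words)) / len(words)
    repeats = sum(a == b for a, b in zip(words, words[1:])) / (len(words) - 1)
    return abs(mean_len - TARGETS["mean_word_len"]) + abs(repeats - TARGETS["repeat_rate"])

def search(steps=500, seed=1):
    rng = random.Random(seed)
    best = {"len_mu": 4.0, "p_repeat": 0.01}
    best_score = score(generate(best))
    for _ in range(steps):
        cand = {"len_mu": best["len_mu"] + rng.gauss(0, 0.2),
                "p_repeat": min(0.5, max(0.0, best["p_repeat"] + rng.gauss(0, 0.02)))}
        cand_score = score(generate(cand))
        if cand_score < best_score:            # keep the closer rule set
            best, best_score = cand, cand_score
    return best, best_score

print(search())
```

The interesting question is how rich the rule space and the statistics need to become before this loop produces something a scribe could plausibly have carried out.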

Has this sort of "algorithmic search for an algorithm" been done before?  It seems similar to research I've seen on deriving formal grammars for Voynichese.
I have an algorithm for generating Voynich text, but my standards are high enough that it won't satisfy me until it's virtually perfect, so I'm reluctant to post the results until I'm happy with it.

Also, I haven't had time to tweak and improve it because I'm working on so many other things (largely the palaeography and almost 60 mostly-written-but-unfinished blog posts). I am cursed with too many interests and there just aren't enough hours in the day to pursue them all.
D.O.,

should the search space be limited to: "taking some previous word and applying some changes to it in order to create the present word to be written"?

The above is a generic formulation of Torsten's approach, and I am not aware of any more precise one.
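In code, that generic formulation (and only that; the edit operations, lookback window and alphabet below are illustrative assumptions of mine, not Torsten's actual rules) might look something like this:

```python
# A sketch of the generic formulation only ("take some previously written word,
# apply some change, write the result"); the edit operations, lookback window
# and alphabet are illustrative assumptions, not Torsten's actual rules.
import random

ALPHABET = "odaiychekls"

def modify(word, rng):
    """Apply one random single-glyph edit (substitute, insert or delete)."""
    ops = ["sub", "ins", "del"] if len(word) > 1 else ["sub", "ins"]
    op = rng.choice(ops)
    i = rng.randrange(len(word))
    if op == "sub":
        return word[:i] + rng.choice(ALPHABET) + word[i + 1:]
    if op == "ins":
        return word[:i] + rng.choice(ALPHABET) + word[i:]
    return word[:i] + word[i + 1:]

def autocopy_text(seed_word="daiin", n_words=60, lookback=10, seed=0):
    """Each new word is a modified copy of one of the last few words written."""
    rng = random.Random(seed)
    words = [seed_word]
    for _ in range(n_words - 1):
        source = rng.choice(words[-lookback:])
        words.append(modify(source, rng))
    return " ".join(words)

print(autocopy_text())
```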

A forward approach (simulating it) will never generate anything like the Voynich MS text because, if this is how it was done, the process was stochastic (though governed by some human behaviour).

A reverse approach (optimising an algorithm with some parameters that best fits what we observe) is what I think you are suggesting, but its validation will be limited by the same problem as for the forward approach.

Furthermore, such an algorithm must also be constrained by the well-noted special features of the Voynich MS text, like word patterns, paragraph top-line behaviour, line-ending behaviour, etc., which necessarily make the algorithm for 'arbitrarily' changing a previous word into the present one *very* complex.

I believe, from reading a comment at Nick Pelling's blog, that the algorithm in Torsten's app is already quite complex, but it does not yet model these aspects.
I have not checked the Java code, and I will not, but I believe that Nick has.

I wonder if this addresses your question adequately....
Rene,

Mainly, I'm just curious about the general problem of discovering the "text generating function" if it indeed exists.  For example, an experimenter could develop multiple generated texts, based on different rules, and having different statistical qualities.  These would constitute a test set.  Then, the rules are hidden away and an algorithmic search of some kind tries to rediscover the rules.  If this is successful for the test set, then if a similar rule set exists for Voynichese, there is some chance this approach might discover it.
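As a toy illustration of that protocol (everything here is invented: a one-parameter generator and a single statistic), the experiment would look roughly like this: measure the statistics of a text whose generating parameter is kept hidden, then let the search recover the parameter from the statistics alone.

```python
# A toy version of the protocol, everything invented for illustration:
# one hidden generation parameter, one statistic, and a brute-force 'search'
# that only ever sees the statistic, never the parameter.
import random

def toy_generate(p_copy, n=5000, seed=0):
    """Generator whose only 'rule' is the probability of copying the last word."""
    rng, words, prev = random.Random(seed), [], "dain"
    vocab = ["dain", "ol", "chedy", "qokeey", "shedy"]
    for _ in range(n):
        prev = prev if rng.random() < p_copy else rng.choice(vocab)
        words.append(prev)
    return words

def repeat_rate(words):
    return sum(a == b for a, b in zip(words, words[1:])) / (len(words) - 1)

# 1. The experimenter produces a test text with a hidden rule parameter.
hidden = 0.37
target = repeat_rate(toy_generate(hidden))

# 2. The search sees only the target statistic and scans candidate parameters.
recovered = min((p / 100 for p in range(100)),
                key=lambda p: abs(repeat_rate(toy_generate(p)) - target))

print(f"hidden={hidden}  recovered={recovered}")
```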

I don't think it should be limited to stochastically self-generating texts such as Timm's algorithm.  In a general sense, we are looking for the function that turned the readable text into something unreadable.  So "self-generated" vs. "encipherment or translation" are two broad categories of "rules" that could be explored algorithmically.  It is possible that some complex behavior that we observe in the text can arise from simplistic rules.  Wolfram's elementary cellular automata come to mind: very simple rules that produce unexpected, random-seeming number sequences, and that remain so far an uncracked nut in mathematics.
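For anyone who wants to see that "simple rules, complex output" point on screen, a minimal sketch of one well-known example, an elementary cellular automaton such as Rule 30 (my choice of example), is enough:

```python
# Minimal sketch of an elementary cellular automaton (Rule 30): each cell's
# next state depends only on itself and its two neighbours via a fixed
# 8-entry lookup table, yet the centre column looks statistically random.
RULE = 30
WIDTH, STEPS = 63, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1                       # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                     + 2 * row[i]
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```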
Yes, the 'what is the minimum rule set for creating statistically correct Voynichese?' problem is quite intriguing in its own right.
(17-09-2020, 02:55 PM)DONJCH Wrote: In my opinion, the forum is poorer for this loss.

Couldn't agree more, and I do hope Torsten will reconsider his decision to leave. It is not what the Voynich community and its research need.

My first contact with the VMS was somewhere around 2014-2015, when I read about it from BBC News, I think, and immediately got inspired, as probably many others here did. But the VMS remained just a curiosity until around 2016, when I really got interested and started to gather information, books, publications, articles and so forth, one of the first being Mary d'Imperio's Elegant Enigma.

I didn't know or care about any authority or community, and while I knew about something called the VMS-LIST or similar, I didn't want to get involved. I just wanted to build my own idea of the thing. I did not know any of the currents or feuds, or the beliefs like fake, forgery, or genuine, nor did I want any bias or prejudice, so I went on reading everything without preconditions.
I read Hyde & Rugg's ideas, I found Ellie Velinska's thoughts, read Philip Neal, Julian Bunn, Elmar Vogt, Stolfi, Rene, Sean Palmer, the Journal of Voynich Studies, and many others. I didn't know what they believed in or what they represented in the Voynich world. They had their ideas and I learned about them. Much later I found out about Nick, JKP, Diane.

I learned about the history, LAAFU, michitonese, core-crust-mantle, glyph positions, the A and B languages, and a lot of other interesting stuff. Good stuff, but still a vague and open-ended mixed bag of unrelated things. The first real integral interpretative approach, a theory I'd say, that I found was when I downloaded Torsten Timm's 2015 paper. I read it a few times to be sure I understood it, and while I did not agree with much of it, it was still a solid attempt and approach, and it got my attention and respect as the first complete approach I had encountered. I repeat, I do not agree with it, but the autocopy theory might still be possible, or at least be part of the enigma. Or it could be totally wrong. Nevertheless, both papers deserve respect and attention, in my opinion.

From all that reading I built my own idea and I pursue it, but Timm's papers have kept me thinking about many options, given me new ideas, and helped me better understand how Voynichese could work. All the other works have contributed to this, and I do hope that more and more papers like Timm's and others come in. What I see is that nothing in Voynich seems to be right, little or nothing can be fixed or agreed upon, almost everything is ambiguous, and statistics point everywhere and still nowhere. There is no foundation. We need more (wrong) theories and fewer big egos to get that foundation. It needs more people, more ideas, more controversy, not less.
(18-09-2020, 02:05 PM)Scarecrow Wrote:
...
What I see is that nothing in Voynich seems to be right, little or nothing can be fixed or agreed upon, almost everything is ambiguous, and statistics point everywhere and still nowhere...


Yes indeed.
(18-09-2020, 01:41 PM)doranchak Wrote: I don't think it should be limited to stochastically self-generating texts such as Timm's algorithm.  In a general sense, we are looking for the function that turned the readable text into something unreadable.

Thanks, I understand.
As far as I know, this has not been tried in any generic manner....
(18-09-2020, 01:49 PM)RobGea Wrote: Yes, the 'what is the minimum rule set for creating statistically correct Voynichese?' problem is quite intriguing in its own right.

As a control group, one should also take a relatively simple and repetitive example of actual natural-language text, perhaps along the lines of a simple Dr. Seuss children's book like Hop on Pop, and subject it to the same test: 'What is the minimum rule set for creating statistically correct Hop on Pop text?' A sketch of what that test could look like follows below.
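To sketch it (the sample line is a stand-in of mine rather than the actual book text, and a first-order word bigram model is just one arbitrary candidate 'rule set'): fit the model to the sample, regenerate text from it, and compare a few crude statistics.

```python
# A sketch of the control test, with a stand-in sample line rather than the
# actual book text, and a first-order word bigram model as one arbitrary
# candidate 'rule set': fit it to the sample, regenerate, compare statistics.
import random
from collections import Counter, defaultdict

sample = ("hop on top stop pop hop on top " * 40).split()

def stats(words):
    repeats = sum(a == b for a, b in zip(words, words[1:])) / (len(words) - 1)
    return {"vocab": len(set(words)),
            "mean_len": round(sum(map(len, words)) / len(words), 2),
            "repeat_rate": round(repeats, 3)}

# Candidate 'rule set': for each word, the distribution of words that follow it.
follows = defaultdict(Counter)
for a, b in zip(sample, sample[1:]):
    follows[a][b] += 1

def regenerate(n, seed=0):
    rng, word, out = random.Random(seed), sample[0], []
    for _ in range(n):
        out.append(word)
        nxt = follows.get(word)
        word = rng.choices(list(nxt), weights=list(nxt.values()))[0] if nxt else sample[0]
    return out

print("original text:", stats(sample))
print("bigram model :", stats(regenerate(len(sample))))
```

If even a model this crude matches the simple text closely, that tells us something about how discriminating the chosen statistics are.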