DonaldFisk > 10-04-2017, 08:10 PM
(10-04-2017, 06:33 PM)ReneZ Wrote: There are many interesting aspects to what you have done, and this can be used for some additional interesting experiments. It will be worth coming back to that later.
On the other hand...
While it may not seem obvious at first sight, I see that Nick caught on to it as well, and your approach is conceptually the same as Gordon Rugg's.
Your result is better than Gordon's in one respect, and worse in another.
You generate text that really looks very much like the Voynich text, much more so than that of Gordon.
On the other hand, Gordon presents a simple means by which it could (in theory) have been generated, whereas you don't.
Both are methods that try to reverse engineer the Voynich text "as we know it", but the result is not an exact match.
The method (both yours and his) would require further targeted tweaking in order to match the missing bits.
These include:
- to make sure that m appears predominantly at line ends (which is not yet the case)
- to make sure that f and p appear predominantly in the first lines of paragraphs (which they don't yet).
- the special properties of the line-initial words ("line as a functional unit")
Having said that, I have a couple of questions.
1) Do I see correctly that you have different state transition probabilities for the different sections of the MS?
2) Is each word started 'from scratch', or do the probabilities continue over word spaces?
The most interesting thing I find is that the Zipf law is followed so well just from the word generation based on state transition probabilities.
There's much more, but it will have to be later.....
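The word-generation scheme discussed above can be illustrated with a minimal sketch. This is not Donald Fisk's actual transition tables; it is a generic character-level state transition (Markov) model trained on a tiny, hypothetical stand-in corpus (a real experiment would use an EVA transcription of the manuscript):

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for a Voynich transcription sample.
corpus = ("daiin chedy qokeedy shedy daiin chol qokaiin chedy "
          "okaiin daiin shedy qokedy chedy daiin ol").split()

# State transition counts: current letter -> next letter.
# '^' marks word start, '$' marks word end.
transitions = defaultdict(Counter)
for word in corpus:
    prev = '^'
    for ch in word:
        transitions[prev][ch] += 1
        prev = ch
    transitions[prev]['$'] += 1

def generate_word(rng):
    """Walk the transition table from '^' until '$' is drawn."""
    prev, out = '^', []
    while True:
        letters, counts = zip(*transitions[prev].items())
        ch = rng.choices(letters, weights=counts)[0]
        if ch == '$':
            return ''.join(out)
        out.append(ch)
        prev = ch

rng = random.Random(0)
generated = [generate_word(rng) for _ in range(50)]

# Rank-frequency profile of the generated words (the Zipf check
# mentioned above: sort word frequencies in descending order).
ranked = Counter(generated).most_common()
```

Separate tables could be trained per section of the MS to answer question 1, and carrying `prev` across spaces instead of resetting to `'^'` would answer question 2 in the affirmative.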
-JKP- > 10-04-2017, 08:50 PM
Anton > 10-04-2017, 10:47 PM
ReneZ > 11-04-2017, 06:03 AM
-JKP- > 11-04-2017, 01:08 PM
(10-04-2017, 09:16 PM)DonaldFisk Wrote: In a 240-page manuscript of meaningless text, you're bound to find that sort of co-occurrence somewhere. Also, the plants are poorly drawn and few have been positively identified.
...
Torsten > 19-04-2017, 09:58 AM
(11-04-2017, 06:03 AM)ReneZ Wrote: Let me add that I don't reject the possibility that the text is meaningless. I consider it possible, and there are some arguments in favour of it.
However, the simulations shown here do not demonstrate this. Let me try to explain why, by using a thought experiment. (This could actually be done in practice).
Take some known text, why not 'Don Quixote'. This is certainly a meaningful text.
Sort all words in descending frequency of occurrence. This produces a list.
Put next to this a list of all words in the Voynich MS, equally sorted in order of descending frequency.
Using the resulting table of word pairs as a translation table, translate Don Quixote word for word into Voynichese. This results in a text using 100% real Voynichese words, and that text is meaningful.
Now shuffle all the words in Don Quixote around. (This is easily done with a computer. With paper and a pair of scissors it would take a considerable amount of time.)
The resulting text is certainly meaningless.
Again translate this into Voynichese using the same table as before.
The two texts in Voynichese that we obtained look extremely similar. Nobody would be able to tell whether one or the other is meaningful or not.
Both have exactly the same word length distribution and their 'Zipf graphs' are identical.
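The thought experiment above can be run in a few lines. This sketch uses a hypothetical short plaintext and a hand-picked list of Voynichese word types in place of Don Quixote and a full transcription, but the pipeline is exactly the one described: rank both vocabularies by frequency, pair them rank for rank, translate, then shuffle and translate again:

```python
import random
from collections import Counter

# Hypothetical stand-ins for Don Quixote and the Voynich vocabulary.
plaintext = ("the knight rode out and the knight saw "
             "the windmill and he charged").split()
voynich_words = ["daiin", "chedy", "qokeedy", "shedy", "chol",
                 "okaiin", "qokedy", "ol", "aiin", "dar"]

# Pair the two vocabularies in descending order of frequency.
plain_ranked = [w for w, _ in Counter(plaintext).most_common()]
table = dict(zip(plain_ranked, voynich_words))

# Translate word for word: a "meaningful" Voynichese text.
meaningful = [table[w] for w in plaintext]

# Shuffle the plaintext, then translate: a meaningless Voynichese text.
shuffled = plaintext[:]
random.Random(1).shuffle(shuffled)
meaningless = [table[w] for w in shuffled]

# The two outputs contain exactly the same words with the same
# frequencies, so their Zipf graphs are identical by construction.
assert Counter(meaningful) == Counter(meaningless)
```

Since the meaningless version is just a permutation of the meaningful one, every word-level statistic that ignores order (word lengths, frequencies, the Zipf curve) is provably identical between them, which is the point of the argument.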
-JKP- > 19-04-2017, 08:46 PM
Torsten > 21-04-2017, 11:10 AM
(10-04-2017, 02:50 PM)DonaldFisk Wrote: I have also worked out, in detail, the general method by which the text must have been generated.
In brief, the text appears to have been generated using state transition tables.
Davidsch > 21-04-2017, 12:35 PM
Torsten > 21-04-2017, 01:28 PM
(21-04-2017, 12:35 PM)Davidsch Wrote: In any language letters are predictable. In a sentence words are predictable if you look at the SVO or VSO etc. word order and grammar.
In Voynichese it is exactly the same: letters are predictable, and so are word sequences.
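The claim that letters are predictable can be quantified. A common way (my choice here, not something from the thread) is the average conditional entropy of the next letter given the current one: the further it falls below log2 of the alphabet size, the more predictable the text. A sketch on a toy English sample, assuming any natural-language text shows the effect:

```python
import math
from collections import Counter, defaultdict

# Toy English sample (spaces removed so only letters are modelled).
text = "the quick brown fox jumps over the lazy dog the dog barks".replace(" ", "")

# Bigram counts: current letter -> Counter of next letters.
bigrams = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    bigrams[a][b] += 1

def entropy(counter):
    """Shannon entropy (bits) of a frequency Counter."""
    total = sum(counter.values())
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

# Weighted average conditional entropy H(next | current).
avg = sum(entropy(c) * sum(c.values()) for c in bigrams.values()) / (len(text) - 1)

# Upper bound if letters were unpredictable: log2(alphabet size).
max_entropy = math.log2(len(set(text)))
```

Running the same measurement on a Voynich transcription and on natural-language samples of similar size would put a number on "exactly the same": predictable text gives `avg` well below `max_entropy`.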