Ielle,
I'm not an expert in Latin texts, but I studied the language for several years at school and later as a young adult.
I have to admit that I am not able to translate a single word of this Latin translation of the Voynich manuscript.
Could you ask this blogger which dictionary he used to make this translation?
I'd be interested in what the blogger has to say as well, but the impression I got (and this is JUST a guess) is that maybe he used Google Translate to try out character assignments on a chunk of text, and once it started looking like Latin he extended that into a character-assignment chart for the rest of the manuscript.
Totally a guess, but the problem is, you can type semi-nonsense sentences into Google Translate and the software tries to make a grammatical sentence out of it as long as SOME of it is valid. In other words, you don't have to supply completely correct words in any grammatical order (and they certainly don't have to be words that were specific to the 15th century) for the English translation to look somewhat valid. To someone who doesn't know Latin, it might look like a legitimate translation.
I could be completely wrong, but since most of the "Latin" he has offered so far is not actually Latin, perhaps that is how the translation table was developed.
There might be some Latin in the manuscript, but if so, it's definitely not anything like this.
Let me give you an example of what I mean.
I just typed some completely random nonsense text into Google Translate, with a few spaces to break up the nonsense chunks, set it to Arabic > English, and this is what it gave me:
دفقج أجي أجلك بحج
"I will come to you with a pilgrimage"
This does work pretty well, provided that you type in words which look a bit like the source language. Without knowing a word of Arabic, I got:
أل نار شر شي
The fire is evil
Can't argue with that.
(02-08-2017, 07:53 PM)ReneZ Wrote: I would be interested if someone had the capability of computing the character and digraph entropies of this 'plain text'.
My prediction (based on the fact that the Voynich text is being 'expanded') is that the digraph entropy of the plain text is even lower than that of the Voynichese text, which is already anomalously low.
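For what it's worth, here is a minimal Python sketch of how such entropies could be computed (the helper names and the sample string are my own, purely for illustration): h1 below is the single-character Shannon entropy, h2 the conditional entropy of a character given its predecessor, both in bits per character.

```python
from collections import Counter
from math import log2

def h1(text):
    """First-order entropy: Shannon entropy of single-character frequencies (bits per character)."""
    counts = Counter(text)
    total = len(text)
    return -sum(n / total * log2(n / total) for n in counts.values())

def h2(text):
    """Second-order entropy: conditional entropy of a character given the previous one (bits per character)."""
    pairs = Counter(zip(text, text[1:]))
    prefixes = Counter(text[:-1])
    total = len(text) - 1
    return -sum(n / total * log2(n / prefixes[prev]) for (prev, _), n in pairs.items())

# Toy usage on an arbitrary sample string; a real test would use the proposed
# 'plain text' and a Voynichese transliteration instead.
sample = "multos per gentes et multa per aequora vectus"
print("h1 =", round(h1(sample), 3), "  h2 =", round(h2(sample), 3))
```

Running the same two functions over the proposed 'plain text' and over a Voynichese transliteration would give the direct comparison.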
No. A 2gram list over a list will have the same entropy as when over the same list if these are the same but expanded 2grams
(03-08-2017, 08:19 PM)Davidsch Wrote: No. A 2gram list over a list will have the same entropy as when over the same list if these are the same but expanded 2grams
I'm sorry, I don't follow that, but what I wrote is certainly correct. By expanding individual characters into fixed strings of characters, one reduces the entropy. Essentially by definition.
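To put a rough number on that, here is a small self-contained sketch (the four-symbol 'ciphertext' and the two-character expansion table are invented for illustration, not the blogger's actual scheme): expanding each character into a fixed string pushes the conditional digraph entropy down.

```python
import random
from collections import Counter
from math import log2

def h2(text):
    """Conditional digraph entropy: H(next char | previous char), in bits per character."""
    pairs = Counter(zip(text, text[1:]))
    prefixes = Counter(text[:-1])
    total = len(text) - 1
    return -sum(n / total * log2(n / prefixes[prev]) for (prev, _), n in pairs.items())

# Random 'ciphertext' over four symbols, and an invented expansion table
# that replaces each single character by a fixed two-character string.
random.seed(1)
cipher = "".join(random.choice("abcd") for _ in range(10000))
expansion = {"a": "qo", "b": "ch", "c": "dy", "d": "ol"}
plain = "".join(expansion[ch] for ch in cipher)

print("h2 of original :", round(h2(cipher), 3))   # close to 2 bits for random 4-symbol text
print("h2 of expanded :", round(h2(plain), 3))    # noticeably lower
```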
(03-08-2017, 02:32 PM)Paris Wrote: Ielle,
.....
Could you ask this blogger which dictionary he used to make this translation?
Sorry, I feel a bit reluctant to make that contact.
(03-08-2017, 09:25 PM)ReneZ Wrote: (03-08-2017, 08:19 PM)Davidsch Wrote: No. A 2gram list over a list will have the same entropy as when over the same list if these are the same but expanded 2grams
I'm sorry, I don't follow that, but what I wrote is certainly correct. By expanding individual characters into fixed strings of characters, one reduces the entropy. Essentially by definition.
Not sure that I understand what is discussed here, but in the simple example of duplicating each character the 1st order entropy will remain the same while the 2nd order entropy will change.
Consider a simple sequence 'abababab....'. If we modify it to 'aabbaabbaabb....', then neither the alphabet itself nor the frequencies of individual characters change, hence the 1st order entropy is intact. In contrast, we obtain new bigrams ('aa' and 'bb' in addition to 'ab' and 'ba'), which changes the conditional probabilities of characters appearing. In the first sequence, everything is pre-determined: you always get 'a' after 'b' and 'b' after 'a' (hence h2 = 0). In the second sequence, the probability of getting 'a' after 'b' ('b' after 'a', etc.) drops to 1/2, and h2 rises to unity.
Anton,
this may not be the best example. Both strings carry essentially zero information, so whatever subtle difference there is may not mean much...
Yes, but mathematically they illustrate the principle. Whatever the sequence, duplicating each character would not change h1 (because character frequencies would not change), while it would change h2 (because the frequencies of bigrams would change).
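For completeness, here is a quick numerical check of the 'abab...' versus 'aabb...' example (a minimal sketch; the h1/h2 helpers are the same as in the earlier sketch, repeated here so it runs on its own):

```python
from collections import Counter
from math import log2

def h1(text):
    """Single-character Shannon entropy (bits per character)."""
    counts = Counter(text)
    return -sum(n / len(text) * log2(n / len(text)) for n in counts.values())

def h2(text):
    """Conditional digraph entropy: H(next char | previous char), in bits per character."""
    pairs = Counter(zip(text, text[1:]))
    prefixes = Counter(text[:-1])
    total = len(text) - 1
    return -sum(n / total * log2(n / prefixes[prev]) for (prev, _), n in pairs.items())

s1 = "ab" * 5000    # 'ababab...'
s2 = "aabb" * 2500  # 'aabbaabb...' -- each character of s1 duplicated

print("ab   :", round(h1(s1), 3), round(h2(s1), 3))   # h1 = 1.0, h2 = 0.0
print("aabb :", round(h1(s2), 3), round(h2(s2), 3))   # h1 = 1.0, h2 close to 1.0
```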