The Voynich Ninja
turkicresearch.com - Printable Version

+- The Voynich Ninja (https://www.voynich.ninja)
+-- Forum: Voynich Research (https://www.voynich.ninja/forum-27.html)
+--- Forum: Analysis of the text (https://www.voynich.ninja/forum-41.html)
+--- Thread: turkicresearch.com (/thread-3099.html)

Pages: 1 2


RE: turkicresearch.com - Ahmet Ardıç - 28-07-2020

(14-02-2020, 06:16 AM)-JKP- Wrote:
Common_Man Wrote:
...
So are we expecting a constructed language to be the answer? Or meaningless text? I believe it is unfair to say that the solution should explain the frequency numbers. All languages have a number of linguistic phenomena underlying them. For example, as someone pointed out, English has a many-to-many correspondence between letters/letter combinations and sounds. But if someone proposes such a solution, we dismiss it, saying it has too many degrees of freedom. I don't think English would ever be reconstructed by people with this mindset if it were a lost language.


I didn't say any solution should explain the disparity in letter frequency. There are ways the text might have been manipulated to come out like Voynich text. Sadly, the people who offer substitution solutions do not explain this discrepancy in their methods; they don't even acknowledge it. I daresay many of them haven't even noticed it, because they cherry-pick the words that seem to work and ignore all the rest (in other words, they ignore about 85% of the manuscript).


What I was pointing out, since this thread is specifically about a substitution-style solution on turkicresearch.com, is that anyone proposing a simple substitution cipher MUST explain the positional and frequency anomalies that differentiate Voynich text from natural language. There is no natural language that has the positional characteristics of Voynich text. Even syllabic Asian languages do not function this way.

That's why all the people trying to do it this way (at least the ones I've seen so far) stall after picking out about 5% of the words that seem to work with their system. The next step, for those who keep going, is to start proposing that it is polyglot, or that each VMS glyph can represent multiple letters. Then they can maybe get up to 10% or 15% "hits" using their cherry-picking method. Applying their method to the rest of the text results in gibberish. In other words, they are SELECTING what works and ignoring what doesn't (which is easy to do when there are 200 pages of text) while simultaneously ignoring questions about the lack of grammar.
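The argument above rests on a property worth making concrete: a one-to-one substitution cipher relabels symbols but cannot change where in a word each (relabeled) symbol occurs or how often. The following sketch, not from anyone's posted method, tallies word-initial, word-medial, and word-final frequencies and checks that a toy substitution leaves those distributions identical up to renaming, which is why positional anomalies in Voynichese are evidence against a simple-substitution reading:

```python
from collections import Counter

def positional_frequencies(words):
    """Count how often each character appears word-initially,
    word-medially, and word-finally."""
    initial, medial, final = Counter(), Counter(), Counter()
    for w in words:
        if not w:
            continue
        initial[w[0]] += 1
        final[w[-1]] += 1
        for ch in w[1:-1]:
            medial[ch] += 1
    return initial, medial, final

# Toy comparison: apply a simple substitution cipher to English text.
plain = "the quick brown fox jumps over the lazy dog".split()
table = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                      "qwertyuiopasdfghjklzxcvbnm")
cipher = [w.translate(table) for w in plain]

pi, pm, pf = positional_frequencies(plain)
ci, cm, cf = positional_frequencies(cipher)

# The positional statistics survive the cipher unchanged,
# up to a relabeling of the symbols:
assert sorted(pi.values()) == sorted(ci.values())
assert sorted(pm.values()) == sorted(cm.values())
assert sorted(pf.values()) == sorted(cf.values())
```

So if the plaintext were a natural language and the cipher a plain substitution, the ciphertext would inherit that language's positional profile, which Voynich text does not match.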



Hello JKP, hello everyone, 

I read your interesting comments about our work and noted down the valuable parts of all those comments. Thank you for this. Please check the latest status of our website and our article in English when you have free time. We try to add new information, new word and sentence readings, and/or articles to our page every day.

The manuscript employs certain characters in a way that maps onto several potential Latin counterparts. For example, letters such as "p" and/or "f" are marked with the same grapheme. The same lack of distinction applies to diacritic characters (with the sole exception of "c" and "ç", which the author phonetically employed as separate letters). For example, the letters "o" and "ö" are not distinct, and unfortunately this ambiguity can redefine a word: "ol" and "öl" mean very different things. A common misconception is that such variation enables us to derive many pseudo-scientific transcriptions; that is not the case. It only makes everything harder, since we are forced to account for both readings and list them in the transcription. At no point do we pick and choose which syllables or meanings stay. The author deliberately employed characters with multiple phonological values; this creates a deeper linguistic expression and adds a multifaceted layer to the syntax. Furthermore, the primary reason the author chose such a complication becomes evident when the intent to encode is taken into account: the author coded the writing to be read from top to bottom in the first character row. We do want to reiterate that the alphabet transcription is not set in stone, as future progress will alter and shorten it.
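The described ambiguity (one grapheme standing for several letters) can be sketched mechanically. The ambiguity table below is illustrative only, built from the examples given in this post ("o"/"ö" and "p"/"f"); it is not the actual ATA alphabet table. The sketch expands one transcribed token into every reading the table permits:

```python
from itertools import product

# Hypothetical ambiguity table: each transcription symbol may stand
# for more than one Latin/Turkish letter. Illustrative values only,
# taken from the examples mentioned above, not the full ATA table.
AMBIGUOUS = {
    "o": ["o", "ö"],
    "u": ["u", "ü"],
    "p": ["p", "f"],
}

def candidate_readings(token):
    """Expand one transcribed token into every reading permitted
    by the ambiguity table."""
    choices = [AMBIGUOUS.get(ch, [ch]) for ch in token]
    return ["".join(combo) for combo in product(*choices)]

print(candidate_readings("ol"))  # ['ol', 'öl'] — both valid Turkish words
```

This makes concrete the point that the ambiguity multiplies candidate readings rather than letting the transcriber pick one freely: all expansions must be listed.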

So, the ATA alphabet transcription allows the reader to make different readings, since some of the text's marks are coded to correspond to multiple sound values; the author preferred not to write some information explicitly in this manuscript. This will become more evident in our further explanations. Although such permutations appear superficially easy to manipulate, they in fact create more candidate words and only make the transcription harder to translate. It is possible that these permutations (which can be interpreted in more than one way) have created different words that give meaningful results with multiple possibilities.

As a result, if a word can be read in more than one way in this manuscript, each reading often has at least one meaning in Turkish. In many cases, different readings of a word can stand in the same sentence without breaking the integrity of that sentence. For example, the word "OL" can also be read as "ÖL"; both are Turkish words with their own meanings. In other words, we do not choose and use what we want: in all translations we always show all the different readings. This is not our preference; rather, the VM author, some 600 years ago, found it appropriate to write this way.

Thanks,

Ahmet Ardıç

Main page: turkicresearch.com


RE: turkicresearch.com - Ahmet Ardıç - 28-07-2020

(13-02-2020, 08:57 AM)-JKP- Wrote: It fascinates me that this site includes so many concepts that are essentially the same as in medieval Latin scripts.

For example, their interpretation of EVA-sh as a ligature starting with c and ending with r or c, with an apostrophe in between, is exactly the same process that is used to read Latin abbreviations, and even uses most of the same letters:



In Latin, this very common ligature can be read as cer, c'r (with more letters in between), cet, cert, ci, and anything that looks like c, r or old-style t with letters in between (the "cap" is an apostrophe symbol in Latin that can be one or more characters). It's the exact same concept.


Hi,

Use of Abbreviations:
Another of the most significant syntactic choices the author made is the occasional use of abbreviations throughout the manuscript. The alphabet the author used contains abbreviations that have both phonetic and numerical value. A close English analogy would be abbreviating a statement like "for you" as "4U": phonetically they sound the same. Similarly, the author of the manuscript uses numerical values alongside letters to create words and apply specific syntax; yet unlike the ad hoc use of abbreviations (as in the English "4U" example), the author uses the abbreviations as part of the alphabet itself. For example, the number "8" is called "sekiz" in Turkish, so the symbol "8" can both convey the numerical value eight and serve as part of the alphabet to convey the phonetic sound "se". More information regarding the nature of letters and numbers can be found in the appendix.
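The rebus principle described above can be sketched as a simple character-level expansion. The table below contains only the two examples actually given in this post ("8" → "se" and the English "4" → "for" analogy); the word "8n" → "sen" is my own hypothetical illustration, chosen because "sen" happens to be the Turkish word for "you":

```python
# Hypothetical rebus table, built only from the examples in the post:
# "8" = "sekiz", contributing its opening syllable "se", and the
# English analogy "4U" -> "for you". Not the author's full table.
REBUS = {
    "8": "se",   # sekiz -> "se"
    "4": "for",  # English analogy: 4U -> "for you"
}

def expand_rebus(token):
    """Replace each digit with the opening syllable (or word) of its
    number name; leave ordinary letters untouched."""
    return "".join(REBUS.get(ch, ch) for ch in token)

print(expand_rebus("4U"))  # 'forU', read aloud as "for you"
print(expand_rebus("8n"))  # 'sen' — hypothetical example; Turkish for "you"
```

The point of the sketch is that the digit is not a shorthand chosen per word but a regular alphabet member with a fixed phonetic value, so the same expansion applies everywhere the symbol occurs.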

Thanks,

Ahmet Ardıç