The Voynich Ninja

Full Version: Calgary engineer believes he's cracked the mysterious Voynich Manuscript
Dear RobGea,
Artificial intelligence giving people wrong information or manipulating them is not the issue here. With its current structure, artificial intelligence mainly compiles and summarizes the information spread by people and relays it back to them. Some of the information it provides is certainly wrong, but it repeats widely believed falsehoods only because people are spreading them. What is the common view about the VM? The most widespread opinion is that the VM texts are still unreadable and that the language of the content is not understood. However, when I asked the AI to compare A, B, and C on the consistency of their evidence and report the result, I came to understand that artificial intelligence can point to the correct path even against common views. Despite its current errors, it could be said that artificial intelligence is about to gain the ability to evaluate, in seconds, all articles claiming "we have VM texts in a natural language" and point to the most likely conclusion. I do not need to check my work with artificial intelligence. There is no need, because I can already read the VM content, and the conclusion that the content is in Turkish is clearly and concretely based on tangible, material evidence. Rather, I am using artificial intelligence to help those who cannot see this understand that the result is most likely Turkish.
Dear Novacna,
We already have the key to translate the entire text. Some full pages have already been translated. Across 240 pages, we are already reading numerous Turkish words in every line, and about 21% of them have hardly changed their phonetic form. We do not use artificial intelligence to read VM texts or to check our work. We use it because it knows almost all languages, can access all dictionaries, and can reach almost all articles written on this subject, and we use it to compare those articles with ours in terms of evidence. Why do we do this? Some people who do not know Turkish assume that only if they knew Turkish could they weigh our claim about the VM-Turkish hypothesis. We present the most likely content of the VM in the machine's language, reminding them that they are not smarter or more knowledgeable about Turkish than the machine. This may help some intelligent people accept that the conclusion has already been reached and that the content of the VM is most likely in Turkish. There is no need to provide helpful information to everyone; if it gives clues to people with the knowledge to make academic and scientific comparisons, that will be sufficient for us.
Dear Koen,
At its current level, artificial intelligence compiles, summarizes, and shares common information. AI cannot yet generate new information from scratch. But I think the way to make better use of artificial intelligence is to learn to ask it questions correctly. What we show here is that, while the common belief about the VM is well known, artificial intelligence can compare the possibilities beyond that common belief and choose the most likely candidate among them. This is a very important advance in itself, but it is not possible to get results from artificial intelligence by asking the wrong questions.
(02-06-2024, 08:39 PM)Ahmet Ardıç Wrote: There is no need to provide helpful information to everyone, but if it provides clues to people who have the knowledge to make academic and scientific comparisons, that will be sufficient for us.
It's clearer now. So I'm going to wait until the people you cite want to publish the complete translation.
All articles claiming the VM is in an Indo-European language, a Semitic language, or some other natural language, along with our articles claiming the VM has Turkish content, have been compared by the machine based on the quality and consistency of their evidence. I think none of you will come forward and say that there is no Turkish in GPT's knowledge pool, or that this machine does not understand Turkish and will not understand it after reading our article. If so, let us share the GPT-3.5, GPT-4, and ChatGPT MaxAI answers on this subject again, and think about how much light GPT can shed on the common current views of the VM:

[attachment=8623]

[attachment=8624]

[attachment=8625]

[attachment=8626]

[attachment=8627]

[attachment=8628]

[attachment=8629]
In my first communication experience with GPT, I asked it a single question in one sentence, and the machine relayed information from Wikipedia and similar sources to me. Today, I asked the same question, and the answer was different. I think the machine can really learn, but I would still like to be sure.

The simple question I mentioned was:
Which language do you think is most likely the writing language of the Voynich manuscript?

Now, let's test GPT together with some volunteers among you. We can test whether GPT has the ability to learn on its own based on its communication experience. What it has learned from me may be expressed differently to you. Of course, I would expect it not to be, but testing is not difficult, and I suggest we do it with the help of volunteers.

Very recently, I had GPT read some of my unpublished articles and create PDF summaries of them. In these articles, I asked it to examine various pieces of evidence I presented and, additionally, to test the words I read in the VM against dictionary pages. Later, I asked it to read and examine academic articles that claim the VM texts are in a natural language. Then I asked it to compare all these articles on certain criteria, such as the consistency of the evidence presented. In short, I asked the right questions within a defined research framework so that it could compare the articles at hand instead of giving general-knowledge answers. That is why, after a long conversation, I was able to get from the machine the answers I shared earlier.

What did I do today? Today, I asked the machine the same single-sentence question on the same topic again. This time, its answer had changed; it was no longer general knowledge from Wikipedia.

In the first of the two images below, you can see my question at the top and the answer given by the machine today.

What I am curious about now is whether it gives the same answer when the same question is asked in different languages and different geographies. In principle, if you ask the same question in your country, you should get the same answer. I asked GPT-4 this short question. I kindly request that you share the machine's answers, along with the question you asked, as screenshots. Normally, the machine's response should not change with the person or geography, but I am not entirely sure, and I would appreciate it if you could test this and share the results.

Maybe the machine will not carry its communication experience with me, or the information it received from me, over to the next person or geography. In other words, GPT may be giving me different answers because it remembers its conversations with me. If so, this could be a weakness of GPT.

Thank you.

[attachment=8630]
[attachment=8631]
ChatGPT doesn't learn between versions of the model. It uses your previous interactions (within a limited context window) as context. It takes your inputs at face value, having no way to evaluate them.

There is a "Clear context" button on the interface I'm using (Poe.com) to start from scratch.
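The point above can be sketched in a few lines of Python. This is a toy illustration of my own, not OpenAI's actual implementation: the model itself is stateless, and the client simply resends the most recent messages, truncated to a fixed window, so anything outside the window is effectively forgotten. Real systems truncate by token count rather than message count.

```python
# Toy sketch of a chat client's rolling context window (illustrative only;
# real systems truncate by tokens, not by message count).

def build_context(history, window=4):
    """Return only the most recent messages -- all the model ever 'sees'."""
    return history[-window:]

history = ["turn 1", "turn 2", "turn 3", "turn 4", "turn 5", "turn 6"]
context = build_context(history)

# Early turns have already fallen out of the window, so nothing said in
# "turn 1" or "turn 2" can influence the next reply.
print(context)
```

A "Clear context" button simply empties `history`, which is why the model then answers from its training data alone.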
The fact alone that it lists any strengths for Cheshire's theory is abundant proof that it doesn't know what it's talking about.
We had this already. In one of the earliest interactions with an AI chat, someone asked about the provenance of the MS, and the answer included an earlier owner's name which only appears in Cheshire's paper.

Still, these tools cannot judge the reliability of their inputs.
I asked an AI chat to compare the Aztec, Turkish, Albanian, and Serbian Voynich theories. The answer I got was that the non-existent Serbian theory holds the manuscript was written by a monk or scholars from the Serbian Orthodox Church; that the Turkish theory holds it was written in a Turkic language, possibly by an Ottoman alchemist or herbalist; and that the Turkish theory faces challenges in providing concrete evidence to support its claims. It concluded that none of these theories has been definitively proven and that the mystery of the Voynich manuscript remains unsolved.
To my question where it gets the information, it responded: "I synthesized the information based on my training data, which includes a wide range of sources such as books, articles, websites, and other texts available up to January 2022. My response reflects a general understanding of the Voynich manuscript and the various theories proposed about its origins. These theories have been discussed in scholarly literature, online forums, and popular media over the years."
Since the Albanian and Serbian theories are non-existent, the AI simply invented the information. Do not trust it!