The Voynich Ninja

Full Version: Special Rules
The general forum rules promote good behavior on the forum (no flaming, no spamming...). However, over the years the need has arisen to introduce some specific additional rules. These rules seek to maintain the balance between freedom of expression for all and a positive user experience.

1. You are welcome to discuss your theory, but keep it in a single thread.
Why is this rule necessary? Some people are enthusiastic about their theories or solutions, and they want to share them in as many places as possible. They may start many new threads, or derail existing discussions. Therefore, each theory, translation, or proposed solution should be discussed in its own thread.

2. Clearly mention if (part of) your post was AI-generated.
We are seeing a rise of AI applications, or at least services marketed as such. These can be a great tool in your research, but they also pose challenges. The use of various kinds of AI is permitted on the forum, under the condition that AI-generated content is flagged as such.


Not a rule, but some advice on the use of AI:
  • LLMs like ChatGPT are trained on vast amounts of data scraped off the internet. They use this data to predict a word in a sentence, and then the next word, and the next. You can think of the LLM as taking an average of what the internet says about a subject and then generating a human-like text response to your question. It has no consciousness and does not reason - it just generates human language. The data it has on a niche subject like the VM is limited, and often conflicting or of questionable quality. Its answers will be just as limited.
  • ChatGPT is easy to manipulate. It took me only one question to make it explain why (very serious) Voynich researcher Rene Zandbergen believes the MS was made by aliens (he doesn't). It will happily confirm a bad Voynich theory because, once again, it doesn't think.
  • Even about "regular" subjects, LLMs often hallucinate, i.e. make stuff up.
  • ChatGPT is especially bad at solving ciphers, and it will feed you fake solutions with great confidence.
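The "predict the next word" idea in the first point can be illustrated with a toy sketch. This is not how real LLMs work internally (they use neural networks over huge datasets, not word-pair counts), and the tiny corpus here is invented for the example, but it shows the core mechanism: pick a plausible next word based on what came before, then repeat.

```python
import random
from collections import defaultdict

# Tiny invented "training corpus" - purely illustrative.
corpus = (
    "the voynich manuscript is a mystery . "
    "the manuscript is written in an unknown script . "
    "the script is a mystery ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Repeatedly pick a likely next word, one word at a time."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Note that the output is fluent-looking but carries no understanding: the model simply echoes statistical patterns in its training data, which is why a model trained on limited or conflicting Voynich material produces limited or conflicting answers.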

Therefore, it is always a good idea to verify what chatbots tell you before sharing it with others, either by finding the information in an independent source or by testing the solution yourself. We understand that not everyone is able to do this, and it doesn't apply to all situations, so this is a suggestion rather than a rule.