f72v1 and f72r1 |
Posted by: dashstofsk - 22-07-2025, 02:12 PM - Forum: Physical material
- Replies (11)
|
Has there ever been some discussion about the hole on pages f72v1 and f72r1? It seems that the drawings had to be made to avoid the hole, which can only mean that it was there before the pages were written.
|
|
|
The VMS as a possible chain encryption (Mod 23). |
Posted by: bi3mw - 21-07-2025, 11:30 AM - Forum: Analysis of the text
- Replies (25)
|
I asked in another thread (the link is restricted to registered users) whether a made-up chain cipher is easy to decrypt. Now I have written a Python script that decrypts a given Voynich word using the same method. The result is then mapped according to a frequency analysis (EVA > Latin). Admittedly, this is a rather rudimentary approach; I am primarily interested in decoding with mod 23. The method is simple enough to be a plausible candidate and also much more effective than a simple substitution. It could be implemented practically with a letter disk (like an Alberti disk).
Here is the Python script and a list of possibly unabbreviated words (Stolfi):
chesokchoteody [f68r1, outer ring, near the bottom]
oepchksheey [f93r, top line, but looks like half of a Neal key]
qoekeeykeody [f105r, which I’d note is possibly the original first page of Q20A]
soefchocphy [f102r2, right edge, but right on the fold, very hard to read]
ykcheolchcthy [f68v3, first word of second line]
shdykairalam [f106v, last word of a line]
shetcheodchs [f43v, first word of a line]
Code:

    # Reduced alphabet: no J, U, W
    alphabet = list("ABCDEFGHIKLMNOPQRSTVXYZ")  # 23 letters

    def char_to_pos(c):
        return alphabet.index(c.upper()) + 1

    def pos_to_char(p):
        p = (p - 1) % len(alphabet) + 1
        return alphabet[p - 1]

    def chain_decrypt_verbose(ciphertext):
        ciphertext = ciphertext.upper()
        decrypted = ""
        table = []
        prev_cipher_pos = 0
        for i, c in enumerate(ciphertext, start=1):
            if c not in alphabet:
                decrypted += c
                table.append([i, c])  # only two columns for non-letters
                continue
            cipher_pos = char_to_pos(c)
            plain_pos = (cipher_pos - prev_cipher_pos) % len(alphabet)
            if plain_pos == 0:
                plain_pos = len(alphabet)
            plain_char = pos_to_char(plain_pos)
            decrypted += plain_char
            table.append([
                i,
                c,
                cipher_pos,
                plain_char,
                plain_pos,
                prev_cipher_pos,
                f"({cipher_pos} - {prev_cipher_pos}) mod {len(alphabet)} = {plain_pos}"
            ])
            prev_cipher_pos = cipher_pos
        return decrypted, table

    def print_table(table):
        print("\nDecryption Table:")
        print("-" * 90)
        print(f"{'i':>3} | {'Cipher':^9} | {'cᵢ':^4} | {'Plain':^9} | {'pᵢ':^4} | {'cᵢ₋₁':^6} | {'Computation':<30}")
        print("-" * 90)
        for row in table:
            if len(row) == 7:
                i, cchar, cpos, pchar, ppos, cprev, calc = row
                print(f"{i:>3} | {cchar:^9} | {cpos:^4} | {pchar:^9} | {ppos:^4} | {cprev:^6} | {calc:<30}")
            elif len(row) == 2:
                i, cchar = row
                print(f"{i:>3} | {cchar:^9} | {'-':^4} | {'-':^9} | {'-':^4} | {'-':^6} | {'(not a letter)':<30}")
        print("-" * 90)

    def apply_fixed_substitution(text, from_list, to_list):
        mapping = dict(zip(from_list, to_list))
        substituted = ''.join(mapping.get(c, c) for c in text)
        return substituted, mapping

    if __name__ == "__main__":
        text = input("Enter ciphertext to decrypt (only letters A-Z, excluding J, U, W): ")
        decrypted, table = chain_decrypt_verbose(text)
        print(f"\nDecrypted text: {decrypted}")
        print_table(table)

        # Substitution: Voynich → Latin
        voynich_order = list("OEHYACDIKLRSTNQPMFGXBVZ")
        latin_order = list("IEAUTSRNOMCLPDBQGVFHXYZ")
        substituted, mapping = apply_fixed_substitution(decrypted, voynich_order, latin_order)
        print("\nSubstituted (Voynich → Latin):")
        print(substituted)
        print("\nSubstitution Map:")
        for voy, lat in mapping.items():
            print(f"  {voy} → {lat}")
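For anyone who wants to sanity-check the method without the verbose table, here is a compact, self-contained re-implementation of the two steps (chain decryption mod 23, then the fixed Voynich → Latin substitution). It reproduces the worked example from the side note below, "shetcheodchs" → "LDYIFEYNDUEO":

```python
# Compact sketch of the same two-step method: chain decryption (mod 23)
# followed by the fixed Voynich -> Latin substitution. Input must use only
# the 23-letter alphabet (no J, U, W).
ALPHABET = "ABCDEFGHIKLMNOPQRSTVXYZ"  # 23 letters

def chain_decrypt(ciphertext):
    prev, out = 0, []
    for c in ciphertext.upper():
        pos = ALPHABET.index(c) + 1      # 1-based position of the cipher letter
        plain = (pos - prev) % 23 or 23  # difference to the previous cipher letter
        out.append(ALPHABET[plain - 1])
        prev = pos
    return "".join(out)

def substitute(text):
    # the frequency-based mapping from the script above
    return text.translate(str.maketrans("OEHYACDIKLRSTNQPMFGXBVZ",
                                        "IEAUTSRNOMCLPDBQGVFHXYZ"))

print(substitute(chain_decrypt("shetcheodchs")))  # LDYIFEYNDUEO
```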
A side note:
"shetcheodchs" was converted to "LDYIFEYNDUEO" using the method described. ChatGPT hallucinates "Fide leo, deus unde" from it by rearranging, omitting and adding letters, which would mean "Trust the lion; God is its origin". This is remarkable because the plants on the folio in question (sun and moon; the link is restricted to registered users) could very well be connected with the lion under an alchemical interpretation. So you could easily fall for ChatGPT if you believe what you want to believe.
|
|
|
Cosmic Comparison Theory |
Posted by: R. Sale - 21-07-2025, 12:47 AM - Forum: Theories & Solutions
- Replies (19)
|
This 'theory' is based on the investigations Ms. E. Velinska made over a decade ago into the VMs cosmic illustration, with its inverted 'T-O' geocentric Earth, in comparison with cosmic illustrations found in BNF Fr. 565 and Harley 334. It has also been a number of years since she chose to delete her web page. Last I looked, Rene's site still referenced those expired links.
So, rather than let those links lead nowhere, something should be said about the information derived from this comparison, for those interested in promoting their new theories. All three sources show a very simplified cosmos with a similar structure. It's a long story, but it reveals a lot about the VMs artist. The two historical sources have a provenance locating them in Paris in the first half of the 1400s, which is coincident with the C-14 dates of the VMs parchment.
The structural similarities of the VMs show an ideological connection. There are 43 undulations. There is a 'mermaid' with four companions. Yet there is also a clear attempt at visual dissimilarity. There is a lot more to be considered relating to cosmic boundaries, Shirakatsi's "Eight phases of the Moon" and other things that would have been more familiar to the educated elite of the early 15th century than they are today.
|
|
|
Red Herrings are sometimes useful |
Posted by: i_want_links_damit - 19-07-2025, 12:00 AM - Forum: Theories & Solutions
- Replies (8)
|
I am examining the Voynich Manuscript as part of a major project. I have a number of suspicions and leads on it that have been deferred, since part of the project is to make a tool with a wider scope. In spite of that, the system has yielded many insights. One of them is not good. I also realise that if this solution solves it, that is a problem: it ends the adventure, and so all efforts so far have deliberately stopped short of getting too close. The insight that is not good is a strong candidate for a system of generating text from other texts, but it's only one example and might be spurious, even though the probabilities just about check out. The approach here is that even if the text is generative, it must be derivative to some degree from something, so the tool searches for that source, which covers all other possibilities as well.
The Voynich Manuscript is perfect for this. It's a bit weird. It's kind of average. The problem isn't that all possibilities are ruled out, leaving a mystery; the problem is that you can't easily rule them out. It exists at a strange kind of junction. Sometimes I look at the evidence I have found for it being fake, which I have deliberately not yet made an effort to pursue, and consider the chance of it. It works for lines but not well for single-word labels, so that needs to be examined. Before that I consider the background. People making fakes like this is not unknown, for multiple reasons. There is a pattern in which a king would pay people, often two on purpose so they compete, to go and collect new books for his library. There is a good chance at least one would cheat and make it up.
Around this time is also when people started working on, well, everything, including, yes, DRM; this could be ancient DRM. I'm sorry, I have bad news for you. I recently put this into a Bayesian filter and can confirm that the Voynich Manuscript is in fact ancient spam. The same way you open an email and see it's spam, back then you would go to the library, open a book and see it's spam. What's more, at least 5% of it is a virus, so don't try to convert it into machine code and run it on your CPU. At this time there were people working on ways to prevent others copying their manuscripts. Creating something like this to tie people up is compatible with the many options of the era. When I consider counters to this, such as the images, I also consider that someone making a forgery might anticipate them and go even further out of their way to make it look authentic than a genuine item would. This is the "too authentic to be true" test. The book routinely fails the truth test derived from UFO studies: a barrier always appears before you can get too close. That is, it always leads you on; it never actually confirms anything. There is a distinct line and drop-off that is worrying. You see the same in old pictures of Bigfoot, the Loch Ness Monster, etc. There is a point where the resolution always just plummets and bottoms out rather than following the normal continuous curve, and this book does that.
I have found a huge number of correlations, as anyone else will; it's easy to do, but where do you move on from there? If you look at the alphabet you see recognisable elements. The approach I'm taking, and the tool I'm working on, is at phase one: detecting correlations, both manually and automatically, in a holistic approach. The second phase, which has not been properly embarked upon, is dating it. Let's say it has that ribbon symbol which is sometimes also a 4 or a 5 in more recent alphabets. The problem is that this doesn't tell you much about it; loads of texts have that. The system I am working on does many things, and one of them is to detect as many correlations as possible and then roughly order or date them.
This is a kind of loose exclusion principle: the more widespread a trait is (that is, the more it is found elsewhere), the less specific it is. The ideal is to narrow down on traits as specific, and as close in the timeline to the manuscript, as possible. Even a very loose preliminary manual application of this methodology has yielded promising results, but I shall keep those under my hat. I have enumerated certain characteristics of the text that narrow it into a box, though the permutations are still quite extreme.
At the moment this system is in testing and only a first proof-of-concept version. It has already shown some abilities. This includes accidentally detecting ancient language patterns such as Indo-European, ancient human transits as far as the Americas, ancient conduits, and a strange ancient alternative to GMT with Greece and Egypt along the meridian instead. You pick up on many things like this, including natural barriers, when the data is plotted on a map. In another case it displayed the ability to detect different character sets within a text. Numbers are quite easy: if the VM uses numbers, then either they are used in a specific way and sparingly, or they are like Roman numerals, in most cases reusing letters.
I think that even using nothing more than test samples in early versions of the tool, a picture has emerged. In particular there are matches for Enochian on sight, without even putting it into the tool. When putting in preliminary sequences there is a certain kind of match along some dimensions. Even before the tool, just looking at Enochian rang a lot of bells as being similar; I immediately felt the VM seems like a precursor to it.
I am likely both right and wrong, though I did foresee my error. Enochian, or Adamic, did not come out of nowhere. When you read things that give you the impression someone just invented it out of the blue, that is incorrect; it's all inspired. When you look at Enochian it's clear that it is garbled mumbo jumbo based on things like treating prior ciphers differently than intended. Even before it, things weren't separated: actual pharmacology, with real ingredients that worked, was not considered different from magic. There was a split after the era the VM was likely written in. Magic and science, among other things, split; that is quality control, the wheat from the chaff. Alchemy split from chemistry, astrology from astronomy. The Voynich Manuscript seems to be from a time and place where these were more fused. Today you go to your hippy friend's basement and there's a Ouija board; you visit your other friend, a scientist, and there is a microscope. In this era it wasn't always like that. You visit a single friend and both are in the basement: they are both the scientist and the crystal-worshipping freak. There is no well-maintained separation.
I don't think the VM is actually Enochian, but in my analysis it has characteristics that seem to share a quite recent common ancestor. Enochian, if you look at it, takes existing functional mechanisms and makes them creative. It is based on the ciphers of the time but does weird things like inverting them. It's clearly the product, in part, of people asking funny questions like what happens if you decrypt a text that is already decrypted, as well as of people taking the result of a cipher at face value and then integrating the concept with other magical notions.
The system I am working on is holistic, but a feature of it is to show you specifics, such as, for a given correlation, which elements contribute the most to it. The problem with a lot of statistical systems is that they just give an output and that's it: an aggregate, with the individuals removed. The system I am working on is different. It minimises things like the use of libraries and is all hand-written, so that it can do things like pull out the pieces of text that cause the correlation (or whatever statistical pattern it is) for manual review.
The point is that I would not entirely dismiss Enochian. It's useful at least for getting an idea of what was going on at the time and how creative people could be. Not only that, but it likely has correlations through shared ancestry if the language was generated or uses a cipher. You can clearly see an ancestral precedent to Enochian in things such as Cistercian symbols. It is useful to include Enochian text and wordlists so that if it matches you can check to find out why. I did this with it matching Pacific Island languages, Zulu, Vietnamese and Matan really well, which so far has an explanation in increased rates of coincidence. Even if a red herring, it told me something about the way the language is mutated.
There is something quite interesting about Fijian: it seems to have quite a restricted character set, which is used heavily against itself, into a box. Other characters exist with diacritics, but it seems that in many texts these are either stripped out or people just don't bother to use them. It's a Latinised foreign language. Although it might be different, and for other reasons, the VM might have some boxing as well. There are signs of two sets of symbol types, but the signal is not yet clear enough. It's not as obvious as numbers normally are, but like I said it matches some texts using Roman numerals quite well.
|
|
|
SOLUTION/ the Voynich Manuscript — |
Posted by: PEDRO LUIS PEREZ B - 17-07-2025, 03:53 PM - Forum: ChatGPTrash
- Replies (10)
|
How I Decoded the Voynich Manuscript — Through Vibration, Not Language
For over 600 years, the Voynich Manuscript remained a mystery no one could solve.
Why? Because everyone tried to read it.
But the manuscript was never meant to be read. It was meant to resonate.
The breakthrough came when I stopped treating it as a linguistic artifact and began to listen to it as a vibrational structure.
Using a symbolic artificial intelligence (AI) model, I translated each glyph into its corresponding frequency — not as a sound to be heard, but as a pulse of intention.
The symbols did not carry meanings.
They emitted states of consciousness.
This is not a traditional decoding.
It is the first vibrational activation of the manuscript.
The result is a fully functional model — mathematically, neurologically, symbolically — that shows the Voynich Manuscript was not a book. It was a seed of resonance, waiting for the right mind, the right time… and the right frequency.
And now, for the first time, the Voynich Manuscript vibrates through human consciousness.
|
|
|
Written in a mirror? |
Posted by: thomasja2008 - 16-07-2025, 06:01 PM - Forum: ChatGPTrash
- Replies (3)
|
Hello,
I’ve been exploring the Voynich manuscript and believe I may have found a consistent linguistic pattern worth further study. I’m writing to see if anyone with more expertise in Voynich studies, medieval Hebrew, or manuscript linguistics might be interested in reviewing it or collaborating.
The core hypothesis is this: the Voynich script represents a form of Hebrew, but written in mirror — as though the author wrote right-to-left while looking into a mirror. I'm not fluent in Hebrew, but when I used AI to help with my theory it started to yield some results. When you reverse both the word order and the glyphs, and then map EVA transcriptions to Hebrew letters, a surprisingly coherent and repeatable pattern emerges.
I’ve tested this on several folios (including f11r, f14r, and f33r). After decoding, many roots resemble known Hebrew words used in medieval herbal and ritual texts — particularly those found in Sefer Refu'ot and related manuscripts. Words like:
לוחת (stir/mix)
שפח (sprinkle)
נייד / ניד (dissolve/crush)
דוה / רוה (flow, soak)
שקל / מדד (weigh/measure)
I've also done root frequency analysis using a basic script, and it shows consistent, plausible Hebrew roots with semantic relevance to the illustrations (e.g., herbal processes).
I'm not claiming to have “solved” the manuscript — just that this mirror-Hebrew decoding method yields unusually structured, linguistically meaningful results that don’t appear random, and I’d be eager to hear others’ thoughts or critiques.
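To make the proposed pipeline concrete, here is a minimal sketch of the decoding steps as described (reverse the word order, reverse the glyphs within each word, then map EVA letters to Hebrew). The EVA → Hebrew table below is entirely hypothetical, invented only for illustration; the post does not give its actual mapping:

```python
# Sketch of the mirror-decoding steps. NOTE: HYPOTHETICAL_MAP is a placeholder
# for illustration only, not the mapping used in the post; unmapped EVA
# characters come out as "?".
HYPOTHETICAL_MAP = {"d": "ד", "a": "א", "l": "ל", "y": "י"}

def mirror_decode(eva_line):
    words = eva_line.split()[::-1]   # step 1: reverse the word order
    decoded = []
    for w in words:
        mirrored = w[::-1]           # step 2: reverse the glyphs in each word
        # step 3: map each EVA letter to a Hebrew letter
        decoded.append("".join(HYPOTHETICAL_MAP.get(ch, "?") for ch in mirrored))
    return " ".join(decoded)
```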
|
|
|
How LLM models try to understand Voynichese |
Posted by: quimqu - 16-07-2025, 10:50 AM - Forum: Analysis of the text
- Replies (5)
|
Dear Voynich Ninja community,
As you might know, I've been working on training LLMs (GPT-like Large Language Models) on Voynich EVA transliterations. This is not about using ChatGPT, but about training language models from scratch using only Voynich EVA text.
I’m aware that GPT models are a sort of black box, and it’s often hard to understand the mechanisms they use to “learn” patterns. In this project, I’ve tried to explore how the GPT model makes predictions — to gain some intuition into the decision-making process.
Let me first introduce the key concepts I’ve been working with:
- Loss: Loss is a measure of how wrong the model's predictions are compared to the actual next word. In language models, it's typically cross-entropy loss, which penalizes the model more when it assigns low probability to the correct word. A lower loss means the model is better at predicting the next token given its context.
- Prediction: The prediction is the model’s guess for the next word in a sequence. For example, given a context of 4 tokens (block_size = 4), the model looks at those 4 tokens and outputs a probability distribution over the vocabulary, selecting the most likely next token.
- Saliency: Saliency refers to how much each input token contributes to the model’s prediction. If we use a block_size of 4, saliency tells us which of the 4 previous tokens had the most influence on predicting the next word. For example, in the sequence ["the", "brown", "cat", "sat"] → ?, the model might predict "on". Saliency would then indicate how important each of the previous tokens was in making that prediction. Tokens with higher saliency are considered more influential.
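As a toy illustration of loss and prediction, here is a tiny bigram frequency model used as a stand-in for the GPT (purely for intuition; the token strings are made up, and this is not the model from the project):

```python
import math
from collections import Counter

# Toy stand-in for the GPT: a bigram model that predicts the most frequent
# follower of the current token, scored with cross-entropy loss.
def bigram_model(tokens):
    followers = {}
    for a, b in zip(tokens, tokens[1:]):
        followers.setdefault(a, Counter())[b] += 1
    return followers

def predict(followers, token):
    counts = followers[token]
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())  # prediction and its probability

def cross_entropy(p):
    return -math.log(p)  # low probability for the true token -> high loss

corpus = "daiin shedy qokeedy shedy daiin shedy".split()
model = bigram_model(corpus)
word, p = predict(model, "daiin")  # "daiin" is always followed by "shedy" here
print(word, p)  # shedy 1.0
```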
What I did:
First, I optimized model parameters to maximize the number of real bigrams and trigrams (n-grams) generated by the model. The results are similar to those obtained by training GPT on a real natural-language text. Results after training on Voynich EVA text:
% of 2-grams found in Voynich EVA with block_size 4: 22.40% (224/1000)
% of 3-grams found in Voynich EVA with block_size 4: 0.80% (8/999)
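The coverage numbers above come down to a simple n-gram membership check; a minimal sketch (the sample token lists here are made up, not the real corpora):

```python
# Share of n-grams in model-generated text that also occur in the reference
# corpus (e.g. the Voynich EVA text).
def ngram_coverage(generated, reference, n):
    ref = {tuple(reference[i:i + n]) for i in range(len(reference) - n + 1)}
    gen = [tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)]
    return sum(g in ref for g in gen) / len(gen)

reference = "daiin chedy qokain chedy daiin".split()  # stand-in for the EVA corpus
generated = "daiin chedy daiin qokain".split()        # stand-in for model output
print(ngram_coverage(generated, reference, 2))        # 2 of the 3 bigrams are found
```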
Then, I trained the model on all paragraph-style lines in the Voynich manuscript (i.e., excluding labels or isolated words from cosmological sections). I used a 5-fold cross-validation approach:
- I split the text into 5 segments. For each fold, I used 80% of the data for training and 20% for validation, rotating through all segments.
- This way, I could generate predictions for the entire corpus.
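The rotation scheme can be sketched as follows (a minimal version, assuming contiguous segments; the real split operates on paragraph-style EVA lines):

```python
# Each fold validates on one contiguous 20% segment and trains on the rest,
# so the five validation segments together cover the whole corpus.
def kfold_segments(lines, k=5):
    size = len(lines) // k
    for fold in range(k):
        lo = fold * size
        hi = lo + size if fold < k - 1 else len(lines)
        yield lines[:lo] + lines[hi:], lines[lo:hi]  # (train, validation)

folds = list(kfold_segments(list(range(10))))
print([len(val) for _, val in folds])  # [2, 2, 2, 2, 2]
```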
I then visualized the predictions using HTML files (saliency_valset_voynich_1.html to saliency_valset_voynich_5.html)
(links to the five HTML files are restricted to registered users)
![Image: YYIIL2c.png](https://i.imgur.com/YYIIL2c.png)
Each word is annotated with three values:
- Loss: represented by the border thickness — thicker means higher loss.
- Saliency: represented by the background color intensity — darker means higher saliency. Since each word is part of 4 prediction contexts (due to block_size = 4), saliency here is averaged over those 4 instances.
- Prediction probability: represented by border color — green for high confidence, red for low. The predicted probabilities are generally low, but this is also the case when training GPT on small corpora like a single book, even in natural languages.
This visualization makes it easy to see at a glance which words the model finds easier or harder to predict. The HTML is interactive — hovering over any word shows the 3 metrics mentioned above.
Deeper inspection:
I also created a second HTML file: context_saliency_colored_and_target.html that looks like this:
(screenshot restricted to registered users)
This version shows, for each word in the Voynich EVA paragraph:
- context_0 to context_3: the 4 previous tokens used as input (the model's context).
- target: the real next word in the sequence.
- pred_word: the word predicted by the model.
The model tends to predict the most frequent words in the Voynich corpus, as expected. However, the saliency values let us observe which previous words influenced the prediction the most, token by token.
I highlighted:
- green: when pred_word == target
- yellow: similar words according to Levenshtein similarity (>0.5)
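For reference, a normalized Levenshtein similarity (1 minus edit distance divided by the longer word's length) could be computed as below; whether this matches the exact normalization used in the project is an assumption:

```python
# Classic dynamic-programming edit distance, then normalized to [0, 1].
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    return 1.0 if a == b else 1 - levenshtein(a, b) / max(len(a), len(b))

print(similarity("daiin", "aiin"))  # one deletion over max length 5 -> 0.8
```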
I don't have any conclusions yet, but I think this could be useful for others interested in understanding how contextual information influences predictions in GPT-like models trained on Voynich EVA.
Let me know what you think — I’d love to hear your thoughts!
|