The Voynich Ninja
Positional Mimic Cipher (PM-Cipher) - Printable Version

+- The Voynich Ninja (https://www.voynich.ninja)
+-- Forum: Voynich Research (https://www.voynich.ninja/forum-27.html)
+--- Forum: Analysis of the text (https://www.voynich.ninja/forum-41.html)
+--- Thread: Positional Mimic Cipher (PM-Cipher) (/thread-4921.html)



RE: Positional Mimic Cipher (PM-Cipher) - oshfdk - 10-09-2025

(10-09-2025, 09:19 PM)quimqu Wrote: It is also known that the first word of each sentence is restricted to a couple of glyphs. If those glyphs, for example, were keys to change the residual order, we would also have different ciphers for the same word...

But multiple encodings would probably drive overall entropy up?


RE: Positional Mimic Cipher (PM-Cipher) - quimqu - 10-09-2025

(10-09-2025, 09:25 PM)oshfdk Wrote: But multiple encodings would probably drive overall entropy up?

I've tested two tables, one for the first lines of the paragraphs and the other for the rest (as it is also known that they are quite different). The model fits a bit better, and the entropy remains low.

If we switch residuals, it should go up a bit. But if we find the correct natural language, the cipher would fit better (not 100%, because that would be a direct substitution). The residuals would then be minimal and the overall entropy even lower, so switching residuals would raise it a bit, but from a lower starting point (because there would be so few residuals).

I don't know if I have explained myself properly.


RE: Positional Mimic Cipher (PM-Cipher) - oshfdk - 10-09-2025

(10-09-2025, 09:33 PM)quimqu Wrote: If we switch residuals, it should go up a bit. But if we find the correct natural language, the cipher would fit better (not 100% because that would be a direct substitution).

I'm not sure I understand the part in parentheses.

Speaking about plaintext languages, what properties should this hypothetical language have in order for the cipher to fit better?


RE: Positional Mimic Cipher (PM-Cipher) - quimqu - 10-09-2025

If all the residuals are 0, it is a direct substitution (no glyph ciphers more than one letter). That could be the case, but no good direct substitution has been found so far.

Yes. For example, the language should start its words with only a few distinct letters, not many. With Latin I currently get a residual of 5 for the "qo" substitution (this means "qo" ciphers up to 5 letters at position 1; the natural-language input has too many word-initial letters). I tested Old Catalan just to play (Catalan has fewer word-initial letters than Latin), and the residual for "qo" went down from 5 to 4.
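A toy reading of this residual mechanism, for anyone who wants to poke at it: each glyph at a given word position covers several plaintext letters, and the residual is the index that picks which one. The glyph names and letter groupings below are illustrative assumptions, not the actual PM tables.

```python
# Hypothetical position-1 table: the glyph "qo" covers five plaintext
# letters, so its residual ranges from 0 to 4 (this grouping is made up).
POS1 = {
    "qo": ["a", "e", "i", "o", "u"],
    "ch": ["c", "s"],
}

def encode_pos1(letter):
    """Return (glyph, residual) for a word-initial plaintext letter."""
    for glyph, letters in POS1.items():
        if letter in letters:
            return glyph, letters.index(letter)
    raise ValueError(f"no glyph covers {letter!r}")

def decode_pos1(glyph, residual):
    """The residual disambiguates which letter the glyph stood for."""
    return POS1[glyph][residual]

glyph, residual = encode_pos1("i")
print(glyph, residual)               # qo 2
print(decode_pos1(glyph, residual))  # i
```

On this reading, a language with fewer distinct word-initial letters needs shorter letter lists per glyph, hence smaller maximum residuals, which would be the Latin-vs-Catalan effect described above.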


RE: Positional Mimic Cipher (PM-Cipher) - ReneZ - 11-09-2025

I generally find this type of experimentation interesting but:

(10-09-2025, 06:37 PM)quimqu Wrote: H₂≈2.80

this is still quite high, especially for an Eva-like representation.
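For readers following along, the H₂ statistic under discussion can be estimated in a few lines (a minimal sketch, assuming H₂ here means the conditional entropy of the next character given the previous one, in bits):

```python
from collections import Counter
from math import log2

def h2(text: str) -> float:
    """Conditional entropy H2 = H(next char | previous char), in bits."""
    pairs = Counter(zip(text, text[1:]))  # bigram counts
    firsts = Counter(text[:-1])           # context (first char) counts
    total = sum(pairs.values())
    h = 0.0
    for (a, b), n in pairs.items():
        h -= (n / total) * log2(n / firsts[a])  # -P(a,b) * log2 P(b|a)
    return h

# A perfectly predictable sequence has H2 = 0.
print(h2("abababab"))  # 0.0
```

On this estimator, natural-language character streams typically land well above 3 bits, which is why values below 3 for the Voynich text keep coming up in these threads.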


RE: Positional Mimic Cipher (PM-Cipher) - quimqu - 11-09-2025

(11-09-2025, 12:58 AM)ReneZ Wrote: I generally find this type of experimentation interesting but:

(10-09-2025, 06:37 PM)quimqu Wrote: H₂≈2.80

this is still quite high, especially for an Eva-like representation.

Thank you, René, you are right. 

The H₂ in my post is a bit high, but that’s easy to fix because the system is fully parameterizable.

In practice I just tighten per-position alphabets to reduce next-symbol options; this directly lowers H₂. Then I run a targeted H₂ pass that collapses only the positions contributing most to conditional entropy. And finally I do a light bigram touch-up so correlations/JS and the Zipf curve stay on track.
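As an aside, "the positions contributing most to conditional entropy" can be located mechanically. Here is a sketch of one way to rank contexts by their share of H₂ (my guess at the kind of pass described, not the author's actual code):

```python
from collections import Counter
from math import log2

def h2_by_context(text):
    """Split H2 = H(next | prev) into per-context contributions.
    Contexts at the top of the list are the best collapse candidates."""
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    total = sum(pairs.values())
    contrib = Counter()
    for (a, b), n in pairs.items():
        contrib[a] += -(n / total) * log2(n / firsts[a])
    return contrib.most_common()  # [(context_char, bits), ...] descending

for ctx, bits in h2_by_context("chedy qokeedy shedy qokaiin otedy"):
    print(ctx, round(bits, 3))
```

The per-context values sum to the overall H₂, so collapsing (merging the successor options of) the top few contexts gives the largest drop per change.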

With those knobs, pushing H₂ down toward 2.3 is straightforward while keeping token lengths and the overall grapheme/bigram shape aligned with the Voynich text. Here are some plots:

[Image: V0s8lNJ.png]
[Image: MORyg75.png]
Example of cipher and residuals:

Latin: Auferre, trucidare, rapere falsis nominibus res publica, atque ubi solitudinem faciunt, pacem appellant
PM-ciphered: aediddi qotarynydy chelidy qoedlyd qoorelyrir chol cheldyny akqoi ate chodenydylyn qoedeynn chedin aklinnyln
Residuals: 2 2 4 0 0 1 0 | 0 1 1 1 0 5 1 2 1 | 1 1 2 0 0 1 | 7 1 2 0 0 0 | 1 1 1 0 0 0 3 1 0 | 1 0 0 | 3 2 3 2 0 3 1 | 2 0 0 0 | 3 3 0 | 0 1 2 0 0 3 2 0 0 1 1 | 7 1 1 0 1 1 0 | 3 1 1 0 2 | 2 2 2 0 3 4 1 0 0

And here is the new conversion table in case you want to have some fun!

[Image: fz3r8KD.png]


RE: Positional Mimic Cipher (PM-Cipher) - oshfdk - 11-09-2025

(11-09-2025, 07:42 AM)quimqu Wrote: The H₂ in my post is a bit high, but that’s easy to fix because the system is fully parameterizable.

In practice I just tighten per-position alphabets to reduce next-symbol options; this directly lowers H₂. Then I run a targeted H₂ pass that collapses only the positions contributing most to conditional entropy. And finally I do a light bigram touch-up so correlations/JS and the Zipf curve stay on track.

With those knobs, pushing H₂ down toward 2.3 is straightforward while keeping token lengths and overall grapheme/bigram shape aligned with Voynich. Here some plots:

I think it is trivial to push entropy down when encoding any particular short string using this method. For example, if I need to encode "voynich manuscript will resist decoding attempts" and I prepare the table which maps each Nth character of each word to the same letter, the ciphertext will read something close to: vvvvvvv mmmmmmmmmm wwww rrrrrr dddddddd aaaaaaaa. There will be a clash somewhere, when Nth letter of word A will be the same as the Nth letter of word B, but as long as the text is short and the table is much larger than the string, it's possible to move a lot of uncertainty from the text into the table.
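The over-fitting trick described above can be mocked up directly. A throwaway sketch (a per-word lookup rather than a true positional table, which sidesteps the clashes mentioned; the point about moving uncertainty from the text into the table is the same):

```python
from collections import Counter
from math import log2

plain = "voynich manuscript will resist decoding attempts"

# The lookup is built from the target string itself: every character of
# a word maps to that word's first letter.
table = {w: {c: w[0] for c in w} for w in plain.split()}
cipher = " ".join("".join(table[w][c] for c in w) for w in plain.split())
print(cipher)  # vvvvvvv mmmmmmmmmm wwww rrrrrr dddddddd aaaaaaaa

def h1(text: str) -> float:
    """Plain character entropy, in bits."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

# The ciphertext's entropy collapses, but only because the table was
# tailored to this one short string.
print(round(h1(plain), 2), "->", round(h1(cipher), 2))
```

On a text much larger than the table, the clashes force real distinctions back into the ciphertext and the entropy rebounds, which is the substance of the objection.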

So, if you have a table of N cells, I think to be safe you will need a text of size ~N² to estimate the true entropy of the encoding: for a 500-cell table, about 250,000 symbols of natural Latin text (and not just a hand-picked list of words). Edit: this is a cautious estimate; maybe a few thousand characters would be more than enough, but in any case there is no lack of large texts in Latin.


RE: Positional Mimic Cipher (PM-Cipher) - quimqu - 11-09-2025

Yes, the tricky thing is to adapt it to the Voynich style and keep the residuals as low as possible. But this is what I automated. So, the code can cipher any text in any language into a Voynich-style text, keeping the entropies low and returning a table for the scribe and the residuals needed to decode it back into natural language.


RE: Positional Mimic Cipher (PM-Cipher) - oshfdk - 11-09-2025

(11-09-2025, 08:10 AM)quimqu Wrote: Yes, the tricky thing is to adapt it to the Voynich style and keep the residuals as low as possible. But this is what I automated. So, the code can cipher any text in any language into a Voynich-style text, keeping the entropies low and returning a table for the scribe and the residuals needed to decode it back into natural language.

So, can it actually encode 10000 symbols of Latin keeping the entropies low? Is there some source code to reproduce this?


RE: Positional Mimic Cipher (PM-Cipher) - quimqu - 11-09-2025

(11-09-2025, 08:16 AM)oshfdk Wrote: So, can it actually encode 10000 symbols of Latin keeping the entropies low? Is there some source code to reproduce this?

The results I posted in reply to René's comment about entropy come from ciphering the full De docta ignorantia text.

Yes, I have the code. It needs a bit of cleaning before publishing. I would like to publish a paper about this together with the code, but first I wanted some feedback from the ninja community on whether it is really interesting or not.