(29-11-2025, 06:42 PM)bi3mw Wrote: @Koen: Just out of interest, what threshold value did you set?
I just copied a random section from your mapped.txt that looked somewhat uniform and had ChatGPT deal with it. For "no matches" I allowed either the omission or the insertion of one word.
(29-11-2025, 06:37 PM)Koen G Wrote: It portrays a rising, sacred force that surges through destruction and renewal, guiding spirits, sharpening strength, and stripping away what is dead so life can move forward again.
ooo, ooo, I know this one. It's clearly referring to Batman and Gotham City.
(29-11-2025, 06:47 PM)Koen G Wrote: I just copied a random section from your mapped.txt that looked somewhat uniform and had ChatGPT deal with it. For "no matches" I allowed either the omission or the insertion of one word.
Oh, when you run the script, you can simulate the perception of the "solver" by adjusting the threshold value (the higher the value, the more permissive the matching). This has no practical value, but it's a pretty fun feature.
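For anyone curious what such a threshold looks like in practice, here is a minimal sketch in Python. It assumes the script accepts a match when the Levenshtein distance to the best word-list entry is at or below the threshold; the actual implementation may score matches differently, and the function names and word list below are made up for illustration.

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(token, wordlist, threshold=2):
    # Return the closest word within `threshold` edits, else None.
    # A higher threshold is more permissive: more tokens find a match.
    best = min(wordlist, key=lambda w: levenshtein(token, w))
    return best if levenshtein(token, best) <= threshold else None

words = ["small", "shawl", "gnarled", "again"]
print(best_match("smonl", words, threshold=2))  # -> 'small' (two edits away)
print(best_match("smonl", words, threshold=1))  # -> None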
Greek word salad, sounds tasty :-)
Again, these are excellent examples. Too bad the solvers are not likely to be convinced by this.
(29-11-2025, 11:52 PM)ReneZ Wrote: Too bad the solvers are not likely to be convinced by this.
But if they read this thread, they may at least have some doubts about their own discovery. After all, it clearly shows that the computer can imitate this type of solution well.
I do like the positive attitude, and that's how it should be....
My experience is that would-be solvers tend not to do a lot of reading, at least not before creating their solutions. They often say so very clearly.
I used the code in post #1 to run the process with English words from Shakespeare's works + the King James Bible. EVA characters were mapped to English letters based on character frequency. It's possible I made mistakes in the process.
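For anyone who wants to reproduce the idea, here is a minimal sketch in Python of the frequency-matching step: rank the characters of both texts by frequency and pair them rank for rank. The code in post #1 may differ in detail, and the two sample strings below are short stand-ins, not the full Shakespeare + King James corpus.

from collections import Counter

def freq_ranked(text):
    # Characters of `text`, most frequent first (whitespace ignored).
    counts = Counter(c for c in text if not c.isspace())
    return [c for c, _ in counts.most_common()]

def build_mapping(source_text, target_text):
    # Map the i-th most frequent source character to the i-th most
    # frequent target character; surplus characters stay unmapped.
    return dict(zip(freq_ranked(source_text), freq_ranked(target_text)))

def substitute(text, mapping):
    # Characters without a mapping pass through unchanged.
    return "".join(mapping.get(c, c) for c in text)

eva = "fachys ykal ar ataiin shol shory ctoses y kor sholdy"
english = "in the beginning god created the heaven and the earth"
print(substitute(eva, build_mapping(eva, english)))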
RF1a-n-x7.txt (EVA)
fachys ykal ar ataiin shol shory ctoses y kor sholdy
sory ckhar ory kair chtaiin shar ais cthar cthar dan
syaiir sheky or ykaiin shod cthoary cthes daraiin sy
soiin oteey oteor roloty ctaar daiin okaiin or okan
eng_replaced.txt (simple substitution)
gnsoau agnh nl nmnrrf uoeh uoela smeutu a gel uoehia
uela sgonl ela gnrl somnrrf uonl nru smonl smonl inf
uanrrl uotga el agnrrf uoei smoenla smotu inlnrrf ua
uerrf emtta emtel lehema smnnl inrrf egnrrf el egnf
eng_mapped.txt (nearest word-list matches)
gnaws again nail nimrah joah joelah smit a gale ehi
uel scowl elah gnarled somerset only naarah small small inveigh
unreal outgo elah agar oui smyrna smote incurred a
wharf amittai embellished lehabim small ingrate ignorance elah enew
I think this shows the amount of hammering needed to turn the transliteration into a word-salad in the target language. It also shows how distant the final output is from actual language.
Thanks, Marco, that's quite illuminating. I'd still say that this isn't too far from what solvers tend to do.
(30-11-2025, 08:17 AM)MarcoP Wrote: It also shows how distant the final output is from actual language.
Yes, the output simply replaces the words of the input with similar words (phonetic and Levenshtein) from the word list. ChatGPT and the solver then have to take care of sentence structure and interpretation. Proper names make it hard to form sentences.
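For concreteness, a minimal sketch of that word-mapping step: each token of the substituted text is replaced by its nearest word-list entry, and kept as-is when nothing is close enough. Only the Levenshtein part is shown; the phonetic similarity the script also uses is omitted, and the toy word list below is mine, not the Shakespeare + KJV vocabulary.

def levenshtein(a, b):
    # Same dynamic-programming edit distance as in the earlier sketch.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def map_words(line, wordlist, max_dist=3):
    # Tokens with no close match are kept as-is.
    out = []
    for token in line.split():
        best = min(wordlist, key=lambda w: levenshtein(token, w))
        out.append(best if levenshtein(token, best) <= max_dist else token)
    return " ".join(out)

wordlist = ["small", "only", "gnarled", "again", "elah", "somerset"]
print(map_words("uela sgonl ela gnrl somnrrf uonl", wordlist))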
edit: When translating into modern English and interpreting in prose, ChatGPT really gets into storytelling mode.
(30-11-2025, 10:06 AM)Koen G Wrote: Thanks, Marco, that's quite illuminating. I'd still say that this isn't too far from what solvers tend to do.
I agree this is very close to the pre-LLM solutions that were popular until recently.
I think there are ways the results could be improved, e.g.:
- Use a transliteration system like CUVA that aggregates the most common EVA bigrams (this marginally increases entropy).
- A more sophisticated simple substitution: instead of simply matching character frequencies, optimize bigram frequencies (I guess this would mostly map EVA o, a, e to vowels).
- Use the frequency of target-language words together with Levenshtein distance, so that the most frequent words are preferred (a sketch follows below).
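For that last point, a minimal sketch of what frequency weighting could look like, assuming a combined score of edit distance minus a log-frequency bonus; the weight of 0.5 is arbitrary, the counts are toy numbers, and a real run would count words in the actual corpus.

import math
from collections import Counter

def levenshtein(a, b):
    # Dynamic-programming edit distance, as in the sketches above.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def best_weighted(token, word_counts, weight=0.5):
    # Lower score wins: frequent words beat rare ones at similar distance.
    # Too large a weight would let very common words swallow everything.
    def score(w):
        return levenshtein(token, w) - weight * math.log(word_counts[w])
    return min(word_counts, key=score)

counts = Counter({"small": 500, "shawl": 5, "the": 60000, "again": 700})
print(best_weighted("smonl", counts))  # 'small': near and reasonably frequent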
In any case, solvers tend not to be very sophisticated, and it's possible that these "improvements" would make the system less close to the solutions we are familiar with.