I tried to replicate the results from Koen's blog post using a genetic algorithm in Python.
After many trials I actually got the same changes as Koen did (phew!).
It seems Koen did an awesome job getting his results.
Running the program tens of times, sometimes for hundreds of generations, several results turned up, all similar to Koen's, but nothing beat them on h1 and h2 considered together.
The output from my program was fed into nablator's 'Entropy' Java code and the results are posted here.
My numbers are not quite the same because I used a slightly different Q13 text as the original input.
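I haven't posted my program, so here is a rough sketch of the kind of genetic algorithm I mean. The candidate list, population size, mutation rate and the fitness function (here: maximize the conditional bigram entropy h2 of the transformed text) are illustrative assumptions, not necessarily what Koen or I actually used:

```python
import math
import random

# Candidate glyph groups to merge into single symbols (illustrative list).
CANDIDATES = ['qot', 'qok', 'ot', 'or', 'ol', 'ok', 'eey', 'edy',
              'ar', 'al', 'she', 'che', 'ain', 'aiin', 'dy', 'ey']
# Spare single characters to stand in for merged groups; none of these
# appear in EVA text, so no collisions.
SYMS = '%!{}@#$&*+=~^<>'

def apply_groups(text, groups):
    # Longest groups first, so e.g. 'aiin' is not eaten by 'ain'.
    for group, sym in zip(sorted(groups, key=len, reverse=True), SYMS):
        text = text.replace(group, sym)
    return text

def h2(text):
    """Conditional bigram entropy in bits: H(pairs) - H(singles)."""
    n = len(text)
    uni, bi = {}, {}
    for i in range(n - 1):
        uni[text[i]] = uni.get(text[i], 0) + 1
        bi[text[i:i + 2]] = bi.get(text[i:i + 2], 0) + 1
    uni[text[-1]] = uni.get(text[-1], 0) + 1
    h1 = -sum(c / n * math.log2(c / n) for c in uni.values())
    m = n - 1
    hb = -sum(c / m * math.log2(c / m) for c in bi.values())
    return hb - h1

def evolve(text, k=12, pop_size=30, gens=50, seed=1):
    """Evolve a set of k groups that maximizes h2 of the transformed text."""
    rng = random.Random(seed)
    fit = lambda g: h2(apply_groups(text, g))
    pop = [rng.sample(CANDIDATES, k) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit, reverse=True)
        survivors = pop[:pop_size // 2]      # keep the fitter half
        pop = list(survivors)
        while len(pop) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Crossover: half of parent a, topped up from parent b, deduped.
            child = list(dict.fromkeys(a[:k // 2] + b))[:k]
            if rng.random() < 0.3:           # point mutation
                unused = [c for c in CANDIDATES if c not in child]
                child[rng.randrange(k)] = rng.choice(unused)
            pop.append(child)
    return max(pop, key=fit)
```

Feeding it the preprocessed Q13 text and letting it run for hundreds of generations is where the hours go; the sketch above converges on toy input in seconds.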
Preprocess text:
sh   = %
ch   = !
ain  = {
aiin = }
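For what it's worth, the substitutions above can be applied with a few `str.replace` calls; the order matters, because 'aiin' contains 'ain':

```python
# Substitutions from the post, applied longest-first so that
# 'aiin' -> '}' wins before 'ain' -> '{' can split it into 'a{'.
SUBS = [('aiin', '}'), ('ain', '{'), ('sh', '%'), ('ch', '!')]

def preprocess(text):
    for old, new in SUBS:
        text = text.replace(old, new)
    return text

# e.g. preprocess('shedy aiin ain chol') -> '%edy } { !ol'
```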
Koen's results
java Entropy koen_best.txt
h1 = 4.2599308291379225
h2 = 2.7107551739850644
['qot', 'qok', 'ot', 'or', 'ol', 'ok', 'eey', 'edy', 'ar', 'al', '%e', '!e']
My best result
java Entropy my_best.txt
h1 = 4.268190134298851 (my best, but with a higher h1)
h2 = 2.7127760842416575
['qot', 'qok', 'or', 'ol', 'ok', 'hy', 'eey', 'edy', 'ar', 'al', '%e', '!e'] diff:: 'hy' for 'ot'
A close result
java Entropy nearly.txt (a close result, but with a lower h1)
h1 = 4.24663933742455
h2 = 2.7104328131417854
['qot', 'qok', 'ot', 'or', 'ol', 'ok', 'eey', 'edy', 'ar', 'al', '%ey', '!e'] diff:: '%ey' for '%e', i.e. 'shey' for 'she'
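For anyone who wants to sanity-check numbers like these without the Java code: I'm assuming nablator's Entropy computes the usual definitions, h1 as the unigram character entropy and h2 as the conditional bigram entropy, both in bits per character. If his code differs (e.g. in how it treats spaces or line breaks), the values will differ too. A minimal sketch under those assumptions:

```python
import math
from collections import Counter

def entropies(text):
    """Return (h1, h2): unigram entropy and conditional bigram
    entropy, both in bits per character."""
    uni = Counter(text)
    bi = Counter(text[i:i + 2] for i in range(len(text) - 1))
    n, m = sum(uni.values()), sum(bi.values())
    h1 = -sum(c / n * math.log2(c / n) for c in uni.values())
    hb = -sum(c / m * math.log2(c / m) for c in bi.values())
    # H(next char | current char) ~ H(pair) - H(single)
    return h1, hb - h1
```

Usage would be something like `h1, h2 = entropies(open('my_best.txt').read())` (file name hypothetical); a strictly alternating text like 'ababab...' gives h1 near 1 bit and h2 near 0, which is a quick way to check the implementation.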