|
|
|
| starting glyphs on paragraphs |
|
Posted by: Juan_Sali - 24-05-2022, 02:21 PM - Forum: Analysis of the text
- Replies (8)
|
 |
The most common starting glyphs of paragraphs are k and f. I consider the whole k family to be the same glyph, and likewise for the f family.
In my opinion, both of them at the start of a paragraph are nulls in terms of forming a monogram/bigram/trigram.
One option is that they indicate the language of the written text.
Would a text analysis be possible of the glyphs of paragraphs starting with k (excluding the starting k), and then of the glyphs of paragraphs starting with f (excluding the starting f)? Better if the analyses are separated by scribe.
And then check whether there are significant differences between the two.
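The proposed comparison could be sketched roughly like this, assuming paragraphs are available as plain EVA strings. The sample paragraphs, variable names and helper function below are all hypothetical, just to illustrate the procedure:

```python
# Minimal sketch: pool glyph counts per paragraph-initial glyph and
# compare relative frequencies. Sample data is invented for illustration.
from collections import Counter

paragraphs = [
    "kchedy qokeedy shedy",   # hypothetical paragraph starting with k
    "fchedy okeedy chedy",    # hypothetical paragraph starting with f
    "kor shedy qokain",
]

def glyph_counts(paras, initial):
    """Pool glyph counts from paragraphs starting with `initial`,
    excluding that starting glyph itself (as proposed above)."""
    counts = Counter()
    for p in paras:
        if p.startswith(initial):
            counts.update(c for c in p[1:] if not c.isspace())
    return counts

k_counts = glyph_counts(paragraphs, "k")
f_counts = glyph_counts(paragraphs, "f")

# Compare relative frequencies glyph by glyph.
k_total = sum(k_counts.values())
f_total = sum(f_counts.values())
for g in sorted(set(k_counts) | set(f_counts)):
    print(g, k_counts[g] / k_total, f_counts[g] / f_total)
```

A real run would use a transliteration file split by paragraph (and, ideally, by scribe), and a proper significance test on the two distributions rather than eyeballing the printed frequencies.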
|
|
|
| New blog page |
|
Posted by: Ruby Novacna - 22-05-2022, 05:09 PM - Forum: News
- Replies (18)
|
 |
Hello everyone!
I've just added a new page (linked in the original post) to my Reading Voynich blog, where I've decided to collect all the words I've looked at up to now, together with my translation suggestions.
Happy reading!
|
|
|
| Bigrams across uncertain spaces |
|
Posted by: RobGea - 21-05-2022, 05:56 PM - Forum: Analysis of the text
- Replies (26)
|
 |
All errors are mine, and there probably are some.
Extract the text from ZL_ivtff_2a using IVTT, from the Windows command line: "ivtt -x7 -s0 -h0 ZL_ivtff_2a.txt ZL2adot-comma.txt"
The resulting text file contains pure text; commas and dots are kept, as are '?', '*' and the @123; notation.
Find all the commas denoting uncertain spaces, take the EVA character on each side of the comma, and concatenate the two to form a bigram, i.e. 'X,Y' gives 'XY'.
Find all the dots denoting certain spaces, take the EVA character on each side of the dot, and concatenate the two to form a bigram, i.e. 'X.Y' gives 'XY'.
Count them, rank them, and compute two simple statistics:
abs(Percentage divide) = take the percentage occurrence of the comma bigram and the percentage occurrence of the dot bigram, and divide whichever number is higher by the lower.
abs(Rank Comma - Rank Dot) = the absolute value of the dot bigram's rank subtracted from the comma bigram's rank.
I used the 'percentage divide' and 'rank subtract' because they make it easier to spot the differences and provide a simple way to compare the two sets.
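The extraction and the two statistics could be sketched as follows, assuming the IVTT output is read in as one flat string in which ',' marks an uncertain space and '.' a certain one. The sample string and function names are hypothetical:

```python
# Sketch of the bigram-across-separator procedure described above.
from collections import Counter

def bigrams_around(text, sep):
    """Collect X<sep>Y bigrams as 'XY' for every occurrence of `sep`
    flanked by EVA letters (skips '?', '*' and other markers)."""
    counts = Counter()
    for i, ch in enumerate(text):
        if ch == sep and 0 < i < len(text) - 1:
            left, right = text[i - 1], text[i + 1]
            if left.isalpha() and right.isalpha():
                counts[left + right] += 1
    return counts

def ranked_percentages(counts):
    """Map bigram -> (rank, count, percentage of total)."""
    total = sum(counts.values())
    out = {}
    for rank, (bg, n) in enumerate(counts.most_common(), start=1):
        out[bg] = (rank, n, 100.0 * n / total)
    return out

sample = "or,aiin.ol,daiin.or.aiin"   # invented stand-in for the IVTT output
commas = ranked_percentages(bigrams_around(sample, ","))
dots = ranked_percentages(bigrams_around(sample, "."))

# The two comparison statistics from the post:
for bg in set(commas) & set(dots):
    c_rank, _, c_pct = commas[bg]
    d_rank, _, d_pct = dots[bg]
    pct_divide = max(c_pct, d_pct) / min(c_pct, d_pct)
    rank_diff = abs(c_rank - d_rank)
    print(bg, round(pct_divide, 3), rank_diff)
```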
Top12 shown here:
Code: Commas (X,Y): 2737 total; Dots (X.Y): 30890 total
Columns: (Rank, count, bigram, %) for commas; (Rank, count, bigram, %) for dots; abs(Percentage divide); abs(Rank Comma - Rank Dot)
('R1', 285, 'ra', 10.413) ('R14', 730, 'ra', 2.363) 4.407 13
('R2', 147, 'lc', 5.371) ('R10', 1146, 'lc', 3.71) 1.448 8
('R2', 147, 'lk', 5.371) ('R33', 201, 'lk', 0.651) 8.25 31 --lk
('R4', 125, 'ls', 4.567) ('R15', 672, 'ls', 2.175) 2.1 11
('R5', 119, 'sa', 4.348) ('R28', 245, 'sa', 0.793) 5.483 23
('R6', 101, 'yk', 3.69) ('R18', 443, 'yk', 1.434) 2.573 12
('R7', 93, 'ol', 3.398) ('R39', 118, 'ol', 0.382) 8.895 32 --ol
('R8', 90, 'ld', 3.288) ('R17', 569, 'ld', 1.842) 1.785 9
('R9', 83, 'ro', 3.033) ('R6', 1355, 'ro', 4.387) 1.446 3
('R10', 78, 'lo', 2.85) ('R11', 996, 'lo', 3.224) 1.131 1
('R11', 73, 'yd', 2.667) ('R7', 1275, 'yd', 4.128) 1.548 4
('R12', 68, 'yt', 2.484) ('R23', 312, 'yt', 1.01) 2.459 11
('R12', 68, 'ok', 2.484) ('R52', 65, 'ok', 0.21) 11.829 40 --ok
Observations:
The 'o<character>' family turns up a lot with high abs(Rank subtract) scores: ol, ok, or, oa, ot.
The bigram 'lg' has the highest abs(Rank subtract) score:
comma dot
('R58', 5, 'lg', 0.183) ('R206', 1, 'lg', 0.003) 61.0 148 --lg
One conclusion is that at least some of those 'l,g' bigrams are real bigrams and any apparent space is a scribal artifact.
Questions:
What does it mean when the dot-bigram occurrence percentage is higher than the comma-bigram occurrence percentage? E.g.:
('R38', 15, 'yo', 0.548) ('R2', 2687, 'yo', 8.699) 15.874 36
Data attached:
bigrams_un-certain_spaces.txt (Size: 10.34 KB / Downloads: 31)
|
|
|
| Upcoming Conference about Alchemy and Visual Culture |
|
Posted by: MichelleL11 - 20-05-2022, 01:16 PM - Forum: News
- No Replies
|
 |
You can attend virtually or in person; sign-up and further information are available through the link in the original post.
Through a Glass Darkly: The Visual Culture of Alchemy
Location and Time
Bowen Hall, PRISM, Princeton University
May 26 – 28, 2022
Conference
The pre-modern science and art of alchemy is famous for its vivid, often bizarre imagery. Alchemical images often represent ingredients and processes allegorically, suggesting analogies with other aspects of creation: from human generation and reproduction to the motion of heavenly bodies. Alchemists also used descriptive and diagrammatic images to convey practical and philosophical information about their art. Such imagery might catch the eye of readers and patrons, or present visual arguments for ideas about nature and artifice. Images also changed over time, as new audiences sought to decipher and adapt earlier depictions—whether reflecting new artistic trends, or revealing changing attitudes towards nature, matter, and antiquity.
This conference explores the visual language of alchemy within the broader cultural and intellectual context of pre-modern Europe. The conference accompanies the Princeton University Library exhibition “Through a Glass Darkly: Alchemy and the Ripley Scrolls, 1400–1700,” open in the Ellen and Leonard Milberg Gallery until July 17, 2022.
Speakers Include:
Donna Bilak (NYU Gallatin)
Stephen Clucas (Birkbeck, University of London)
Leah DeVun (Rutgers University)
Marina Escolano-Poveda (University of Liverpool)
Peter J. Forshaw (University of Amsterdam)
Marlis Hinckley (Johns Hopkins University)
Janna Israel (Princeton University Museum of Art)
Didier Kahn (CNRS, Paris)
Sharifa Lookman (Princeton University)
William R. Newman (Indiana University Bloomington)
Lawrence M. Principe (Johns Hopkins University)
Jennifer M. Rampling (Princeton University)
Melissa Reynolds (Princeton University Society of Fellows)
Sergei Zotov (University of Warwick)
Organizers and Sponsors
The event is hosted by the Princeton Institute for the Science and Technology of Materials (PRISM), which houses some of the world’s most sophisticated imaging technology: a modern analogue for alchemists’ attempts to visualize the interior of matter.
Sponsored by the Office of the Dean for Innovation, the Princeton Humanities Council, and the Society for the History of Alchemy and Chemistry (SHAC).
Organized by Jennifer M. Rampling (Princeton University).
|
|
|
| Polyphonic ciphers |
|
Posted by: Juan_Sali - 19-05-2022, 12:35 PM - Forum: Voynich Talk
- Replies (1)
|
 |
I am looking for polyphonic ciphers from the times of the VM, a century either way, with the following characteristics:
Preferably in Latin and/or Spanish.
Polyphonic candidates: Q(U)-CA-G(U), L-I, S-X, V-B (V used as in Latin, so B can stand for V/U), ET-I (ampersand), ...
I would appreciate any other suggestion for polyphonics.
|
|
|
| Words ain and dain |
|
Posted by: Ruby Novacna - 09-05-2022, 12:06 PM - Forum: Analysis of the text
- Replies (8)
|
 |
The separate words ain and dain are not among the most frequent: less than a hundred for ain and two hundred for dain.
I propose to read them in Greek as ειν and τειν:
- ειν = εν, "in";
- ειν = ου, "where";
- ειν, a (rare?) form of ειναι, the infinitive of ειμι, "to be";
- τειν, dative of the singular pronoun συ, "you, thou".
I don't know if statistics alone, without full or partial translation, can confirm or refute this reading.
|
|
|
| Roman numerals and entropy |
|
Posted by: Koen G - 05-05-2022, 10:06 AM - Forum: Analysis of the text
- Replies (1)
|
 |
Several people expressed their interest in the entropy behavior of Roman numerals. I decided to run a few preliminary tests to get a first basic idea of the numbers. These are just experiments; they do not represent any reality found in real manuscripts.
First I made a list of the numbers 1 - 3500 and placed them in a random order. Then I used Excel's ROMAN function to convert these to Roman numerals. I made a second version of the Roman numerals file that uses additive notation (XIIII instead of XIV). Then I made versions of each of these where final letters are replaced with swooped versions (IIJ instead of III). This resulted in five files to test:
01: randomized list of numerals 1-3500
02: list (01) converted to Roman numerals
03: list (02) with additive notation
04: list (02) with swoops
05: list (03) with swoops
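The transformations behind files 01, 03 and 05 could be sketched like this. Rather than converting Excel's ROMAN output, the sketch below generates additive numerals directly; the function names and the exact swoop rule (final I becomes J) are assumptions based on the description above:

```python
# Sketch of the randomized list, additive Roman numerals, and swoops.
import random

def to_roman_additive(n):
    """Purely additive Roman numerals: 14 -> XIIII, not XIV."""
    vals = [(1000, "M"), (500, "D"), (100, "C"),
            (50, "L"), (10, "X"), (5, "V"), (1, "I")]
    out = []
    for v, sym in vals:
        count, n = divmod(n, v)
        out.append(sym * count)
    return "".join(out)

def swoop_final(numeral):
    """Replace a final I with J, mimicking the medieval habit of
    writing iij for iii."""
    if numeral.endswith("I"):
        return numeral[:-1] + "J"
    return numeral

numbers = list(range(1, 3501))
random.shuffle(numbers)                              # file 01
additive = [to_roman_additive(n) for n in numbers]   # file 03
swooped = [swoop_final(r) for r in additive]         # file 05
```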
I compared the h1 and h2 numbers to those of some Voynichese sections and a normal text (Chaucer). The green dot in the square is a minimally modified EVA version. I drew some arrows from the "best performing" Roman numeral version to modified EVA to Chaucer, just to make the graph a bit easier to read.
As we see, a randomized list of Roman numerals has significantly lower h1 and h2 than Voynichese, as we probably could have expected. After all, Roman numerals use only seven characters, which can be increased to a dozen or so if some swooped versions are added. This lack of character variety explains why h1 in particular is lower.
These numbers are pretty far off, but there may still be room for improvement. Apparently there were medieval practices that included additional symbols for some numbers, like O for XI (see the source linked in the original post). There is also mention of a system of lines or "brackets" to make numbers larger: for example, | I | would be 100,000 or something like that, and "( ( I ) )" is a way to write 10,000. If a bracket system was used to enlarge numbers, it might explain benched gallows. But testing this would require more thought, time and planning than this initial experiment allowed.
|
|
|
| Spaces and Entropy |
|
Posted by: Koen G - 03-05-2022, 02:01 PM - Forum: Analysis of the text
- Replies (32)
|
 |
A couple of years ago, when I was playing around with entropy statistics, I also wanted to test the impact of spaces. I don't recall if I ever published these data, but I do remember emailing Marco about it. We concluded that a good way to test spaces would be to remove the variable entirely: strip all spaces from all texts tested, to level the playing field.
Predicted outcome:
- H1: "space" is a frequent character, so removing it may increase h1.
- H2: I'd expect an increased h2 in all texts, because removing spaces will create novel bigrams where the endings and beginnings of words meet.
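The experiment can be sketched with the usual definitions of these measures: h1 as the unigram (character) entropy and h2 as the conditional entropy of a character given the previous one. The sample text below is invented, and a real run would use full transliteration files:

```python
# Sketch: h1 and h2 for a text, with and without spaces.
import math
from collections import Counter

def h1(text):
    """Unigram character entropy in bits."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def h2(text):
    """Conditional entropy H(X_n | X_{n-1}) = H(bigram) - H(unigram),
    where the unigram distribution is over the first element of each bigram."""
    bigrams = Counter(zip(text, text[1:]))
    total = sum(bigrams.values())
    h_bi = -sum((c / total) * math.log2(c / total) for c in bigrams.values())
    return h_bi - h1(text[:-1])

sample = "daiin shedy qokeedy daiin chedy"   # invented stand-in text
print(h1(sample), h2(sample))
print(h1(sample.replace(" ", "")), h2(sample.replace(" ", "")))
```

Removing spaces then amounts to comparing the two printed pairs: the second pair shows the statistics once "space" is taken out of the alphabet and novel word-boundary bigrams appear.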
Results:
The big red cloud is the comparison corpus of medieval texts in various languages. Purple is the same corpus with spaces removed. As predicted, there is a general shift towards the top-right. The h2 of all texts increases by 3% (some Latin texts) to 16% (an English text). The median increase is 9%.
The green dots are various sections of the VM in EVA. Their h1 does not change much, but h2 does increase, as predicted. The increase for EVA texts is around 10%, close to the median. In other words, taking spaces out of the equation does not help EVA at all to catch up with other texts, since their h2 increases just as much (if not more).
Finally, the blue dot is a slightly modified Herbal A, which tries to mitigate EVA's most entropy-reducing characteristics. Benched gallows were unstacked, then benches replaced by novel glyphs. "Ain" and "aiin" were also replaced. Here, h1 does increase like in normal texts. Moreover, h2 goes up by 15%, which would be the biggest gain if it weren't for one English text.
Here is the same graph again, but only with texts that had their spaces removed:
Conclusion: accounting for some of the more obvious effects of EVA and eliminating spaces as a variable is not enough to fix Voynichese's entropy problem.
|
|
|
|