06-02-2026, 08:20 PM
Thanks everyone, I already found a few interesting threads worth picking up.
First, to clarify what I was getting at: I agree we don't need to call them "vords". The question wasn't really about terminology. It was that these space-delimited units seem to have internal structure more like sentences than typical words. Sentences have openers and closers, paradigmatic choices, continuation dependencies. Words generally don't show these properties at the sub-word level, at least not to this degree. That's what I find odd.
On context-sensitivity, here's something rather striking. Comparing different text types across the whole manuscript:
Labels: q 1.0%, standalone s 15.5%
Paragraphs: q 15.8%, standalone s 6.5%
Circular text: q 2.0%, standalone s 12.9%
Radial text: q 5.0%, standalone s 15.9%
The pattern holds across every section. Non-paragraph contexts suppress q and elevate s. One way to think about it: labels don't need to introduce new referents (the visual already shows you what's being labelled) but may need more anchoring to what's already established. Though of course that's just one possible reading.
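For anyone who wants to check these rates against a transliteration, here is a minimal sketch. I'm assuming "q" means the share of words beginning with q and "standalone s" means the one-glyph word s; the helper and the toy word lists are mine for illustration, not output from any standard tool:

```python
def qs_rates(words):
    """Percent of EVA words that start with q, and percent that are exactly 's'."""
    n = len(words)
    q = sum(1 for w in words if w.startswith("q"))
    s = sum(1 for w in words if w == "s")
    return 100 * q / n, 100 * s / n

# Toy word lists standing in for real per-context transliterations
contexts = {
    "labels":     ["otaldy", "s", "okary", "s", "otedy", "okaly"],
    "paragraphs": ["qokeedy", "qokain", "chedy", "shedy", "qokal", "daiin"],
}

for name, words in contexts.items():
    q_pct, s_pct = qs_rates(words)
    print(f"{name}: q {q_pct:.1f}%, standalone s {s_pct:.1f}%")
```

Swapping in real per-context word lists from any transliteration file should reproduce (or challenge) the table above.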
The e-gradient is also consistent across both Currier languages:
Currier A: e 57% → ee 34% → eee 9% after ch/sh
Currier B: e 65% → ee 27% → eee 10% after ch/sh
The absolute frequencies differ (Currier B has more d, famously) but the structural gradient has the same shape. So whatever system produces these patterns, it appears stable across sections, text types, and Currier languages. The frequencies vary, but the rules don't.
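The gradient itself is easy to measure: take every maximal run of e glyphs immediately following ch or sh and tabulate the run-length distribution. A sketch assuming plain EVA text as input (the regex and the sample line are illustrative, not real manuscript data):

```python
import re
from collections import Counter

def bench_e_gradient(text):
    """Distribution (%) of e-run lengths immediately following ch or sh."""
    runs = Counter(len(m.group(1)) for m in re.finditer(r"(?:ch|sh)(e+)", text))
    total = sum(runs.values())
    return {length: 100 * n / total for length, n in sorted(runs.items())}

# Toy line standing in for a real transliteration
sample = "chedy sheedy chey shedy cheeedy chedy"
print(bench_e_gradient(sample))  # maps run length -> percent of bench-following runs
```

Running this separately over Currier A and Currier B pages is all it takes to compare the two gradients.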
Rafal asks about proof of meaningfulness. I suspect "meaningful or not" may be less useful than asking what generative system, if any, produces these constraints. Even a hoax needs a procedure that respects the regularities. And if that procedure is consistent enough to show the same gradients in Currier A and B, the same q/s asymmetry across all text types, that's a fairly disciplined hoax.
(06-02-2026, 02:01 AM)oshfdk Wrote: I usually treat the manuscript as a cipher, so for me all of these are glyph sequences that have no semantics of their own.

On the cipher framing: I agree that treating the glyphs as having no semantics of their own is reasonable. But the structural question remains regardless. I'm curious whether you see these patterns arising from a system that operates before encoding (a language, notation, or formal system), or from one that operates during encoding? And what properties would such a system need in order to reproduce the observed boundary and continuation effects?
(06-02-2026, 11:02 AM)dashstofsk Wrote: e and i are the only strokes that repeat. Many words have the format of starting as an e stroke string and continuing as an i stroke string. I mentioned something about this in previous posts. My personal conviction is that it is just a fabrication. An easy way for the writer to construct meaningless text.

dashstofsk's stroke-repetition observations are indeed worth noting, and Bluetoes101's transition rules formalise similar intuitions. These are genuinely interesting frameworks. But I'm not sure "easy to repeat" accounts for everything. Take the e/ee/eee pattern: single e follows ch/sh about 63% of the time, ee only 28%, and eee just 9%. The environment shifts systematically as length increases. If this were simply about ease of repetition, why would longer chains actively avoid appearing after ch/sh? A grammatical analogy would be a derivational gradient ("quick" → "quickly" → "quickness"), where the longer forms occupy different structural positions. What would the ergonomic account predict here?