The Voynich Ninja

Brian Roemmele, AI and Rupescissa question
I wasn’t sure what category to put this in. I just happened upon an X posting by a Brian Roemmele (AI expert) from March that says he has detracted the Latin from the last page. It says, “the wine multiplies in the bowl”. The phrase is the same as one from John Rupescissa. He says the AI has since located other indications of Rupescissa, and so he’s building an AI around the Voynich and Rupescissa to decipher more. That’s about it. No mention of what his parameters were. To see more, I had to subscribe to this account, so I did but there’s been no news since March. 

The thing is, my analysis of the images turned up Rupescissa at every turn. Koen and b1Mw (sp?) can confirm. That’s why I subscribed, but I don’t want to pay the big bucks of $7 a month if this is some kind of a scam!

Does anyone know anything about this???
I don't remember seeing anything similar.
What would have been Rupescissa's literal text?
(06-08-2024, 09:28 AM)Barbrey Wrote: ... that says he has detracted the Latin from the last page. It says, “the wine multiplies in the bowl”...

If I understand YOUR posting correctly, you have interpreted Roemmele's X posting to be saying that he himself has "detracted" (?!?) the Latin from the last page and determined that it says "the wine will multiply in the bowl".
But I don't believe that is what is in the X posting.  Roemmele has given a "prompt" (that is a question or instructions) to a Large Language Model (ChatGPT or some similar LLM) and it is the LLM that is claiming that the last page's Latin says "the wine will multiply in the bowl".  The entire section quoted below the label "Output" (in his X posting) is what Roemmele got as output from the LLM. And it is likely all nonsense as far as any connection to the VMS goes.

LLMs cannot actually understand anything or reason about things like a human. There are some AI practitioners who believe they can, but Roemmele is not one of them. (Nor am I.) All an LLM can do is generate words one at a time -- basing each sequential word on what it has seen most frequently within the zillions of examples in its training data. Usually what it comes up with is factually correct. But sometimes it is wrong. And sometimes it is epically wrong.

From what I have seen of Roemmele's interviews and writings, he promotes the idea that one can get some very useful and accurate results (for some complex problems) from an LLM, provided one guides it through the reasoning process using carefully designed prompts. And that is true. (In fact, I've used the approach to develop some pretty powerful applications that do things like extract and summarize results from research papers.)

Apparently he (perhaps with others, since he says "we") has embarked on an (absurdly) hopeful project to decode the Voynich MS using this approach; he says (without any background context): "We have started on the last page of Voynich Manuscript going backwards as this is the esoteric and alchemical way to read a 3 level encoded book."
(I have no idea what that last part means --  it sounds a lot like the kind of word-salad that a Charlatan spouts off when trying to appear scholarly. Just saying.)

My opinion:  It is true that if you design a series of prompts in the right way, you can effectively get the LLM to reason its way through certain complex problems with some success. But what is happening is that the intelligence for the reasoning is being injected into the activity by the human prompter. In effect, the human provides the intelligence of reasoning and the LLM provides only what LLMs are actually capable of -- statistical word prediction based on the huge volume of language examples that were used to train it.

It is **conceivable** that an LLM trained with enough Latin text examples **might** be able to convert the text on the last page of the VMS into something correct. Maybe. So perhaps it is worth investigating that candidate phrase further through other means just to see if it holds up. But it is still a stretch for any current LLM to figure out the last page's Latin possibilities better than the numerous knowledgeable human researchers that have had a kick at that can. (It is actually more likely for the LLM to regurgitate the supposed Latin translation due to seeing it suggested by someone within some text that made its way into the training data used on the LLM.)

As for using the guided-prompts approach to get anywhere beyond that towards decoding any other folios (and specifically beyond just the Latin part of the last folio page), it is highly, highly ... highly ... unlikely. The necessary "information" just isn't present either in the training data that has been used on current LLMs or in the minds of the humans designing the prompts.

I would start saving your $7/month.
Quote:...be able to convert the text on the last page of the VMS into something correct.

Whatever "correct" means in this context.

Out of curiosity, this paper may be interesting reading to understand AI prompting models.

[link]
(07-08-2024, 02:42 AM)asteckley Wrote: The entire section quoted below the label "Output" (in his X posting) is what Roemmele got as output from the LLM.

Thanks for the explanation, yesterday I didn't understand anything. 
Can we interpret the sentence "I have detracted writing in Latin ..." as "I limited my writing to Latin..."?
(07-08-2024, 06:24 AM)Scarecrow Wrote:
Quote:...be able to convert the text on the last page of the VMS into something correct.

Whatever "correct" means in this context.

Out of curiosity, this paper may be interesting reading to understand AI prompting models.

[link]

Actually that paper has little connection to the kind of guided-prompt approach that Roemmele is trying and to which I referred.  No offense to Scarecrow, but the paper has little to do with AI either. It's rather useless in my opinion.

Granted, it was presented at a conference dealing with "Human-Computer Interaction for Work" and so it has more to do with the psychology of the interactive process than with the effectiveness of AI/LLMs, but even still -- it is pretty devoid of value.

The diagram presented in the above message is illustrative of this. They say a picture is worth a thousand words? This one is worth less than twenty. It says nothing more than: 
     "Keep changing your question until you get the answer you want. Then ask another."
Do we really need a diagram to convey that... let alone a whole research study and paper?

That whole "participatory prompting" approach implicitly assumes the AI has the information and you just need to figure out how to get it out. (Not unlike an interrogator who persists in questioning a prisoner until he finally gets the confession he is looking for.) In fact, the AI may not have the information that constitutes the answer (as is very likely with decoding a one-of-a-kind document like the VMS).

The guided prompting is more about understanding the logical steps needed to solve a problem and then composing a set of prompts that walks the AI (LLM) through those steps -- providing it with reasoning power that it otherwise lacks. As such, the prompts can be pre-defined by the user before even starting the interactions with the AI -- there is no need for an iterative and interactive process as is done with their "participatory prompting".

A simple illustrative example: 
Suppose you want to determine whether some written argument makes its case in a solid way or whether it is flawed. Rather than just prompt it with "Does it argue its case in a solid way?" (which requires the LLM to actually "understand" the semantic content of both argument and prompt, and not just the syntactic arrangements of their words), you might ask a set of questions like:
   1) What does it say about <concept X>?  
   2) What does it say about <concept Y>?
   3) Is there anything in <answer to 1> that contradicts anything in <answer to 2>?
   4) Do <answer to 1> and <answer to 2> both support the case being argued?
   5) What important points does it neglect in the argument that could undermine its case?

(The above example is just off the top of my head; it is not meant to be an actual set of guided prompts -- just meant to be illustrative of the overall idea of providing the path-of-reason to the dumb AI.)
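To make the idea concrete, here is a minimal Python sketch of that kind of pre-defined prompt chain. It is only illustrative: the `ask` function is a hypothetical stand-in for whatever LLM API one actually uses (here it is stubbed to return a canned string so the control flow can be seen end to end), and the prompts simply follow the five questions listed above.

```python
# Minimal sketch of a pre-defined ("guided") prompt chain.
# `ask` is a hypothetical placeholder for a real LLM call; a real
# version would send `prompt` to a model and return its completion.

def ask(prompt: str) -> str:
    """Stubbed LLM call: echoes the prompt so the chain is runnable."""
    return f"[model answer to: {prompt}]"

def evaluate_argument(document: str, concept_x: str, concept_y: str) -> list[str]:
    """Walk the model through fixed reasoning steps, feeding earlier
    answers into later prompts, instead of asking one big question."""
    a1 = ask(f"In the text below, what is said about {concept_x}?\n{document}")
    a2 = ask(f"In the text below, what is said about {concept_y}?\n{document}")
    a3 = ask(f"Does anything in this statement:\n{a1}\n"
             f"contradict anything in this one?\n{a2}")
    a4 = ask(f"Do these two statements both support the case being argued?\n"
             f"{a1}\n{a2}")
    a5 = ask(f"What important points does the text below neglect that could "
             f"undermine its case?\n{document}")
    return [a1, a2, a3, a4, a5]

if __name__ == "__main__":
    for step, answer in enumerate(
            evaluate_argument("(argument text here)", "concept X", "concept Y"),
            start=1):
        print(f"Step {step}: {answer[:70]}")
```

The point of the structure is that the human decides the reasoning path in advance (the five fixed prompts), and the model only fills in text at each step.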
(07-08-2024, 11:30 AM)Ruby Novacna Wrote:
(07-08-2024, 02:42 AM)asteckley Wrote: The entire section quoted below the label "Output" (in his X posting) is what Roemmele got as output from the LLM.

Thanks for the explanation, yesterday I didn't understand anything. 
Can we interpret the sentence "I have detracted writing in Latin ..." as "I limited my writing to Latin..."?

I don't think so. I think "detracted" might be a typo. ("extracted"?  "decoded"?).   There are a few other typos or missing words in that same quoted block that Roemmele put in the posting.  

That is weird because you would assume he cut-and-pasted the whole thing directly from the output of his LLM. And for all the faults of LLMs, they very rarely make spelling or grammatical mistakes. (Perhaps he is using his own home-grown LLM, or one of lower ability than, say, ChatGPT-4.)
(07-08-2024, 02:42 AM)asteckley Wrote: Apparently he (perhaps with others, since he says "we") has embarked on an (absurdly) hopeful project to decode the Voynich MS using this approach; he says (without any background context): "We have started on the last page of Voynich Manuscript going backwards as this is the esoteric and alchemical way to read a 3 level encoded book."

f42 is another great option.
Is "detracted" a typo for "detected"?
(07-08-2024, 05:51 PM)R. Sale Wrote: Is "detracted" a typo for "detected"?
Could be that too.
In any case it seems to be a bit of a 'failure' from the LLM itself.