(07-08-2024, 06:24 AM)Scarecrow Wrote: Quote:...be able to convert the text on the last page of the VMS into something correct.
Whatever "correct" means in this context.
For curiosity, this paper may be interesting reading to understand AI prompting models.
Actually, that paper has little connection to the kind of guided-prompt approach that Roemmele is trying and to which I referred. No offense to Scarecrow, but the paper has little to do with AI either; in my opinion it is rather useless.
Granted, it was presented at a conference on "Human-Computer Interaction for Work," so it has more to do with the psychology of the interactive process than with the effectiveness of AI/LLMs, but even so it is pretty devoid of value.
The diagram presented in the above message is illustrative of this. They say a picture is worth a thousand words? This one is worth less than twenty. It says nothing more than:
"Keep changing your question until you get the answer you want. Then ask another."
Do we really need a diagram to convey that... let alone a whole research study and paper?
That whole "participatory prompting" approach implicitly assumes the AI already has the information and you just need to figure out how to extract it. (Not unlike an interrogator who keeps questioning a prisoner until he finally gets the confession he is looking for.) In fact, the AI may not have the information that constitutes the answer, as is very likely when decoding a one-of-a-kind document like the VMS.
Guided prompting, by contrast, is about understanding the logical steps needed to solve a problem and then composing a set of prompts that walks the AI (LLM) through those steps -- thereby supplying it with reasoning power that it otherwise lacks. As such, the prompts can be pre-defined by the user before even starting the interaction with the AI -- there is no need for the iterative and interactive process of their "participatory prompting".
A simple illustrative example:
Suppose you want to determine whether some written argument makes its case in a solid way or whether it is flawed. Rather than just prompting the LLM with "Does it argue its case in a solid way?" (which requires it to actually "understand" the semantic content of both the argument and the prompt, not just the syntactic arrangement of their words), you might ask a set of questions like:
1) What does it say about <concept X>?
2) What does it say about <concept Y>?
3) Is there anything in <answer to 1> that contradicts anything in <answer to 2>?
4) Do <answer to 1> and <answer to 2> both support the case being argued?
5) What important points does it neglect in the argument that could undermine its case?
(The above example is just off the top of my head; it is not meant to be an actual set of guided prompts -- just an illustration of the overall idea of providing the path of reasoning to the dumb AI.)
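For the programmatically inclined, the chain above can be sketched as a small driver that runs a pre-defined list of prompts in order, filling placeholders like {1} with the answers to earlier steps. Everything here is hypothetical: `ask_llm` is a stand-in for whatever LLM API one actually uses (it is stubbed below so the sketch runs), and the prompt wording is only illustrative.

```python
# Sketch of guided prompting: a fixed chain of prompts, composed before
# any interaction with the model, where later prompts reference the
# answers returned by earlier ones.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client.
    The stub just echoes the prompt so the driver is runnable."""
    return f"<answer to: {prompt!r}>"

def run_guided_prompts(steps: list[str]) -> list[str]:
    """Run each prompt in order. Placeholders {1}, {2}, ... in a
    template are replaced by the answer to that (1-based) step."""
    answers: list[str] = []
    for template in steps:
        prompt = template
        for i, earlier in enumerate(answers, start=1):
            prompt = prompt.replace("{" + str(i) + "}", earlier)
        answers.append(ask_llm(prompt))
    return answers

# The five illustrative questions from the post, as templates:
steps = [
    "What does the argument say about concept X?",
    "What does the argument say about concept Y?",
    "Is there anything in '{1}' that contradicts anything in '{2}'?",
    "Do '{1}' and '{2}' both support the case being argued?",
    "What important points does the argument neglect that could undermine its case?",
]

answers = run_guided_prompts(steps)
```

The point of the sketch is that the reasoning structure lives in the `steps` list, not in the model: the user decides the path of reasoning up front, and the LLM only answers one narrow question at a time.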