TEST * YOUR * THEORY * HERE - Printable Version

+- The Voynich Ninja (https://www.voynich.ninja)
+-- Forum: Voynich Research (https://www.voynich.ninja/forum-27.html)
+--- Forum: Theories & Solutions (https://www.voynich.ninja/forum-58.html)
+--- Thread: TEST * YOUR * THEORY * HERE (/thread-4850.html)
RE: TEST * YOUR * THEORY * HERE - gaugamela - 10-12-2025

(25-09-2025, 01:38 PM)Jorge_Stolfi Wrote:
(25-09-2025, 11:46 AM)ReneZ Wrote: Internet chat bots are just one very specific implementation of AI.

That seems like a gross misrepresentation of the current state of the technology. I tried both of your examples and got very different solutions. No brute forcing at all, rather checking divisibility, which would seem like the easiest way. As for the context, it depends a lot on the prompt and context you give it. I've gotten usable and very accurate solutions for diffusion and convection problems, fits, and all sorts of other not so trivial questions. Obviously one cannot simply rely on these; a human has to check.

AI isn't a magic wand that can make sense of anything you throw at it; in fact it technically can't make meaning of things at all. I agree, it basically works like a large predictive search engine, trawling its database. There are gonna be blind spots and misinformation. Nonetheless it's a powerful tool in the right context. It can give people the ability to engage with things they have little knowledge of. Coding is a great example. Is the code right? 90% of the time. Is the code good? 50/50. Is the code usable? Almost always. For many people that is good enough; it's only a problem when vibe coding becomes the norm and isn't properly implemented.

The biggest problem is the opaque inner mechanism. We can only really study the outputs and make guesses and tweaks to the system. We have no real clue what's going on under the hood. Hallucinations are also a persistent problem, and their gravity depends on the user's familiarity with the subject. The biggest problem is users taking AI-generated content at face value and not checking it against actual sources. It's a tool like any other, no more and no less useful than the user operating it. As evidenced by some of the ludicrous theories about the Voynich Manuscript that get posted regularly.

RE: TEST * YOUR * THEORY * HERE - oshfdk - 10-12-2025

(10-12-2025, 02:24 PM)gaugamela Wrote: That seems like a gross misrepresentation of the current state of the technology.

But this is exactly what @Jorge_Stolfi said, I think. Modern AIs are fed truckloads of information during training, which includes discussions on all kinds of topics in many languages. It's highly likely that the AI has seen "usable and very accurate solutions for diffusion and convection problems" like the ones you mention and readily reproduced or combined them. But when an AI combines two things, it probably does so in a way that ensures a statistically seamless textual transition from A to B, which may or may not coincide with the right way of combining A and B. The result is not guaranteed to work, but it is guaranteed to look very plausible. And while in jobs like software engineering with proper TDD this is not a huge problem, since wrong solutions get rejected very quickly, I would say AIs are of very limited use in areas where verifying a solution is not much easier than producing it.

RE: TEST * YOUR * THEORY * HERE - Jorge_Stolfi - 10-12-2025

(10-12-2025, 02:24 PM)gaugamela Wrote: I've gotten usable and very accurate solutions for diffusion and convection problems, fits, and all sorts of other not so trivial questions.

They are pretty good at finding a standard solution which has been published somewhere. I myself use them for questions like "what command line options do I use to get ImageMagick to replace a given color by another color?". It generally works much better than searching the manual or searching for the answer with Google. But, in at least one case out of ten tries, it gave me an answer that was completely made-up nonsense.

Quote: I tried both of your examples and got very different solutions. No brute forcing at all, rather checking divisibility, which would seem like the easiest way.

For the "primes that end" questions, the right answers would have been "print(2)" and "print(5)", respectively. As I said, between my first test and the second one, ChatGPT somehow discovered the right answer for the "2" case. But if it still uses a loop and tests for divisibility in the "5" case, then it still has not learned that yet.
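To spell the point out: any number greater than 5 that ends in 5 is divisible by 5, so the only prime ending in 5 is 5 itself, and no searching is needed at all. Here is a minimal sketch of the contrast in Python; the loop version is only an assumed reconstruction of the kind of divisibility-testing code described above, not the code ChatGPT actually produced.

Code:
# Loop-and-test approach of the kind described above (an assumed
# reconstruction; the chatbot's actual output is not quoted in this thread).
def primes_ending_in_5(limit):
    found = []
    for n in range(2, limit):
        if n % 10 != 5:
            continue
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            found.append(n)
    return found

print(primes_ending_in_5(1000))  # prints [5]; the loop can never find anything else

# The answer that uses the arithmetic: any n > 5 that ends in 5 is a
# multiple of 5, hence composite, so the only prime ending in 5 is 5.
print(5)

The same argument gives print(2) for the other question: every number greater than 2 that ends in 2 is even, hence composite.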
Quote: Is the code right? 90% of the time. Is the code good? 50/50. Is the code usable? Almost always.

Again, it will give the right code if it happens to find one in its database that it can adapt in a straightforward way. For common types of problem, it may "almost always" work. But the danger is not that it will be "right 90% of the time you ask", but "right 90% of the input cases". Other people who use LLMs for writing code tell me that sometimes they spend much more time finding and fixing bugs than they saved by using the tool.

Quote: The biggest problem is users taking AI-generated content at face value and not checking it against actual sources.

Yup. When the correct answer (or some adaptable version of it) is not in their database, they will spit out garbage, as we see on this forum several times a week. And, AFAIK, because of the way they work, they cannot tell that what they assembled is garbage.

All the best, --stolfi

RE: TEST * YOUR * THEORY * HERE - R. Sale - 10-12-2025

This thread has gone totally off topic. It is meant to be a challenge to proposed translators. If the VMs can be read, read it! Read something that looks to be significant, not just this "Pour, mix, shaken, not stirred" buffalo shot. The four circles are examples of texts that I would most like to understand. Other suggestions are fine.

RE: TEST * YOUR * THEORY * HERE - Jorge_Stolfi - 10-12-2025

(10-12-2025, 07:38 PM)R. Sale Wrote: This thread has gone totally off topic.

Unfortunately, when I ask to read new posts, each page that I get is scrolled so as to show the new post but not the name of the thread. To see it, I would have to scroll, scroll, scroll up to the top of the page. Thus, when I reply to such a post, I usually don't know what thread I am in. I suppose that is true for most readers...

All the best, --stolfi