Replies: 8 comments 21 replies
-
Interesting idea. In my experience, chat AIs like ChatGPT aren't very good at Prolog in general. If an AI were trained to do this specifically, then maybe it would actually be good enough, but training an AI like that is really hard. I also think that Scryer's errors would need to improve drastically, with more information, before this could be remotely viable, though maybe a sufficiently advanced AI could infer most of it. My intuition is that a "dumb" system (in the sense of no trained statistical AI) like GUPU would be more impactful for less work.
-
There are some recent efforts at using LLMs to generate Prolog code: https://arxiv.org/abs/2405.17893 (I haven't read the paper, only the abstract.)
-
The biggest problem for beginners is #16. Any other effort is just a distraction from it. One needs to master non-termination first.
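To make this concrete, here is a textbook illustration (my own sketch, not taken from the issue) of the kind of non-termination meant: the definition below yields all the correct answers and then loops forever on backtracking, because the recursive clause calls `ancestor/2` with a fresh variable.

```prolog
% Illustrative only: correct answers first, then non-termination.
parent(tom, bob).
parent(bob, ann).

ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- ancestor(X, Z), parent(Z, Y).

% ?- ancestor(tom, Who).
%    Who = bob
% ;  Who = ann
% ;  ... loops forever, producing no further answers ...
```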
-
PS: I have to apologize; this discussion page is about an LLM AI "for training beginners on Prolog and through syntax errors", whereas I was writing (as always) about "a beginner in programming in the Prolog world". My apologies. And thank you...
-
For anyone here who saw my talk (I'm still working on getting the slides together), you'll know that an LLM is only as good as the dataset it is trained on. We could absolutely make one that tutors people on pure Prolog or Scryer Prolog, or answers questions about the common library, if (and only if) we create the right dataset (unclear whether that is even possible!). Based on the presentation he gave, @UWN probably has the best dataset in the world for such a purpose, but that's not what he created his Prolog training tool GUPU for. As a community effort, though, making such a dataset would be the only hurdle to building the LLM we want.
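To give a rough idea of what a single record of such a dataset could look like (purely hypothetical; `tutoring_example/3` and its field names are made up here), one entry might pair a problematic snippet with the explanation a tutor would give:

```prolog
% Purely hypothetical sketch of one dataset record, written as a Prolog fact.
tutoring_example(
    snippet('append([], Ls, Ls) :-'),
    diagnosis('The base case needs no body; write it as the fact append([], Ls, Ls).'),
    topic(syntax_error)
).
```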
-
#302 would accelerate the localization of a syntax error very cheaply. As for training data, the data is very biased: I myself "harvest" the logfiles for misspellings and unfitting names every semester, thus causing such edits from then on. And the syntax is pretty much restricted (roughly: goals and non-terminals must be on a single line), which rules out many of the harder-to-spot errors. And then, it's all in German...
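As an example of the harder-to-spot kind (a made-up snippet, not from the logfiles): a comma where a period was intended. The parser keeps reading past the stray comma, so the syntax error is only reported at the following clause, lines away from the actual mistake.

```prolog
% Hypothetical illustration: the stray ',' below makes the parser read both
% clauses as a single term containing (:-)/2 twice, so the error is reported
% at the next clause rather than at the real mistake.
p(X) :-
    q(X),
    r(X),          % <- should end with '.' here

s(X) :-
    t(X).
```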
-
Ah, yes, I should clarify -- we can have an LLM that gives general conversation around Prolog, more like an interactive Q&A. It could not be used to help find logical errors without running Prolog on the backend and possibly handing the work off to GUPU, or doing some kind of genetic programming to try to find logical errors. We would actually need the architecture discussed in my talk, and the LLM part would be the easiest (and least useful) part of the system; the Prolog that finds the logical errors would be the most difficult part.
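To sketch what "running Prolog on the backend" could mean in the very simplest case (my own sketch, not the architecture from the talk, and the names `my_len/2` and `test_case/2` are invented): run the learner's predicate against a few reference cases and hand the mismatches to whatever front end does the explaining.

```prolog
% A learner's buggy attempt (adds 2 instead of 1):
my_len([], 0).
my_len([_|Ls], N) :- my_len(Ls, M), N is M + 2.

% Hypothetical reference cases: a goal and whether it should succeed.
test_case(my_len([], 0), true).
test_case(my_len([1,2,3], 3), true).
test_case(my_len([1], 2), false).

% Collect the cases whose outcome differs from the expectation.
failed_cases(Failed) :-
    findall(Goal-Expected,
            ( test_case(Goal, Expected),
              ( \+ \+ call(Goal) -> Got = true ; Got = false ),
              Got \== Expected
            ),
            Failed).

% ?- failed_cases(Fs).
%    Fs = [my_len([1,2,3],3)-true, my_len([1],2)-false].
```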
-
Off topic, again... but LLM and Prolog style together... An ACM SIGPLAN SPLASH 2024 talk by Erik Meijer: From AI Software Engineers to AI Knowledge Workers https://www.youtube.com/live/_VF3pISRYRc?&t=15429 It is about eliminating the programming work, so to speak, in a safe way, with LLM AI combined with a Prolog-like internal language (a.k.a. neuro-symbolic computing?) generated from natural language input. Some "slogans" from the presentation: "Tools are Relations/Predicates/Facts"; "Primitive tools are facts"; "Chains are conjunctive goals"; "Derived tools are Horn Clauses".
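For what it's worth, here is how I read those slogans in plain Prolog (my own interpretation, not notation from the talk): primitive tools become facts, and a derived tool is a Horn clause whose body chains them as a conjunctive goal.

```prolog
% "Primitive tools are facts":
flight(vienna, paris).
flight(paris, lisbon).

% "Derived tools are Horn Clauses"; "Chains are conjunctive goals":
connection(From, To) :-
    flight(From, Via),
    flight(Via, To).
```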
-
An idea occurred to me today. Would it be possible, or even a good idea, to augment the Scryer Prolog playground with a chatbot AI that could coach beginners through their syntax errors? The effort involved in training an AI up to a good level of quality might preclude realistically doing this.