Fixing Pong and Pong2 #107
base: dev
Conversation
First off, thank you for your contribution. There’s a couple things to note here.
Hi, I saw the other pong PR, but it looked more like a new client-server based implementation. I guess you mean the kanren engine, right? I would like to test the new engine; I'm currently particularly interested in evaluating it on more complex logic problems and assessing its performance. Do you have a complex example where the reasoner has to deal with many alternative paths?
@kripper The old Pong.py/Pong2.py files were not successfully completed and are outdated. @MoonWalker1997 Tangrui
Arguably, KanrenEngine is a lot simpler, with only 3 files and a declarative rule file, nal-rules.yml.
Yes
This is ongoing work, and in the coming months more of the existing test cases and examples will be enabled with the new engine. It is perhaps worth noting that the job of the inference engine is to take input tasks and apply rules to them, producing conclusions when the rules apply. Coordinating and selecting which tasks to use as inputs at any given cycle is the job of the control mechanism -- Reasoner.py. So the choice of inference engine should not matter as much in the multi-step case, which has more to do with control.
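As a rough illustration of that division of labor, a sketch like the following may help (hypothetical class and method names for illustration only, not the actual PyNARS API):

```python
# Sketch of the engine/control split described above: the inference engine
# is stateless and only maps the inputs it is given to conclusions, while
# the control mechanism (Reasoner) decides which inputs it sees each cycle.

class InferenceEngine:
    """Applies rules to whatever task/belief pair it is handed."""
    def __init__(self, rules):
        self.rules = rules

    def step(self, task, belief):
        conclusions = []
        for rule in self.rules:
            result = rule(task, belief)  # a rule returns a conclusion or None
            if result is not None:
                conclusions.append(result)
        return conclusions

class Reasoner:
    """Control: selection and resource allocation live here, not in the engine."""
    def __init__(self, engine, memory):
        self.engine = engine
        self.memory = memory

    def cycle(self):
        task, belief = self.memory.select()      # attention/budget decisions
        for conclusion in self.engine.step(task, belief):
            self.memory.add(conclusion)          # derived tasks return to memory
```

Swapping the inference engine leaves `Reasoner` untouched, which is why multi-step behavior depends mostly on control.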
Hi @maxeeem
It supports all NAL rules and theorems, so yes.
Truth and Desire value calculations are indeed part of the inference engine; however, those are unchanged from the original implementation. Budget and Priority calculations are done in the Reasoner and are part of the control mechanism.
Not at the moment, no. At least we have not tested them. Currently, only single-step inference has been tested with the new inference engine; you can take a look at those test cases (test_NAL1 through 8). We plan on adding multi-step examples from opennars in the coming months as we continue working on this new implementation. Simple cases like this one should already work, but they have not been tested and may give you different truth values for the conclusion or require changes to the memory and control mechanism.
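For context on the truth-value calculations mentioned above, here is a sketch of one standard NAL truth function, deduction, as defined in the NAL literature (the code itself is illustrative, not PyNARS's implementation):

```python
# Standard NAL deduction truth function:
# from <A --> B>. %f1;c1% and <B --> C>. %f2;c2%, derive <A --> C>. %f;c%

def truth_deduction(f1, c1, f2, c2):
    """Returns the (frequency, confidence) of a deductive conclusion."""
    f = f1 * f2               # conclusion frequency
    c = f1 * f2 * c1 * c2     # conclusion confidence
    return f, c
```

For example, two fully-confident premises with default confidence 0.9 yield a conclusion with frequency 1.0 and confidence 0.81, reflecting that derived knowledge is weaker than its premises.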
Awesome. Does the inference engine prioritize inferences that may somehow lead to the goal or to desired states?
If I understand the question correctly, then no: the inference engine is given tasks and applies rules to them, so anything to do with priority is in the domain of the control mechanism.
Maybe it could be useful to prioritize inferences that lead to answering questions and/or achieving goals. One thing I noticed in PyNARS that I didn't see in ONA was that questions didn't generate an answer immediately, but took some steps until matching some inferred rule. I didn't check the code, but it seems as if it was buffering questions and maybe guiding the inference engine to solve them ASAP. I also believe there is a bug that generates multiple answers for the same question when one 1) asks a question and then 2) tells the engine to process 10 steps.
Absolutely, and that's precisely the role of the control mechanism, which is an area of active research :) ONA has a different approach; PyNARS is closer to OpenNARS in that sense and is slated to be released as OpenNARS 4 later this year. There may well still be some bugs in attention/budget that aren't allocating resources in the best manner, but ultimately there is not a single right answer here; different control strategies will be better suited to different use cases. What should be present in all versions, however, is that the system adapts over time.
I can't speak for ONA, but the OpenNARS control mechanism can indeed take a few cycles before answering a question. Cycles can be allocated to observing the buffer, for example, or to picking a concept from memory that is unrelated to your question (for a number of reasons, one being that Bag is non-deterministic and the system has many goals and questions at any given moment). Obviously, if the latency is too high this will indicate a problem, but, again, the system learning over time is the main goal of the project.
This may be expected, actually, and OpenNARS displays a similar tendency. You see, once a question is answered it is not deleted from memory, and as the system continues to run a better answer to the question could be found and reported. If you're seeing the same answer printed multiple times, then this may indicate an issue with resource allocation and warrant a look at our budget and priority functions.
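To make the non-determinism of Bag concrete, here is a toy model of priority-biased selection (an assumption for illustration, not PyNARS's actual data structure): higher-priority items are chosen more often, but low-priority items still occasionally get cycles, which is one reason a question may not be answered immediately.

```python
import random

# Toy Bag: items carry a priority in (0, 1], and selection is a weighted
# random draw, so the highest-priority item is likely -- not guaranteed --
# to be picked on any given cycle.

class Bag:
    def __init__(self):
        self.items = {}  # key -> priority

    def put(self, key, priority):
        self.items[key] = priority

    def take(self):
        # roulette-wheel selection weighted by priority
        keys = list(self.items)
        weights = [self.items[k] for k in keys]
        return random.choices(keys, weights=weights, k=1)[0]
```

Over many draws the high-priority item dominates, but any single cycle can pick something unrelated to the question you just asked.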
I was trying to fix `pong` and `pong2`, but it seems there are some issues with the inference engine in `TemporalRules.py` during `induction_composition()`.