Fixing Pong and Pong2 #107

Open · wants to merge 3 commits into dev

Conversation

kripper

kripper commented May 17, 2024

I was trying to fix pong and pong2, but it seems there are some issues with the inference engine in TemporalRules.py during induction_composition().

@maxeeem
Collaborator

maxeeem commented May 17, 2024

First off, thank you for your contribution. There are a couple of things to note here.

  1. There is another PR open currently for the game of pong, so we'll want to consolidate efforts if possible. I'm not sure how current these examples are and whether they're using some outdated calls. @bowen-xu would know more.
  2. Recently, a new inference engine was merged that does not use the TemporalRules.py file in the same way. It is still a work in progress, but if you want to run the old-style inference engine, you can do so by specifying "foo" (any string will do) in Reasoner.py on line 49. We will expose it later as a command line option, but currently you have to pass it to the initializer.

@kripper
Author

kripper commented May 17, 2024

Hi,

I saw the other pong PR, but it looked more like a new client-server based implementation.
I was expecting the old Atari pong examples to work and so I started fixing things.
Maybe it's not bad to keep the old versions because of their relative simplicity.
Is it easy to get them to work with the new inference engine?

I guess you mean the kanren engine, right?

I would like to test the new engine; I'm particularly interested in evaluating it on more complex logic problems and measuring its performance.

Do you have some complex example where the reasoner has to deal with many possible paths?
For example, a simplified chess game with just a few pieces, or planning only a small number of moves.

@bowen-xu
Collaborator

@kripper The old Pong.py/Pong2.py files were never successfully completed and are outdated. @MoonWalker1997 Tangrui is working on the pong game demo.

@maxeeem
Collaborator

maxeeem commented May 17, 2024

I saw the other pong PR, but it looked more like a new client-server based implementation. I was expecting the old Atari pong examples to work and so I started fixing things. Maybe it's not bad to keep the old versions because of their relative simplicity. Is it easy to get them to work with the new inference engine?

Arguably, the KanrenEngine is a lot simpler, with only 3 files and a declarative rule style in nal-rules.yml.
That said, the old engine is still in there and fully connected, so it can be enabled by passing anything other than "kanren" in the Reasoner init.

I guess you mean the kanren engine, right?

Yes

Do you have some complex example where the reasoner has to deal with many possible paths? For example, a simplified chess game with just a few pieces, or planning only a small number of moves.

This is ongoing work and in the coming months more of the existing test cases and examples will be enabled with the new engine.

It is perhaps worth noting that the job of the inference engine is to take input tasks and apply rules to them, producing conclusions if the rules apply. Coordinating and selecting which tasks to use as inputs at any given cycle is the job of the control mechanism (Reasoner.py). So the choice of inference engine should not matter as much in the multi-step case, which has more to do with control.
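The division of labor described above can be sketched in a few lines of toy Python. This is illustrative only, not PyNARS code: the engine is a pure function from premises to conclusions, while the control loop owns task selection, which is why multi-step behavior is a control-side concern.

```python
# Illustrative sketch (not PyNARS code) of the engine/control split.

def engine_apply(rules, premise1, premise2):
    """Engine: apply every rule that matches; return conclusions."""
    out = []
    for rule in rules:
        conclusion = rule(premise1, premise2)
        if conclusion is not None:
            out.append(conclusion)
    return out

def deduction(p1, p2):
    """One toy rule: transitivity over (subject, predicate) pairs."""
    (s1, o1), (s2, o2) = p1, p2
    return (s1, o2) if o1 == s2 else None

def control_cycle(task_queue, memory, rules):
    """Control: choose which premises the engine sees this cycle."""
    task = task_queue.pop(0)
    for belief in list(memory):
        for conclusion in engine_apply(rules, task, belief):
            if conclusion not in memory:
                memory.add(conclusion)
                task_queue.append(conclusion)
    memory.add(task)

memory = {("bird", "animal")}
queue = [("robin", "bird")]
while queue:
    control_cycle(queue, memory, [deduction])
print(("robin", "animal") in memory)  # True
```

Swapping `engine_apply` for a different engine leaves the multi-step loop untouched, which is the point being made here.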

@kripper
Author

kripper commented May 17, 2024

Hi @maxeeem

  1. I checked miniKanren and it looks quite nice and easy to interface with.
    Is it also well suited to support the complete NAL specs?

  2. Does the inference engine prioritize inferences that lead to the goal or to desired states?

  3. Is there any complex multi-step example I can take a look at to see how things work internally?

@maxeeem
Collaborator

maxeeem commented May 17, 2024

Is it also well suited to support the complete NAL specs?

It supports all of NAL rules and theorems, so yes.
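To make the "declarative rule" idea concrete: a kanren-style engine represents a rule as data (patterns containing logic variables) and applies it by unification. The toy unifier below is a sketch of that approach, not the PyNARS implementation; here, uppercase strings stand in for variables, and `"inh"` is an arbitrary tag for an inheritance statement.

```python
# Toy unification sketch (not PyNARS): a rule is data, applied by
# unifying its premise patterns against concrete statements.

def unify(pattern, term, subst):
    """Extend subst so pattern matches term, or return None."""
    if isinstance(pattern, str) and pattern.isupper():  # logic variable
        if pattern in subst:
            return unify(subst[pattern], term, subst)
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    """Replace variables in term using the substitution."""
    if isinstance(term, str) and term in subst:
        return substitute(subst[term], subst)
    if isinstance(term, tuple):
        return tuple(substitute(t, subst) for t in term)
    return term

# NAL-1 deduction written as data: {<M --> P>, <S --> M>} |- <S --> P>
ded = (("inh", "M", "P"), ("inh", "S", "M"), ("inh", "S", "P"))

def apply_rule(rule, p1, p2):
    prem1, prem2, concl = rule
    subst = unify(prem1, p1, {})
    if subst is not None:
        subst = unify(prem2, p2, subst)
    return substitute(concl, subst) if subst is not None else None

print(apply_rule(ded, ("inh", "bird", "animal"), ("inh", "robin", "bird")))
# ('inh', 'robin', 'animal')
```

Adding a rule then means adding a pattern triple, not writing new matching code, which is roughly the appeal of the nal-rules.yml style mentioned above.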

  2. Do the inferred conclusions not affect the states that Reasoner.py considers achievable and their desire value computation?

Truth and Desire value calculations are indeed part of the inference engine; however, those are unchanged from the original implementation. Budget and Priority calculations are done in the Reasoner and are part of the control mechanism.
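For reference, truth functions are pure arithmetic on the premise truth values, which is why they sit naturally on the engine side. The NAL deduction truth function from the NAL literature is f = f1·f2, c = f1·f2·c1·c2:

```python
# NAL deduction truth function: a pure engine-side computation.
def truth_deduction(f1, c1, f2, c2):
    f = f1 * f2            # frequency of the conclusion
    c = f1 * f2 * c1 * c2  # confidence of the conclusion
    return f, c

f, c = truth_deduction(1.0, 0.9, 1.0, 0.9)
print(round(f, 2), round(c, 2))  # 1.0 0.81
```

Note there is no notion of priority or budget anywhere in this calculation; that bookkeeping lives in the control mechanism.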

  3. Is there any complex multi-step example I can take a look at to see how things work internally?

Not at the moment, no. At least we have not tested them. Currently, only single-step inference has been tested with the new inference engine and you can take a look at those test cases (test_NAL1 through 8).

We plan on adding multi-step examples from opennars in the coming months as we continue working on this new implementation. Simple cases like this one should already work but have not been tested and may give you different truth values for the conclusion or require changes to the memory and control mechanism.

@kripper
Author

kripper commented May 17, 2024

Awesome.

Does the inference engine prioritize inferences that somehow may lead to the goal or to desired states?

@maxeeem
Collaborator

maxeeem commented May 17, 2024

Does the inference engine prioritize inferences that somehow may lead to the goal or to desired states?

If I understand the question correctly then no: the inference engine is given tasks and applies rules to them, so anything to do with priority is in the domain of the control mechanism.

@kripper
Author

kripper commented May 18, 2024

If I understand the question correctly then no: the inference engine is given tasks and applies rules to them, so anything to do with priority is in the domain of the control mechanism.

Maybe it could be useful to prioritize inferences that lead to answering questions and/or achieving goals.

One thing I noticed in PyNARS that I didn't see in ONA was that questions didn't generate an answer immediately, but took some steps until matching some inferred rule. I didn't check the code, but it seems as if it was buffering questions and maybe guiding the inference engine to solve them ASAP.

And I believe there is also a bug that generates multiple answers for the same question when one 1) asks a question and then 2) tells the engine to process 10 steps.

@maxeeem
Collaborator

maxeeem commented May 18, 2024

Maybe it could be useful to prioritize inferences that lead to answering questions and/or achieving goals.

Absolutely, and that's precisely the role of the control mechanism, which is an area of active research :) ONA has a different approach; PyNARS is closer to OpenNARS in that sense and is slated to be released as OpenNARS 4 later this year. There are quite possibly still some bugs in attention/budget that aren't allocating resources in the best manner, but ultimately there is no single right answer here; different control strategies will be better suited to different use cases. What should be present in all versions, however, is that the system adapts over time.

One thing I noticed in PyNARS that I didn't see in ONA was that questions didn't generate an answer immediately, but took some steps until matching some inferred rule. I didn't check the code, but it seems as if it was buffering questions and maybe guiding the inference engine to solve them ASAP.

I can't speak for ONA, but the OpenNARS control mechanism can indeed take a few cycles before answering a question. Cycles can be allocated to observing the buffer, for example, or to picking a concept from memory that is unrelated to your question (for a number of reasons, one being that the Bag is non-deterministic and the system has many goals and questions at any given moment). Obviously, if the latency is too high, that would indicate a problem, but again, the main goal of the project is for the system to learn over time.

And I believe there is also a bug that generates multiple answers for the same question when one 1) asks a question and then 2) tells the engine to process 10 steps.

This may actually be expected; OpenNARS displays a similar tendency. Once a question is answered, it is not deleted from memory, and as the system continues to run, a better answer to the question could be found and reported. If you're seeing the same answer printed multiple times, that may indicate an issue with resource allocation and warrant a look at our budget and priority functions.
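Both behaviors (answers arriving after several cycles, and the same question being re-answered with a better result) fall out of priority-weighted sampling. The following is a toy illustration, not the PyNARS Bag; the concept names, priorities, and confidence increments are made up:

```python
# Toy Bag: sampling is weighted by priority, so the concept relevant
# to an open question is not guaranteed to be picked on any given
# cycle, and later picks can yield a better (re-reported) answer.
import random

random.seed(0)

bag = [("swan", 0.6), ("robin", 0.3), ("tiger", 0.1)]  # (concept, priority)

def sample(bag):
    names, weights = zip(*bag)
    return random.choices(names, weights=weights)[0]

cycles = 0
best_confidence = 0.0
while best_confidence < 0.9:          # the open question concerns "robin"
    cycles += 1
    if sample(bag) == "robin":
        # pretend inference on this concept yields a better answer;
        # each improvement would be reported again for the same question
        best_confidence = min(0.9, best_confidence + 0.45)
print(cycles)  # > 2 whenever other concepts were sampled in between
```

Raising "robin"'s priority shortens the expected latency, which is exactly the kind of resource-allocation decision the budget functions make.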
