Modelling designators for logging #318

Draft · wants to merge 4 commits into master
Conversation

sasjonge (Collaborator)
This is a draft PR open for discussion on modelling designators. Types of queries we want to support:

  • Which designators were executed for this action?
  • What is the most specialized designator for this action?
  • What is the most abstract designator for this action?
  • Into which designators did this version of the designator resolve?
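To make these queries concrete, here is a minimal sketch over a flat in-memory log; the DesignatorRecord structure and all helper names are hypothetical, not existing PyCRAM or SOMA vocabulary:

from dataclasses import dataclass, field

@dataclass
class DesignatorRecord:
    action_id: str                  # the action this designator was executed for
    abstraction_level: int          # 0 = most abstract, higher = more specialized
    resolved_into: list = field(default_factory=list)  # records it resolved into

def designators_for_action(log, action_id):
    """Which designators were executed for this action?"""
    return [d for d in log if d.action_id == action_id]

def most_specialized(log, action_id):
    """What is the most specialized designator for this action?"""
    return max(designators_for_action(log, action_id),
               key=lambda d: d.abstraction_level)

def most_abstract(log, action_id):
    """What is the most abstract designator for this action?"""
    return min(designators_for_action(log, action_id),
               key=lambda d: d.abstraction_level)

def resolutions_of(record):
    """Into which designators did this version of the designator resolve?"""
    return record.resolved_into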

hawkina (Collaborator) commented Sep 3, 2024

Here is some info; it's badly formatted, sorry, but I hope it helps ^^
Generally we can differentiate between an ActionDesignatorDescription, its Resolution, and its Performance (e.g., in PyCRAM the code looks like ActionDesignatorDescription.resolve().perform()).

  • In most cases we can summarize resolve().perform() as perform, since perform also executes resolve().
  • resolve fills out the designator with parameters.
  • perform actually attempts to move the robot to the designated position.
  • There is always a with semi_real_robot statement around a function, which describes whether it is a simulated or real robot. Semi-real means that we work in simulation but use Giskard for motion calculations.
  • An Object Designator can have the following fields:
    'uid', 'type', 'shape', 'shape_size', 'color', 'location', 'size', 'pose', 'pose_source', 'attribute', 'description'

However, we usually just use names and types (yes, plural, since the designator description accepts a list of those). The designator usually gets resolved by perception, which fills out as many fields as possible.
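For concreteness, creating such a description might look like the snippet below; the module paths and enum member are assumptions that may differ between PyCRAM versions, and only names/types being list-valued parameters is taken from the comment above:

# Sketch only: import paths and ObjectType member are assumptions.
from pycram.designators.object_designator import ObjectDesignatorDescription
from pycram.datastructures.enums import ObjectType

cup_desig = ObjectDesignatorDescription(names=['my_cup'],
                                        types=[ObjectType.JEROEN_CUP])
resolved = cup_desig.resolve()   # perception fills in pose, color, ... where it can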

  • Locations can now be described semantically; one parameter is enough, e.g.:
    Location(furniture_item='coffee table', room='living room')

This gets resolved (via KnowRob in this case) to a list of dict entries:

[{'Item': {'value': 'http://www.ease-crc.org/ont/SUTURO.owl#CoffeeTable',
   'link': 'iai_kitchen/coffee_table:coffee_table:table_center',
   'room': 'http://www.ease-crc.org/ont/SUTURO.owl#LivingRoom_RTKDJVBC',
   'pose': header: 
     seq: 0
     stamp: 
       secs: 1725356965
       nsecs: 826580762
     frame_id: "map"
   pose: 
     position: 
       x: 8.435593626312391
       y: -0.2637401054035722
       z: 0
     orientation: 
       x: 0.0
       y: 0.0
       z: 0.024997395914712332
       w: 0.9996875162757026}}]
  • Locations can also be resolved via costmaps.
  • Actions contain sub-actions and motions. Motions are the lowest-level objects and usually contain poses. Actions can directly refer to poses but usually contain other actions first (a minimal containment sketch follows below).
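As a containment sketch (illustrative dataclasses, not actual PyCRAM classes), the hierarchy from the last bullet could look like this:

# Illustrative containment model, not actual PyCRAM classes.
from dataclasses import dataclass, field

@dataclass
class Motion:
    kind: str            # e.g. 'move_base', 'move_arm'
    pose: tuple          # motions are the lowest level and carry poses

@dataclass
class Action:
    kind: str
    sub_actions: list = field(default_factory=list)   # nested Actions
    motions: list = field(default_factory=list)       # leaf-level Motions

# A transporting action broken down into nested actions and motions
# (poses are made-up placeholders):
transport = Action('transport', sub_actions=[
    Action('navigate', motions=[Motion('move_base', (8.4, -0.26, 0.0))]),
    Action('pick_up',  motions=[Motion('move_arm', (0.5, 0.1, 0.9))]),
])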

Some examples:

Abstract descriptions

    with semi_real_robot:
        action = ActionDesignator(type='navigate',
                                  target_locations=Location(furniture_item=furniture_item,
                                                            room=room))
        action.resolve().perform()
    with semi_real_robot:
        action = ActionDesignator(type='detect',
                                  technique=PerceptionTechniques.ALL,
                                  object_designator=ObjectDesignatorDescription(types=[ObjectType.JEROEN_CUP]))
        action = action.resolve().perform()

Questions

  • I am not sure whether prints of the resolution will help?
  • The goal would be to have a transporting action described with one designator, which then gets resolved into other designators.
  • We can differentiate between a plan, which contains multiple designators and describes a very high-level task; it then gets broken down into the individual Actions and Motions.
  • The result (success/failure) needs to be recorded.
  • It would be cool if one could add key-value pairs to a designator that are not predefined in the ontology. The keys are very scenario-dependent and might be used in one use case and not another. Maybe we could say one can define a key and set its type; the type can be another designator, a primitive Python type, or Pose/PoseStamped (a sketch follows below).
    Sorry for the chaos, I need to sort this, but wanted to provide something.
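To pin down the free-form, typed key-value idea from the last bullet, here is a hypothetical sketch; the class names and the declare/set registry are invented for illustration only, not existing PyCRAM or SOMA vocabulary:

# Hypothetical sketch of scenario-specific, typed key-value pairs.
from dataclasses import dataclass, field

@dataclass
class Pose:                 # stand-in for geometry_msgs Pose/PoseStamped
    x: float
    y: float
    z: float

@dataclass
class GenericDesignator:
    type: str
    key_types: dict = field(default_factory=dict)   # key -> declared value type
    values: dict = field(default_factory=dict)

    def declare(self, key, value_type):
        # a key's type can be a primitive, a Pose, or another designator
        self.key_types[key] = value_type

    def set(self, key, value):
        expected = self.key_types[key]
        if not isinstance(value, expected):
            raise TypeError(f"{key!r} expects {expected.__name__}, "
                            f"got {type(value).__name__}")
        self.values[key] = value

desig = GenericDesignator(type='navigate')
desig.declare('target_pose', Pose)
desig.set('target_pose', Pose(8.4, -0.26, 0.0))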

hawkina (Collaborator) commented Sep 5, 2024

tl;dr:
In SOMA we need a way to represent:

  • an ActionDesignator with a type parameter and a set of key-value pairs which can be anything (unfortunately)
  • the above designator gets matched/resolved into its proper type, e.g. it can become a NavigateActionDesignator; the parameters are just passed on accordingly
  • a value can also be another designator (in most cases Location or Object)
  • an ActionDesignator might contain other ActionDesignators, which then get resolved within it at perform time of the high-level designator
  • Resolution: fills the key-value pair fields with information, e.g. the values of the keys change into poses or something else more specific
  • Grounding: same as resolution, but for Location and Object designators
  • then it gets performed; the performing may contain the creation, resolution, and performance of other designators (this is where the hierarchy comes in)

More Examples
object_desig = ObjectDesignatorDescription(types=[ObjectType.CUP])
nav_location = Location(furniture_item='kitchen counter', room='kitchen')

nav_action = ActionDesignator(type='navigate', target_locations=nav_location)
nav_action.resolve().perform()

detect_action = ActionDesignator(type='detect',
                                 technique=PerceptionTechniques.ALL,
                                 object_designator=object_desig)
detect_action = detect_action.resolve().perform()

Location Designator parameters

after creation:

{'args': (),
 'semantic_poses': [],
 'poses': [],
 'pose': None,
 'kwargs': {'furniture_item': 'kitchen counter', 'room': 'kitchen'},
 'urdf_link': None}

The same designator after running location.ground():

{'args': (),
 'semantic_poses': [{'Item': {'value': 'http://www.ease-crc.org/ont/SUTURO.owl#KitchenCounter',
    'link': 'iai_kitchen/kitchen_counter:kitchen_counter:table_center',
    'room': 'http://www.ease-crc.org/ont/SOMA.owl#Kitchen_HUGQVWLM',
    'pose': {.....}}}],
 'kwargs': {'furniture_item': 'kitchen counter', 'room': 'kitchen'},
 'urdf_link': 'iai_kitchen/kitchen_counter:kitchen_counter:table_center'}

Action

ActionDesignator(type='navigate', target_locations=nav_location)
# pre-resolution: 
{'resolve': <bound method NavigateAction.ground of <pycram.designators.action_designator.NavigateAction object at 0x7f3ef0e47970>>,
 'ontology_concept_holders': [<pycram.ontology.ontology_common.OntologyConceptHolder at 0x7f3f0c431c10>],
 'exceptions': {},
 'state': None,
 'executing_thread': {},
 'threads': [],
 'interrupted': False,
 'name': 'NavigateAction',
 'soma': get_ontology("http://www.ease-crc.org/ont/SOMA.owl#"),
 'target_locations': Location(pose=None)}

# post-resolution:
{'resolve': <bound method NavigateAction.ground of <pycram.designators.action_designator.NavigateAction object at 0x7f3ef0e47970>>,
 'ontology_concept_holders': [<pycram.ontology.ontology_common.OntologyConceptHolder at 0x7f3f0c431c10>],
 'exceptions': {},
 'state': None,
 'executing_thread': {},
 'threads': [],
 'interrupted': False,
 'name': 'NavigateAction',
 'soma': get_ontology("http://www.ease-crc.org/ont/SOMA.owl#"),
 'target_locations': [{POSES}]}
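Given dumps like the ones above, one straightforward way to capture them for logging is to snapshot the designator's attribute dict before and after resolution. This is only a sketch: it assumes nothing beyond what the dumps show, namely that designators keep their state in plain attributes and expose resolve():

# Sketch: snapshot a designator's state around resolution for logging.
import time

def snapshot(designator):
    return repr(vars(designator))    # attribute dict, like the dumps above

def log_resolution(designator, log):
    pre = snapshot(designator)       # pre-resolution state
    resolved = designator.resolve()  # grounding step
    post = snapshot(resolved)        # post-resolution state
    log.append({'time': time.time(), 'pre': pre, 'post': post})
    return resolved

resolution_log = []
# usage: performable = log_resolution(nav_action, resolution_log)
#        performable.perform()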
