Acquiring everyday manipulation skills through games – Details

WARNING: Deprecated by michael and patrick – michael wants the detail section in the overview page (16.10.2014)

In order for future robots to successfully accomplish more and more complex manipulation tasks, they are required to be action-aware. They need to possess a model which discloses how the effects of their actions depend on the way those actions are executed. For example, a robot making pancakes should understand the effects of its actions: the outcome of pouring pancake mix onto the oven depends on the position of the container and the way it is held, and whether sliding a spatula under a pancake damages it depends on the angle and dynamics of the push. In artificial intelligence, these are considered naive physics reasoning capabilities.

Games Setup

Figure: Setup of the virtual-environment game

Our current gaming setup is depicted in the figure above. The system is equipped with a sensor infrastructure that allows interaction with the virtual world by tracking the player's hand motions and gestures and mapping them onto the robotic hand in the game. We tested two different tracking setups. The first uses a magnetic-sensor-based controller, which returns the position and orientation of the hand, together with a dataglove for the finger-joint positions. The second uses two 3D camera sensors mounted on a frame, which yield the pose and skeleton of the tracked hand.
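The first setup delivers two separate data streams that must be fused before they can drive the virtual hand: a 6-DOF pose from the magnetic tracker and finger-joint angles from the dataglove. The sketch below illustrates one way this fusion could look; the names (`HandState`, `fuse_tracker_and_glove`) and the calibration offset are illustrative assumptions, not the project's actual interface.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HandState:
    """Hypothetical combined hand state driving the virtual robot hand."""
    position: Tuple[float, float, float]            # meters, world frame
    orientation: Tuple[float, float, float, float]  # quaternion (x, y, z, w)
    finger_joints: Tuple[float, ...]                # joint angles in radians

def fuse_tracker_and_glove(tracker_pose, glove_angles,
                           calibration_offset=(0.0, 0.0, 0.0)):
    """Merge the magnetic tracker's 6-DOF pose with the dataglove's
    joint angles into one hand state (assumed interface, for illustration)."""
    position, quaternion = tracker_pose
    # Apply a fixed translational offset between tracker and hand frame.
    calibrated = tuple(p + o for p, o in zip(position, calibration_offset))
    return HandState(position=calibrated,
                     orientation=quaternion,
                     finger_joints=tuple(glove_angles))

# Example: tracker reports the hand at (0.1, 0.2, 0.9) with identity rotation,
# the glove reports 20 joint angles.
state = fuse_tracker_and_glove(((0.1, 0.2, 0.9), (0.0, 0.0, 0.0, 1.0)),
                               [0.1] * 20)
```

In the camera-based setup the same `HandState` could instead be filled from the skeleton returned by the hand tracker, so the game logic stays independent of the sensing hardware.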

Description of the data

This dataset contains the following data:

  • Semantic memory (OWL log) of all performed actions, their parameterizations, and their outcomes
  • Physical transformation (TF) data of all the objects in the virtual world
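The TF stream records timestamped poses for every object in the virtual world, so an object's trajectory can be reconstructed at arbitrary query times by interpolating between recorded samples. A minimal sketch of such a lookup, assuming the log has been parsed into a sorted list of `(timestamp, position)` samples per object (the data layout here is an assumption, not the dataset's actual file format):

```python
import bisect

def interpolate_position(trajectory, t):
    """Linearly interpolate an object's position at time t.

    trajectory: time-sorted list of (timestamp, (x, y, z)) samples,
    as might be extracted from the TF log (assumed layout).
    """
    times = [sample[0] for sample in trajectory]
    i = bisect.bisect_left(times, t)
    if i == 0:                      # before the first sample: clamp
        return trajectory[0][1]
    if i == len(trajectory):        # after the last sample: clamp
        return trajectory[-1][1]
    (t0, p0), (t1, p1) = trajectory[i - 1], trajectory[i]
    alpha = (t - t0) / (t1 - t0)
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

# Example: a pancake sliding from the origin to (1, 0, 0) over one second.
traj = [(0.0, (0.0, 0.0, 0.0)), (1.0, (1.0, 0.0, 0.0))]
midpoint = interpolate_position(traj, 0.5)
```

Rotations would need quaternion interpolation (slerp) rather than the componentwise blend shown here for positions.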