IROS 2018: BDSR Workshop

Workshop on Latest Advances in Big Activity Data Sources for Robotics & New Challenges

 

Workshop Information

Date: October 1st, 2018

Location: Madrid, Spain

Room: 1.L1 (Monaco)

Deadline for paper and poster submissions: September 3rd, 2018

Deadline for camera-ready versions: September 20th, 2018

Download link for contributed papers

Organizers:

Asil Kaan Bozcuoğlu, M.Sc., University of Bremen

Dr.-Ing. Karinne Ramirez-Amaro, Technical University of Munich

Prof. Dr. Gordon Cheng, Technical University of Munich

Prof. Michael Beetz, PhD, University of Bremen

 

Workshop Objectives

Recently, we have witnessed robots starting to execute complex, human-level tasks such as making popcorn, baking pizza, and carrying out chemical experiments. Although these executions are milestones in themselves, robots are still limited in flexibility, speed, and adaptability. To address these limitations, we believe that big data sources containing activity data from robots, human tracking, and virtual reality play an important role. Having a big activity data source at hand can help robots in many ways, such as learning motion parameterizations, adapting to different conditions, and generalizing their existing knowledge. Although many applications are starting to make use of big data, the research community is still in the phase of “re-inventing the wheel” by designing new data structures, collecting similar data, and implementing interfaces between data and learning/control routines. Our main goal in this workshop is to gather interested researchers from among the IROS attendees and take a step towards the standardization of research tools and data formats, thereby strengthening joint research efforts.

We believe that data coming from different agents should reside in a similar format so that it can be combined and used together. On the other hand, each source surely has unique aspects. For instance, robotic activity data usually carries annotations from the control and perception routines, but we have no access to such “brain dumps” in human tracking data or in virtual reality (VR). Conversely, we can detect force-dynamic events and the ground truth in simulation and VR environments, in contrast to real-world executions. Thus, one of our objectives in this workshop is to discuss and seek an answer to the questions “Is it possible to come up with a generic data format to cover all these aspects? If so, how can we design such a format?”
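To make the question concrete, here is a purely illustrative Python sketch of what a source-agnostic activity record could look like: a shared core of timestamped body and object poses, plus optional source-specific extensions. All names and fields are hypothetical assumptions for discussion, not a format proposed by the workshop.

```python
# A purely illustrative sketch of a source-agnostic activity record.
# All names (ActivityRecord, AgentKind, ...) are hypothetical; this is
# not an existing or proposed standard, only a way to frame the question.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class AgentKind(Enum):
    ROBOT = "robot"            # execution logs with control/perception annotations
    HUMAN_TRACKING = "human"   # motion capture; no access to internal "brain dumps"
    VR = "vr"                  # simulated: force-dynamic events and ground truth available


@dataclass
class Pose:
    position: tuple          # (x, y, z) in meters
    orientation: tuple       # unit quaternion (x, y, z, w)


@dataclass
class ActivityRecord:
    agent: AgentKind
    timestamp: float                       # seconds since episode start
    body_poses: dict[str, Pose]            # link/joint name -> pose, shared by all sources
    object_poses: dict[str, Pose]          # tracked objects in the scene
    # Source-specific extras live in optional fields rather than separate formats:
    control_annotations: Optional[dict] = None      # robot-only: planner/controller state
    perception_annotations: Optional[dict] = None   # robot-only: detections, segmentations
    ground_truth_events: list = field(default_factory=list)  # VR/simulation-only
```

One possible answer, then, is a common core plus optional extensions; the open question for the workshop is whether such a core can be made expressive enough for all three sources.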

A more specific sub-problem is the variety of virtual reality engines used by roboticists. The available VR engines present different input/output devices and capture position and orientation with different tracking technologies. The development of VR software must therefore comply with demanding quality standards and timing constraints imposed by the needs of sensory simulation and direct 3D interaction. Working applications combine different packages and implement case-specific solutions. In every VR setting, activity data is represented and stored in a different format, so virtual scenarios cannot easily be integrated and interpreted uniformly. To this end, we plan to analyze the existing virtual reality set-ups from the accepted papers and assess the needs of the research community. An important question to ask is “Can we agree on a pseudo-standard VR system for robotics, like Gazebo for 3D simulation?”

Overall, the participants will gain insights into the state of the art from the presentations of the invited speakers and the authors of the papers accepted at this workshop. We foresee that the keynotes from invited speakers with recognized expertise in the field will lead to valuable discussions. Since we plan to assess the community’s needs, we will encourage every participant to actively communicate and discuss possible ways towards a collaborative research effort in the dedicated discussion slot. In addition, we will provide poster space so that the authors of every accepted paper can explain their work in detail.

Topics of Interest

– Big Data Sources for Robotics

  • Architectures
  • Cloud-based Big Data Sources
  • Motion databases

– Virtual Reality in Robotics Applications

  • Designing VR-based Games with Purposes
  • Activity Recognition and Recording in VR Frameworks
  • Learning and Reasoning from VR

– Representations of Activities and Motions in Datasets

– Learning from Human-tracking

– Representations of Activity Data inside Symbolic Knowledge Bases

Schedule

Time             Event             Comment
9:00 – 9:20      Opening           Opening remarks by the organizers
9:30 – 10:15     Invited Talk 1    Prof. Takano
10:15 – 11:00    Invited Talk 2    Prof. Mombaur
11:00 – 11:30    Coffee break
11:30 – 12:15    Invited Talk 3    Prof. Aksoy
12:20 – 13:20    Lightning Talks   Accepted papers (6 papers)
13:30 – 14:30    Lunch break
14:30 – 15:00    Poster session
15:00 – 15:45    Invited Talk 4    Prof. Beetz
15:45 – 16:30    Invited Talk 5    Dr. Ramirez-Amaro
16:30 – 17:00    Coffee break
17:00 – 17:45    Invited Talk 6    Prof. Asfour
17:45 – 18:15    Discussion
18:15 – 18:30    Final remarks
18:30            End

 

Invited Speakers

Prof. Dr. Tamim Asfour    (Title: “The KIT Whole-Body Human Motion Database and the KIT Motion-Language Dataset”) Confirmed

Abstract:

We first present the KIT Whole-Body Human Motion Database, a large-scale database of whole-body human motion in which the motion data captures not only the motions of the human subject but also the positions and motions of the objects with which the subject interacts. We present the methods and tools used for a unifying representation of captured human motion, such as the reference kinematics and dynamics model of the human body, the Master Motor Map (MMM), together with procedures and techniques for the systematic recording, annotation, and classification of human motion data and for contact-based segmentation of whole-body human motion. Second, we will present the KIT Motion-Language Dataset, a large, open, and extensible dataset linking human motion and natural language, in which annotations of whole-body human motions in natural language are obtained through crowd-sourcing. Finally, we show how the database and dataset are used for the generation of multi-contact whole-body motion for humanoid robots, as well as for generating text from motion and motion from text by learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks.

Bio:

Tamim Asfour is Full Professor of Humanoid Robotics at the Institute for Anthropomatics and Robotics, High Performance Humanoid Technologies, at the Karlsruhe Institute of Technology (KIT). His research focuses on the engineering of high-performance 24/7 humanoid robots and on the mechano-informatics of humanoids, i.e., the synergetic integration of mechatronics, informatics, and artificial intelligence methods into humanoid robot systems able to predict, act, and interact in the real world. Tamim is the developer of the ARMAR humanoid robot family. His research reaches out to neighboring areas through large-scale national and European interdisciplinary projects combining robotics with machine learning and computer vision.

Prof. Michael Beetz, PhD   (Title: “Virtual Reality-based Mental Simulations for Robots and Hybrid Reasoning”) Confirmed

Assist. Prof. Dr. Eren Erdal Aksoy (Title: “Action Semantics for Big Activity Data Sources”) Confirmed

Abstract:

In my talk, I will present a novel deep neural network architecture for representing robot experiences in an episodic-like memory, which facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model 1) encodes observed actions in a latent vector space and, based on this latent encoding, 2) infers action categories, 3) reconstructs the original frames, and 4) predicts future frames. We evaluate the proposed model on two different large-scale action datasets. Results show that conceptually similar actions are mapped into the same region of the latent vector space, i.e., the conceptual similarity of videos is reflected by the proximity of their vector representations in the latent space. Based on this contribution, we introduce an action matching and retrieval mechanism and evaluate its performance and generalization capability on a real humanoid robot in an action execution scenario.
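The abstract outlines a single latent code per observed clip feeding three heads. As a rough illustration, here is a minimal PyTorch sketch of such an encoder with classification, reconstruction, and prediction heads; all layer choices (GRUs, dimensions) are assumptions for illustration, not the architecture presented in the talk.

```python
# A minimal PyTorch sketch of an encoder-decoder episodic memory in the
# spirit described above: one latent vector per clip, with heads for
# action classification, frame reconstruction, and future-frame prediction.
import torch
import torch.nn as nn


class EpisodicMemorySketch(nn.Module):
    def __init__(self, frame_dim=1024, latent_dim=256, num_actions=10, horizon=4):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(frame_dim, latent_dim, batch_first=True)
        self.classifier = nn.Linear(latent_dim, num_actions)       # infer action category
        self.reconstructor = nn.GRU(latent_dim, frame_dim, batch_first=True)
        self.predictor = nn.GRU(latent_dim, frame_dim, batch_first=True)

    def forward(self, frames):
        # frames: (batch, time, frame_dim) pre-extracted frame features
        _, h = self.encoder(frames)              # h: (1, batch, latent_dim)
        z = h.squeeze(0)                         # latent code = the episodic "memory"
        logits = self.classifier(z)
        # Feed the latent code at every step to decode a frame sequence.
        z_seq = z.unsqueeze(1).repeat(1, frames.size(1), 1)
        recon, _ = self.reconstructor(z_seq)     # reconstruct the observed frames
        z_fut = z.unsqueeze(1).repeat(1, self.horizon, 1)
        future, _ = self.predictor(z_fut)        # predict future frames
        return z, logits, recon, future
```

Retrieval as described in the abstract would then amount to nearest-neighbor search over the latent codes z, e.g. by cosine similarity between a query clip and stored experiences.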

Prof. Dr. Katja Mombaur (Title: “Benchmarking schemes and databases for humanoid performance evaluation in the Eurobench project”) Confirmed

Abstract:

Standardized benchmarking schemes will play an increasingly important role in evaluating robot performance and predicting robots’ suitability for real-world applications. The Eurobench project, led by CSIC Madrid, aims at defining benchmarking standards for humanoid and wearable robots with a special focus on locomotion, setting up two benchmarking facilities for general use, and implementing a database of collected benchmarking data for future reference. The project builds, among others, on expertise in benchmarking and performance evaluation gathered in five previous European projects (KoroiBot, H2R, BioMot, WalkMan, and Balance). The European robotics community will be able to participate in the Eurobench project in the context of two subcalls: (a) contributing to the benchmarking setups and measurements, and (b) performing benchmarking experiments in the established facilities. In this talk, I will in particular highlight the planned work on benchmarking humanoid locomotion against the background of previous research and discuss the most important application scenarios as well as the suitability of different key performance indicators.

 

Dr.-Ing. Karinne Ramirez-Amaro (Title: “A Semantic Reasoning Method for the Recognition of Human Activities”) Confirmed

Abstract:

Autonomous robots are expected to learn new skills and to re-use past experiences in different situations as efficiently, intuitively, and reliably as possible. Robots need to adapt to different sources of information, for example videos, robot sensors, virtual reality, etc. To advance research on the understanding of human movements in robotics, learning methods that adapt to different datasets are therefore needed. In this talk, I will introduce a novel learning method that generates compact and general semantic models to infer human activities. This learning method allows robots to obtain and determine a higher-level understanding of a demonstrator’s behavior via semantic representations. First, low-level information is extracted from the sensory data; then a meaningful semantic description, the high level, is obtained by reasoning about the intended human behaviors. The introduced method has been assessed on different robots, e.g. the iCub, REEM-C, and TOMM, with different kinematic chains and dynamics. Furthermore, the robots use different perceptual modalities, under different constraints and in several scenarios ranging from making a sandwich to driving a car, assessed in different domains (home-service and industrial scenarios). One important aspect of our approach is its scalability and adaptability towards new activities, which can be learned on demand. Overall, the presented compact and flexible solutions are suitable to tackle complex and challenging problems for autonomous robots.
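To illustrate the two-stage idea of low-level extraction followed by high-level semantic reasoning, here is a toy Python sketch; the predicates, thresholds, and rules are invented for illustration and are not the semantic models presented in the talk.

```python
# A toy sketch of the two-stage idea: extract low-level predicates from
# sensory data, then apply compact semantic rules to infer the activity.
# Thresholds and the rule set are illustrative assumptions only.
def low_level_predicates(hand_speed, object_in_hand, object_acted_on):
    """Map continuous sensor readings to symbolic predicates."""
    return {
        "moving": hand_speed > 0.05,        # m/s threshold (assumed)
        "holding": object_in_hand is not None,
        "acting_on": object_acted_on is not None,
    }


def infer_activity(p):
    """Compact rule-based reasoning over the predicates."""
    if not p["moving"]:
        return "idle" if not p["holding"] else "hold"
    if p["holding"]:
        return "put_or_use" if p["acting_on"] else "take"
    return "reach"


# Example: a moving hand holding a knife above some bread.
preds = low_level_predicates(hand_speed=0.2, object_in_hand="knife",
                             object_acted_on="bread")
print(infer_activity(preds))   # -> "put_or_use"
```

The appeal of this structure is that only the first stage is sensor-specific; the semantic rules can stay the same across robots and perceptual modalities.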

Bio:

Dr. Karinne Ramirez-Amaro is a post-doctoral researcher at the Chair for Cognitive Systems (ICS) at the Technical University of Munich (TUM), Germany. She completed her Ph.D. (summa cum laude) at the Department of Electrical and Computer Engineering at TUM under the supervision of Prof. Gordon Cheng, and she joined ICS in January 2013. From October 2009 until December 2012, she was a member of the Intelligent Autonomous Systems (IAS) group headed by Prof. Michael Beetz. She received a Master's degree in Computer Science (with honours) from the Center for Computing Research of the National Polytechnic Institute (CIC-IPN) in Mexico City, Mexico, in 2007. In December 2015, Dr. Ramirez-Amaro received the Laura Bassi award, granted by TUM and the Bavarian government, to conduct a one-year research project. For her doctoral thesis, she was awarded the prize for excellent doctoral degrees for female engineering students, granted by the state of Bavaria, Germany, in September 2015. In addition, she was granted a Ph.D. research scholarship by DAAD – CONACYT, and she received the Google Anita Borg scholarship in 2011. Currently, she is involved in the EU FP7 project Factory-in-a-day and in the DFG-SFB project EASE. Her research interests include artificial intelligence, semantic representations, assistive robotics, expert systems, and human activity recognition and understanding.

 

Prof. Dr. Wataru Takano (Title: “On Human Activity Dataset for Humanoid Robots”) Confirmed

Abstract:

Human behaviors come in a variety of forms and styles. We handle this variety in the real world by breaking behaviors down or putting them together in linguistic form. Symbolic representations of behaviors underlie our intelligence, and their technical implementation is required for a humanoid robot that is integrated into our daily life. This talk presents contributions on encoding human demonstrations into stochastic models. These motion models can be connected to their relevant action descriptions in a stochastic manner. The connection allows the humanoid robot to understand human activities as descriptive sentences and to synthesize human-like actions from sentence commands. Additionally, the motion models are extended to encode physical properties, more specifically profiles of joint angles and joint torques. The motion models can then compute a pair of joint angles and torques that satisfies physical consistency. This computation leads to the design of a force controller from human demonstrations. Experiments show its validity on a robotic arm drawing on a board in response to the reaction force.
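As a rough illustration of connecting stochastic motion models to action words, here is a toy Python sketch in which each motion model scores an observed motion and a conditional word table yields a description; the features, means, and probabilities are invented assumptions, not the models from the talk.

```python
# A toy sketch of linking stochastic motion models to action words:
# each model scores an observation (a Gaussian-like similarity), and
# P(word | model) tables yield a description. All numbers are invented.
import numpy as np

# Assumed: mean feature vector per motion model (e.g., joint-angle statistics).
motion_models = {
    "reach": np.array([0.8, 0.1]),
    "draw":  np.array([0.2, 0.9]),
}
# Assumed: P(word | motion model), learned from annotated demonstrations.
word_given_model = {
    "reach": {"reach": 0.7, "grasp": 0.3},
    "draw":  {"draw": 0.8, "write": 0.2},
}

def recognize(observation):
    """Pick the motion model with the highest similarity score."""
    scores = {m: float(np.exp(-np.sum((observation - mu) ** 2)))
              for m, mu in motion_models.items()}
    return max(scores, key=scores.get)

def describe(observation):
    """Most probable word under the recognized model."""
    model = recognize(observation)
    words = word_given_model[model]
    return model, max(words, key=words.get)

print(describe(np.array([0.25, 0.85])))   # -> ('draw', 'draw')
```

Running the same tables in the reverse direction, from a sentence to the most probable motion model, is what would let the robot synthesize motion from a verbal command.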

Bio:

Wataru Takano is a Specially Appointed Professor at the Center for Mathematical Modeling and Data Science, Osaka University. He received B.S. and M.S. degrees in precision engineering from Kyoto University, Japan, in 1999 and 2001, and a Ph.D. in Mechano-Informatics from the University of Tokyo, Japan, in 2006. He was a Lecturer at the University of Tokyo from 2009 to 2015 and an Associate Professor there from 2015 to 2017. From 2010 to 2014 he was a researcher on the project "Information Environment and Humans" (PRESTO, Japan Science and Technology Agency). He was awarded the Best Paper Award of the Journal of Artificial Intelligence and a Young Researcher award of the Robotics Society of Japan. His field of research includes robotics and artificial intelligence.

Acknowledgements

This full-day workshop is supported by the DFG Collaborative Research Center 1320 “Everyday Activity Science and Engineering” (EASE).