IROS 2018: BDSR Workshop

Workshop on Latest Advances in Big Activity Data Sources for Robotics & New Challenges

Workshop Information

Date: October 1st, 2018

Location: Madrid, Spain

Deadline for paper and poster submissions: August 20th, 2018

Deadline for camera-ready versions: September 10th, 2018

Organizers:

Asil Kaan Bozcuoğlu, M.Sc., University of Bremen

Dr.-Ing. Karinne Ramirez-Amaro, Technical University of Munich

Prof. Dr. Gordon Cheng, Technical University of Munich

Workshop Objectives

Recently, we have witnessed robots beginning to execute complex, human-level tasks such as making popcorn, baking pizza, and carrying out chemical experiments. Although these executions are milestones in themselves, robots are still limited in flexibility, speed, and adaptability. To address these limitations, we believe that big data sources containing activity data from robots, human tracking, and virtual reality play an important role. Having a big activity data source on site can help robots in many ways, such as learning motion parameterizations, adapting to different conditions, and generalizing their existing knowledge. Although many applications are starting to make use of big data, the research community is still in the phase of “re-inventing the wheel”: designing new data structures, collecting similar data, and implementing interfaces between data and learning/control routines. Our main goal in this workshop is to gather interested researchers from among the IROS attendees and to take a step towards the standardization of research tools and data formats, strengthening joint research efforts.

We believe that data coming from different agents should be stored in a similar format so that it can be combined and used together. On the other hand, each source surely has unique aspects. For instance, robotic activity data usually carries annotations from the control and perception routines, whereas we have no access to such “brain dumps” in human-tracking data or in virtual reality (VR). Conversely, we can detect force-dynamic events and the ground truth in simulation and VR environments, in contrast to real-world executions. Thus, one of our objectives in this workshop is to discuss and seek answers to the questions: “Is it possible to come up with a generic data format that covers all these aspects? If so, how can we design such a format?”
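
To make the question concrete, the following is a minimal sketch in Python of what such a source-agnostic activity record could look like: a shared core (agent, action, timing, poses) plus optional slots for the source-specific extras discussed above. All field names are hypothetical illustrations, not a proposed standard.

```python
# A minimal sketch of a source-agnostic activity record. All field names are
# hypothetical; a real standard would need community agreement.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ActivityRecord:
    agent_type: str        # "robot", "human", or "vr_avatar"
    action: str            # e.g. "pick", "pour", "open-drawer"
    start_time: float      # seconds since the start of the episode
    end_time: float
    poses: list = field(default_factory=list)  # sampled 6-DoF end-effector/hand poses
    # Source-specific extras: control/perception "brain dumps" exist only for
    # robots; force-dynamic events and ground truth only for simulation/VR.
    control_log: Optional[dict] = None
    ground_truth_contacts: Optional[list] = None

# The same "pick" action recorded from a robot and from a VR demonstration:
robot_pick = ActivityRecord("robot", "pick", 12.4, 15.1,
                            control_log={"events": []})
vr_pick = ActivityRecord("vr_avatar", "pick", 3.2, 4.0,
                         ground_truth_contacts=[("hand", "mug")])
```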

A more specific sub-problem is the variety of virtual reality engines used by roboticists. The available VR engines support different input/output devices and capture position and orientation using tracking technologies. The development of VR software must therefore comply with demanding quality standards and timing constraints imposed by the needs of sensory simulation and direct 3D interaction. Working applications combine different packages and implement specific solutions. In every VR setting, activity data is represented and stored in a different format, so virtual scenarios cannot easily be integrated and interpreted uniformly. To this end, we plan to analyze the existing virtual reality set-ups from the accepted papers and assess the needs of the research community. An important question to ask is: “Can we agree on a pseudo-standard VR system for robotics, like Gazebo for 3D simulations?”

Overall, participants will gain insights into the state of the art from the presentations of the invited speakers and the authors of the accepted papers. We foresee that the keynotes from invited speakers with well-known expertise in the field will lead to valuable discussions. Since we plan to assess the community’s needs, we will encourage every participant to actively communicate and discuss possible paths towards a collaborative research effort in the dedicated discussion slot. In addition, the authors of every accepted paper will have a place to present a poster explaining their work in detail.

Topics of Interest

– Big Data Sources for Robotics

  • Architectures
  • Cloud-based Big Data Sources
  • Motion Databases

– Virtual Reality in Robotics Applications

  • Designing VR-based Games with Purposes
  • Activity Recognition and Recording in VR Frameworks
  • Learning and Reasoning from VR

– Representations of Activities and Motions in Datasets

– Learning from Human Tracking

– Representations of Activity Data inside Symbolic Knowledge Bases

Call for Papers

Please download the call for papers here.

Invited Speakers

Prof. Dr. Tamim Asfour (Title: “Large-Scale Human Motion Database for Robotics”) Confirmed

Prof. Michael Beetz, PhD (Title: “Virtual Reality-based Mental Simulations for Robots and Hybrid Reasoning”) Confirmed

Dr. Tetsunari Inamura Confirmed

Prof. Dr. Katja Mombaur (Title: “Benchmarking schemes and databases for humanoid performance evaluation in the Eurobench project”) Confirmed

Abstract:

Standardized benchmarking schemes will play an increasingly important role in evaluating robot performance and predicting robots’ suitability for real-world applications. The Eurobench project, led by CSIC Madrid, aims at defining benchmarking standards for humanoid and wearable robots with a special focus on locomotion, setting up two benchmarking facilities for general use, and implementing a database of collected benchmarking data for future reference. The project builds, among others, on expertise in benchmarking and performance evaluation gathered in five previous European projects (KoroiBot, H2R, BioMot, WalkMan, and Balance). The European robotics community will be able to participate in the Eurobench project in the context of two sub-calls: (a) contributing to the benchmarking setups and measurements, and (b) performing benchmarking experiments in the established facilities. In this talk, I will in particular highlight the planned work on benchmarking humanoid locomotion against the background of previous research and discuss the most important application scenarios as well as the suitability of different key performance indicators.

Dr.-Ing. Karinne Ramirez-Amaro (Title: “Crowd-Sourcing Human Activity Data from Virtual Reality”) Confirmed

Prof. Dr. Wataru Takano (Title: “On Human Activity Dataset for Humanoid Robots”) Confirmed

Abstract:

Human behaviors come in a variety of forms and styles. We handle their variety in the real world by breaking them down or putting them together in linguistic form. Symbolic representations of behaviors underlie our intelligence, and their technical implementation is required for a humanoid robot that is integrated into our daily life. This talk presents contributions on encoding human demonstrations into stochastic models. These motion models can be connected to their relevant action descriptions in a stochastic manner. This connection allows the humanoid robot to understand human activities as descriptive sentences and to synthesize human-like actions from sentence commands. Additionally, the motion models are extended to encode physical properties, more specifically profiles of joint angles and joint torques. The motion models can compute a pair of joint angles and torques that satisfies physical consistency. This computation leads to the design of a force controller from human demonstrations. Experiments show its validity on a robotic arm drawing on a board in response to the reaction force.
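
To give a rough feel for what “connecting motion models to action descriptions in a stochastic manner” can mean, here is a toy Python illustration, not Prof. Takano’s actual implementation: each action word is paired with a motion model, and recognition picks the word whose model best explains an observed trajectory. The Gaussian trajectory models and all names and numbers are hypothetical stand-ins.

```python
# Toy illustration: recognize an action word from a motion by picking the
# motion model with the highest likelihood. Gaussian trajectory models stand
# in for the stochastic motion models; everything here is hypothetical.
import numpy as np

# Hypothetical motion models: a mean joint-angle trajectory per action word.
motion_models = {
    "reach": np.linspace(0.0, 1.0, 10),
    "wave":  np.sin(np.linspace(0.0, np.pi, 10)),
}

def log_likelihood(observation, model, sigma=0.1):
    # Log-probability of the observed trajectory under an isotropic Gaussian
    # centered on the model trajectory (constant terms dropped).
    return -np.sum((observation - model) ** 2) / (2.0 * sigma ** 2)

def motion_to_word(observation):
    # "Understanding": choose the action description that best explains the motion.
    return max(motion_models, key=lambda w: log_likelihood(observation, motion_models[w]))

observed = np.linspace(0.0, 1.05, 10)  # a slightly noisy reaching motion
print(motion_to_word(observed))        # -> "reach"
```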

Bio:

Wataru Takano is a Specially Appointed Professor at the Center for Mathematical Modeling and Data Science, Osaka University. He received B.S. and M.S. degrees in precision engineering from Kyoto University, Japan, in 1999 and 2001, and a Ph.D. degree in Mechano-Informatics from the University of Tokyo, Japan, in 2006. He was a Lecturer at the University of Tokyo from 2009 to 2015 and an Associate Professor there from 2015 to 2017. He was a researcher on the Project of Information Environment and Humans, PRESTO, Japan Science and Technology Agency, from 2010 to 2014. He was awarded the Best Paper Award of the Journal of Artificial Intelligence and the Young Researcher Award of the Robotics Society of Japan. His fields of research include robotics and artificial intelligence.

Schedule (Tentative)

Time            Event                  Comment

9:30 – 9:45     Opening                Opening remarks by the organizers
9:45 – 10:00    Invited talk 1         Prof. Asfour
10:00 – 10:30   Coffee break
10:30 – 11:30   Invited talks 2 & 3    Prof. Beetz & Prof. Inamura
11:30 – 12:00   Lightning talks
12:00 – 12:30   Poster session
12:30 – 13:30   Lunch break
13:30 – 14:30   Invited talks 4 & 5    Prof. Mombaur & Dr. Ramirez-Amaro
14:30 – 15:00   Lightning talks
15:00 – 15:30   Coffee break           Participants will have a chance to visit the posters
15:30 – 16:00   Invited talk 6         Prof. Takano
16:00 – 16:15   Lightning talks
16:15 – 17:15   Discussion
17:15 – 17:30   Final remarks
17:30           End

Acknowledgements

This full-day workshop is supported by the DFG Collaborative Research Center 1320: Everyday Activity Science and Engineering (EASE).