Socially Trustable Drones: Human Centered Robotic System Design

This research focuses on modeling humans' perception of safety in close proximity to flying co-robots. A fast, automated framework is used to generate these models: a virtual reality (VR) testbed in which subjects' reactions to flying co-robots are measured. A biometric data acquisition system records each subject's skin conductance, heart rate, and head tilt, which serve as inputs to a recurrent neural network (RNN) that learns the subject's perception of safety. The experimental framework provides precise measurement of the stimuli the subject receives from the VR environment, together with time-stamped acquisition of skin conductance and heart rate. The following video shows more details about the VR simulator we built for this research.


[Embedded video: VR simulator demonstration]
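Because the biometric streams are sampled at different rates, they must be aligned on a common timeline before they can be fed to a model as one input sequence. The sketch below shows one way to do this with linear interpolation; the sampling rates, signal shapes, and function names are illustrative assumptions, not the project's actual pipeline.

```python
# Hedged sketch: aligning asynchronously sampled biometric streams
# (skin conductance, heart rate) onto one shared timeline by linear
# interpolation. All rates and signal values below are synthetic.
import numpy as np

def align_streams(t_common, streams):
    """Interpolate each (timestamps, values) stream onto t_common."""
    return np.stack(
        [np.interp(t_common, t, v) for t, v in streams], axis=-1
    )

# Assumed rates: skin conductance at 32 Hz, heart rate at 1 Hz
t_sc = np.arange(0, 10, 1 / 32)
sc = 2.0 + 0.1 * np.sin(t_sc)        # microsiemens (synthetic)
t_hr = np.arange(0, 10, 1.0)
hr = 70 + 5 * np.cos(0.5 * t_hr)     # beats per minute (synthetic)

t_common = np.arange(0, 10, 1 / 32)  # resample both streams to 32 Hz
X = align_streams(t_common, [(t_sc, sc), (t_hr, hr)])
print(X.shape)  # (320, 2): one time-aligned feature vector per step
```

The resulting array `X` can then be windowed into sequences for the RNN, with one row per time step and one column per sensor channel.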


Based on the collected data, the RNN is trained to quantify a subject's arousal in response to a given trajectory. Fig. 1 shows an example from the validation dataset. Although our framework is streamlined to accelerate data acquisition, the current RNN architecture requires a far larger data pool to learn successfully, which is why the prediction in Fig. 1 does not capture every feature of the skin conductance signal. Current work therefore focuses on the fundamental limitations of machine learning techniques trained on the small, limited datasets typical of complex behaviors such as human models.


Fig. 1. Prediction of arousal for a test trajectory in the validation dataset
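To make the modeling idea concrete, the following is a minimal numpy sketch of the kind of recurrent model described above: an Elman-style RNN that maps a sequence of stimulus features (here, assumed to be drone distance and speed) to a predicted arousal signal. The weights, layer sizes, and input features are illustrative assumptions, not the trained model from Fig. 1.

```python
# Minimal Elman-style RNN forward pass mapping trajectory features to a
# per-time-step arousal estimate. Weights are random: this sketches the
# model structure only, not the trained network.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 8                     # assumed feature / state sizes

W_xh = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
w_hy = rng.normal(scale=0.3, size=n_hidden)

def predict_arousal(seq):
    """Run the RNN over seq (T x n_in); return arousal per time step."""
    h = np.zeros(n_hidden)
    out = []
    for x in seq:
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent state update
        out.append(w_hy @ h)              # scalar arousal readout
    return np.array(out)

# Synthetic trajectory: drone approaching at constant speed
T = 100
seq = np.stack([np.linspace(3.0, 0.5, T),   # distance to drone (m)
                np.full(T, 1.0)], axis=-1)  # speed (m/s)
y_hat = predict_arousal(seq)
print(y_hat.shape)  # (100,)
```

In the actual study the target for training would be the measured skin conductance, and the recurrent cell would typically be a gated variant (e.g., LSTM) rather than this plain tanh cell.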


We are trying to answer the following questions:

  • Given the structure of the RNN, how many training samples are required to obtain a sufficiently generalized model of human behavior?
  • What RNN architecture is advantageous when training samples are scarce? (e.g., is a deep neural network suitable for such applications?)
  • How can prior knowledge of human models from psychology be incorporated into the RNN's learning/training algorithm?
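The first question can be probed empirically with a learning-curve experiment: train on progressively larger subsets and watch the validation error shrink. The sketch below uses a ridge regressor on synthetic data as a cheap stand-in for the RNN; the model, data generator, and sizes are all assumptions for illustration.

```python
# Learning-curve sketch: validation error vs. number of training
# samples. A closed-form ridge regressor on synthetic data stands in
# for the RNN; dimensions and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
d = 20                                    # assumed feature dimension
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

X_val, y_val = make_data(500)

def val_error(n_train, lam=1e-2):
    """Fit ridge regression on n_train samples; return validation MSE."""
    X, y = make_data(n_train)
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return np.mean((X_val @ w - y_val) ** 2)

for n in [25, 50, 100, 400]:
    print(n, round(val_error(n), 4))      # error drops as n grows
```

For the RNN the same protocol applies, but each point on the curve is far more expensive, which is exactly why sample-complexity bounds for a given architecture matter here.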

This video gives a brief overview of this research project:
[Embedded video: project overview]


Publications and talks:

  1. H.-J. Yoon and N. Hovakimyan, “VR Environment for the Study of Collocated Interaction between Small UAVs and Humans”, AUA Data Science Workshop, Yerevan, Armenia, May 2017
  2. H.-J. Yoon and C. Widdowson, “Regression of Human Physiological Arousal Induced by Flying Robots Using Deep Recurrent Neural Networks,” Coordinated Science Lab Student Conference 2017, University of Illinois at Urbana-Champaign
  3. T. Marinho, A. Lakshmanan, V. Cichella, C. Widdowson, H. Cui, R. M. Jones, B. Sebastian, and C. Goudeseune, “VR Study of Human-Multicopter Interaction in a Residential Setting,” IEEE Virtual Reality (VR), March 2016, p. 331.
  4. T. Marinho, C. Widdowson, A. Oetting, A. Lakshmanan, H. Cui, N. Hovakimyan, R. F. Wang, A. Kirlik, A. LaViers, and D. Stipanovic, “Carebots: Prolonged Elderly Independence Using Small Mobile Robots,” Mechanical Engineering, vol. 138, no. 9, pp. S8–S13, September 2016.
  5. H.-J. Yoon, T. Marinho, and N. Hovakimyan, “Trajectory Generation for a Flying Robot in the Proximity of Humans,” IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, September 2017 (Submitted)
  6. C. Widdowson, H.-J. Yoon, V. Cichella, R. F. Wang, and N. Hovakimyan, “VR Environment for the Study of Collocated Interaction between Small UAVs and Humans,” 8th International Conference on Applied Human Factors and Ergonomics, Los Angeles, July 2017 (Submitted)


NICER Robotics Awarded CSL Video of the Month
NICER Robotics – Virtual Reality with L1 Adaptive Control Quad

To read more about the NSF EAGER award that funds this project, click here.