CHI 2018 SIG: Redefine Natural User Interface

At CHI 2018, Paul Fu from Alibaba Group, James Landay from Stanford University, Michael Nebeling from the University of Michigan, Haipeng Mi from Tsinghua University, and Chen Zhao from Alibaba Group co-hosted the Natural UI special interest group (SIG) session. Over 50 researchers, designers, and practitioners participated. Three topics were discussed during the session:

  • Definition of Natural User Interface; 
  • Scenarios, use cases, and opportunities for natural user interface; 
  • Research frameworks and technologies that need to be developed.

During the discussion, we realized that defining the problem itself is a very challenging task. After 1.5 hours of discussion, debate, and sharing, here is the proposed report. Feel free to comment and make suggestions on improvements.

Definition

Natural UI is a human-computer interaction system that leverages humans’ natural perception, attention, cognition, and emotional capabilities.

Characteristics of Natural UI: 

  • Natural capabilities can differ greatly from individual to individual and from group to group. For example, using a pen might not be a natural capability for an uneducated population.
  • Natural UI should be adaptive based on scenario, environment, culture, and the user’s natural capabilities.
  • Natural UI should be able to handle ambiguity, which is an inherent characteristic of human interaction (see the sketch after this list).
  • Natural UI should be able to learn and improve over time.
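
To make these characteristics concrete, here is a minimal, purely illustrative sketch (in Python) of how a natural UI might combine ambiguous multi-modal input, adapt to an individual user’s capabilities, and improve over time. Every name and number in it (UserProfile, resolve_intent, the modality weights and confidence values) is a hypothetical assumption for illustration, not something proposed or implemented at the SIG.

    # Illustrative sketch only: a toy multi-modal input resolver that adapts to a
    # user's capabilities and asks for clarification when input stays ambiguous.
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        """Per-user weights for each modality, adapted over time (hypothetical)."""
        modality_weights: dict = field(
            default_factory=lambda: {"speech": 1.0, "gesture": 1.0, "touch": 1.0}
        )

        def update(self, modality: str, success: bool) -> None:
            # Simple adaptation: reinforce modalities that lead to successful interactions.
            delta = 0.1 if success else -0.1
            self.modality_weights[modality] = max(
                0.1, self.modality_weights.get(modality, 1.0) + delta
            )

    def resolve_intent(inputs, profile, threshold=0.6):
        """inputs: list of (modality, intent, confidence) tuples from recognizers.
        Returns the best intent, or None when the evidence is too ambiguous,
        in which case the UI should ask the user to clarify."""
        scores = {}
        for modality, intent, confidence in inputs:
            weight = profile.modality_weights.get(modality, 1.0)
            scores[intent] = scores.get(intent, 0.0) + weight * confidence
        if not scores:
            return None
        best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
        # Ambiguity check: accept only if the top intent clearly dominates.
        return best_intent if best_score / sum(scores.values()) >= threshold else None

    # Example: the speech "put it there" is vague, but a pointing gesture disambiguates it.
    profile = UserProfile()
    observations = [
        ("speech", "move_object", 0.5),
        ("gesture", "move_object", 0.8),
        ("gesture", "select_object", 0.3),
    ]
    intent = resolve_intent(observations, profile)
    print(intent or "Ask the user to clarify")
    profile.update("gesture", success=True)  # learn which modality worked for this user

The sketch captures the characteristics above in miniature: per-user weights make the interface adaptive, the dominance threshold treats ambiguity as an expected case rather than an error, and the update step lets the system improve over time.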

Group Exercise Notes

In One Word

  • Cross Contextual
  • Biometrics
  • Peripheral-less
  • Unbounded
  • Tangible
  • Instinctive 

Examples

  • VR & AR
  • Kinect
  • Touch
  • Mirror Reflection

Most Natural

  • Automatic doors
  • Amazon Go
  • Touch Screen
  • Motion Sensing
  • Auto lock

Unnatural Experiences

  • TSA systems in American airports
  • Beam telepresence
  • Voice interface in public
  • Forced syntax
  • Programming
  • Snapchat
  • Amazon Alexa for non-native speakers 
  • Musical instruments

Natural UI Usage Scenarios 

  • Learning about physical objects (touch, smell)
  • A companion or friendly assistant while cooking
  • Exploring data with speech, gesture, touch
  • Sharing a trip experience (e.g., the smell of Yellowstone)
  • Recounting the story in my dream
  • Chat robot
  • Prepare a meeting room at workplace using voice command
  • Order pizza with voice & screen 
  • Haptic output or rendering
  • Eye tracking for object selection 

Technologies Needed

  • NLP, vision, gesture, typing, and multi-modal interaction technology
  • Context awareness
  • Understanding ambiguity 
  • Social and/or cultural naturalness
  • Natural UI that adapts to a person’s capabilities
  • Learn naturalness from users
  • Coordinated multi-modal interfaces
  • Conscious and subconscious level gestures
  • Ask vs. prompt balance
  • More natural speech interaction
  • Simplicity and robustness
  • Ambient display 
  • System with the capability to learn

This SIG will continue after the initial meeting. We hope to help build a multidisciplinary network that encourages communication and collaboration in research and practice. We have created a LinkedIn group at https://www.linkedin.com/groups/13594488. Through this group, we can bring together more universities, research institutes, and companies. We expect this effort to promote greater awareness of natural user interfaces, which will eventually help advance the field.
