14 Jul 2021

7 Principles of Efficient Human Robot Interaction

Lessons from evaluating neglect tolerance and interface efficiency are compiled into a set of principles for efficient interaction. Emphasis is placed on designing efficient interfaces, but many of the principles require robot autonomy levels that support them.

Source: Seven principles of Efficient Human Robot Interaction (pdf)


The principles

  1. Implicitly switch interfaces and autonomy modes

    Attention appears to be a major bottleneck in cognitive information processing.

    A well designed interface and robot autonomy level should support proper attention management. For example, if a user is not attending to a relevant sensor and keeps running into an obstacle, the interface could highlight the information. Also, if one robot in a team needs attention, it could change colors, flash, or pop to the front of the attention queue.

  2. Let the robot use natural human cues

    People have extensive experience in accomplishing tasks and interacting with other people. With this experience comes a set of natural expressions.

    In terms of cognitive information processing, "naturalness" means that well-calibrated mental models are available, well-known sensory stimuli receive attention, and well-practiced use of short-term memory is employed. Thus, naturalness is compatible with effective interaction because it invokes well-practiced response generation.

  3. Manipulate the world instead of the robot

    In terms of cognitive information processing, interacting with the world requires a mental model, and interacting with the robot requires a separate mental model. If the robot is transparent to the user, then only one mental model is required. This means that working memory is less likely to be overtaxed with extra data in short-term memory and extra mental models.

  4. Manipulate the relationship between the robot and world

    In terms of cognitive information processing, the relationship between robot and world must be known before a human can plan what the robot should do. Directly presenting information about this relationship allows the human to use only the mental model that generates behavior, rather than also using the mental model that translates sensor data into such a representation. Furthermore, since this translation imposes a burden on short-term memory, removing it frees up short-term memory resources.

  5. Let people manipulate presented information

    The interface should support interaction with the information presented.

    In general, if information is presented to a user, the user should be able to manipulate it directly and thereby guide the robot or make progress on a task. In terms of the cognitive information processing model, if information can be manipulated directly, there is no need for a mental model that translates it into an action that occurs in a different modality.

  6. Externalize memory

    One way to simplify the cognitive load associated with navigation (and thereby support multi-tasking) is to externalize memory.

    The user need not remember where all obstacles occurred once they are out of the camera's field of view, and the user need not integrate range information with the camera data.

  7. Help people manage attention

    It is important to help users schedule attention. Since neglect time is stochastic, a user would otherwise have to schedule attention for the worst-case scenario. To facilitate a service schedule based on the average case, an interface and robot autonomy mode that support an UNDO would be beneficial. Furthermore, such assistance would help users properly calibrate trust and thereby avoid misuse and abuse.
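The "attention queue" mentioned in principles 1 and 7 can be sketched as a small priority queue: robots that report higher urgency pop to the front, so the operator services the neediest robot first. This is a minimal illustration, not the paper's implementation; the robot names and urgency scores are invented.

```python
import heapq
import itertools

class AttentionQueue:
    """Toy scheduler: robots that need attention pop to the front."""

    def __init__(self):
        self._heap = []                    # entries: (priority, tie-breaker, robot)
        self._counter = itertools.count()  # stable order for equal priorities

    def report(self, robot, urgency):
        # heapq is a min-heap, so negate urgency: most urgent comes out first.
        heapq.heappush(self._heap, (-urgency, next(self._counter), robot))

    def next_robot(self):
        # The robot at the front of the attention queue is served next.
        return heapq.heappop(self._heap)[2]

queue = AttentionQueue()
queue.report("robot-a", urgency=2)   # drifting slightly off course
queue.report("robot-b", urgency=9)   # about to hit an obstacle
queue.report("robot-c", urgency=5)   # low battery

print(queue.next_robot())  # → robot-b, the most urgent, is served first
```

In a real interface the "pop to the front" would be visual (the robot changes color or flashes), but the scheduling logic is the same.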
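Principle 6 can likewise be sketched as a hypothetical obstacle memory: the interface, not the user, stores obstacle positions reported by range sensors, so the operator need not recall them once they leave the camera's field of view. The grid coordinates and query radius here are assumptions for illustration.

```python
class ObstacleMemory:
    """Externalized memory: the interface remembers obstacles for the user."""

    def __init__(self):
        self._obstacles = set()  # world-frame grid cells where obstacles were seen

    def observe(self, cell):
        # Record an obstacle cell reported by a range sensor.
        self._obstacles.add(cell)

    def nearby(self, robot_cell, radius=2):
        # Return remembered obstacles within `radius` cells of the robot,
        # even if they are no longer in the camera's field of view.
        rx, ry = robot_cell
        return {(x, y) for (x, y) in self._obstacles
                if abs(x - rx) <= radius and abs(y - ry) <= radius}

memory = ObstacleMemory()
memory.observe((3, 4))   # seen earlier, now behind the robot
memory.observe((9, 9))   # seen in another corridor

print(memory.nearby((2, 4)))  # → {(3, 4)}: the interface recalls it
```

Overlaying such remembered obstacles on the camera view also integrates range and camera data for the user, as the principle suggests.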

Tags

  • AI