
1: Introduction

Human-Robot Interaction (HRI) is a field of study dedicated to understanding, designing, and evaluating robotic

systems for use by or with humans. Interaction, by definition, requires communication between robots and humans.

Communication between a human and a robot may take several forms, but these forms are largely influenced by

whether the human and the robot are in close proximity to each other or not. Thus, communication and, therefore,

interaction can be separated into two general categories:

Remote interaction – The human and the robot are not co-located and are separated spatially or even temporally (for

example, the Mars Rovers are separated from earth both in space and time).

Proximate interactions – The humans and the robots are co-located (for example, service robots may be in the same

room as humans).

Within these general categories, it is useful to distinguish between applications that require mobility, physical

manipulation, or social interaction. Remote interaction with mobile robots often is referred to as teleoperation or

supervisory control, and remote interaction with a physical manipulator is often referred to as telemanipulation.

Proximate interaction with mobile robots may take the form of a robot assistant, and proximate interaction may

include a physical interaction. Social interaction includes social, emotive, and cognitive aspects of interaction. In

social interaction, the humans and robots interact as peers or companions. Importantly, social interactions with robots

appear to be proximate rather than remote. Because the volume of work in social interactions is vast, we present only

a brief survey; a more complete survey of this important area is left to future work.

In this paper, we present a survey of modern HRI. We begin by presenting key developments in HRI-related fields

with the goal of identifying critical technological and scientific developments that have made it possible for HRI to

develop as a field of its own; we argue that HRI is not simply a reframing and reformulation of previous work, but

rather a new field of scientific study. To support this argument, we identify seminal events that signal the emergence

of HRI as a field. Although we adopt a designer-centered framing of the paper, work in the field requires strong

interdisciplinary blends from various scientific and engineering fields.

After surveying key aspects in the emergence of HRI as a field, we define the HRI problem with an emphasis on

those factors of interaction that a designer can shape. We then proceed to describe the application areas that drive

much of modern HRI. Many of these problems are extremely challenging and have strong societal implications. We

group application areas into the previously mentioned two general categories, remote and proximate interactions, and

identify important, influential, or thought-provoking work within these two categories. We follow this by describing

common solution concepts and barrier problems that cross application domains and interaction types. We then briefly

identify related work from other fields involving humans and machines interacting, and summarize the paper.

Human–robot interaction is the study of interactions between humans and robots. It is often referred to as HRI by researchers. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language understanding, design, and social sciences.

Human–computer interaction



Human–computer interaction (HCI) involves the study, planning, design and uses of the interaction between

people (users) and computers. It is often regarded as the intersection of computer science, behavioral

sciences, design and several other fields of study. The term was popularized by Card, Moran, and Newell in

their seminal 1983 book, The Psychology of Human-Computer Interaction, although the authors first used the

term in 1980,[1] and the first known use was in 1975.[2] The term connotes that, unlike other tools with only

limited uses (such as a hammer, useful for driving nails, but not much else), a computer has many affordances

for use and this takes place in an open-ended dialog between the user and the computer.

Because human–computer interaction studies a human and a machine in conjunction, it draws from supporting

knowledge on both the machine and the human side. On the machine side, techniques in computer

graphics, operating systems, programming languages, and development environments are relevant. On the

human side, communication theory, graphic and industrial design disciplines, linguistics, social

sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are

relevant. Engineering and design methods are also relevant. Due to the multidisciplinary nature of HCI, people

with different backgrounds contribute to its success. HCI is also sometimes referred to as human–machine

interaction (HMI), man–machine interaction (MMI), or computer–human interaction (CHI).

Poorly designed human-machine interfaces can lead to many unexpected problems. A classic example of this

is the Three Mile Island accident, a nuclear meltdown, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster.[3][4][5] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument or throttle quadrant layouts: even though the new designs were proposed to be superior in terms of basic human–machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea actually had undesirable results.


Artificial intelligence

"AI" redirects here. For other uses, see Ai and Artificial intelligence (disambiguation).

Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer

science that develops machines and software with human-like intelligence. Major AI researchers and textbooks

define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that

perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who

coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to

communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown

up around particular institutions and the work of individual researchers. AI research is also divided by several

technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several

possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural

language processing (communication), perception and the ability to move and manipulate objects.[6] General

intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches

include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous

number of tools used in AI, including versions of search and mathematical optimization, logic, methods based

on probability and economics, and many others.

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo

sapiens—can be sufficiently well described to the extent that it can be simulated by a machine.[8] This raises

philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with

human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity.

[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.

[11] Today it has become an essential part of the technology industry and defines many challenging problems at

the forefront of research in computer science.[12]

Robotics



Robotics is the branch of technology that deals with the design, construction, operation, and application

of robots,[1] as well as computer systems for their control, sensory feedback, and information processing. These

technologies deal with automated machines that can take the place of humans in dangerous environments or

manufacturing processes, or resemble humans in appearance, behavior, and/or cognition. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.

The concept of creating machines that can operate autonomously dates back to classical times, but research

into the functionality and potential uses of robots did not grow substantially until the 20th century.[2] Throughout

history, robots have often been designed to mimic human behavior and to manage tasks in a similar fashion. Today, robotics is a rapidly growing field: as technological advances continue, the research, design, and building of new robots serve various practical purposes, whether domestic, commercial, or military. Many robots do jobs that are hazardous to people, such as defusing bombs, clearing mines, and exploring shipwrecks.

Natural language understanding



Natural language understanding is a subtopic of natural language processing in artificial intelligence that

deals with machine reading comprehension.

The process of disassembling and parsing input is more complex than the reverse process of assembling output in natural language generation, because of the occurrence of unknown and unexpected features in the input and the need to determine the appropriate syntactic and semantic schemes to apply to it, factors that are pre-determined when outputting language.


There is considerable commercial interest in the field because of its application to news-gathering, text

categorization, voice-activation, archiving and large-scale content-analysis.

Social science



Social science is an academic discipline concerned with society and the relationships among individuals within

a society. It includes anthropology, economics, political science, psychology and sociology. In a wider sense, it may often include some fields in the humanities[1] such as archaeology, history, law, and linguistics. The term may, however, be used in the specific context of referring to the original science of society, established in the 19th century: sociology (Latin: socius, "companion"; Greek λόγος, lógos, "word", "knowledge", "study"). Émile

Durkheim, Karl Marx and Max Weber are typically cited as the principal architects of modern social science by

this definition.[2]

Positivist social scientists use methods resembling those of the natural sciences as tools for understanding

society, and so define science in its stricter modern sense. Interpretivist social scientists, by contrast, may use

social critique or symbolic interpretation rather than constructing empirically falsifiable theories, and thus treat

science in its broader sense. In modern academic practice, researchers are often eclectic, using

multiple methodologies (for instance, by combining quantitative and qualitative techniques). The term social research has also acquired a degree of autonomy as practitioners from various disciplines share in its aims and methods.

Origins

Human–robot interaction has been a topic of both science fiction and academic speculation even before

any robots existed. Because HRI depends on a knowledge of (sometimes natural) human communication,

many aspects of HRI are continuations of human communications topics that are much older than

robotics per se.

The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in the early 1940s, in the short stories later collected in I, Robot. He states the Three Laws of Robotics as:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws of robotics frame the idea of safe interaction. The closer the human and the robot get, and the more intricate the relationship becomes, the greater the risk of a human being injured. Nowadays, manufacturers employing industrial robots often address this issue by not letting humans and robots share the workspace at any time. This is achieved by the extensive use of safety zones and cages: the presence of humans in the robot's workspace is completely forbidden while it is working.

With advances in artificial intelligence, autonomous robots could eventually exhibit more proactive behaviors, planning their motion in complex, unknown environments. These new capabilities keep safety as the primary issue and efficiency as a secondary one. To enable this new generation of robots, research is being done on human detection, motion planning, scene reconstruction, intelligent behavior through task planning, and compliant behavior using force control (impedance or admittance control schemes).
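To make the force-control idea concrete, below is a minimal sketch of an admittance-style control loop for a single axis. The virtual mass and damping values are illustrative assumptions, and read_external_force() and send_velocity_command() are hypothetical placeholders for a real robot interface, not any particular robot's API.

```python
# Minimal admittance-control sketch (assumption: one axis, for clarity).
# A measured external force is mapped to commanded motion through the
# virtual dynamics  M * dv/dt + D * v = F_ext, so the robot yields when pushed.

def admittance_step(v, f_ext, m=2.0, d=8.0, dt=0.01):
    """Integrate one control step and return the new commanded velocity."""
    dv = (f_ext - d * v) / m          # virtual mass-damper dynamics
    return v + dv * dt

def control_loop(read_external_force, send_velocity_command, steps=1000):
    """read_external_force and send_velocity_command are hypothetical robot hooks."""
    v = 0.0
    for _ in range(steps):
        f_ext = read_external_force()     # e.g. from a wrist force/torque sensor (N)
        v = admittance_step(v, f_ext)     # compliant velocity response (m/s)
        send_velocity_command(v)
```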

The basic goal of HRI is to define a general human model that could lead to principles and algorithms

allowing more natural and effective interaction between humans and robots. Research ranges from how

humans work with remote, tele-operated unmanned vehicles to peer-to-peer collaboration

with anthropomorphic robots.

Many in the field of HRI study how humans collaborate and interact and use those studies to motivate

how robots should interact with humans.

The goal of friendly human–robot interactions

Kismet can produce a range of facial expressions.

Robots are artificial agents with capacities of perception and action in the physical world, often referred to by researchers as the workspace. Their use has become widespread in factories, but nowadays they also tend to be found in the most technologically advanced societies in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.

These new domains of application imply closer interaction with the user. The concept of closeness is to be taken in its full meaning: robots and humans not only share the workspace but also share goals in terms of task achievement. This close interaction needs new theoretical models, on the one hand for the robotics scientists who work to improve the robots' utility, and on the other hand to evaluate the risks and benefits of this new "friend" for modern society.

With the advance of AI, research is focusing in part on the safest possible physical interaction, but also on socially correct interaction, which depends on cultural criteria. The goal is to build intuitive and easy communication with the robot through speech, gestures, and facial expressions.


Dautenhahn refers to friendly human–robot interaction as "Robotiquette", defining it as the "social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans".[1] The robot has to adapt itself to our way of expressing desires and orders, and not the contrary. But everyday environments such as homes have much more complex social rules than those implied by factories or even military environments. Thus, the robot needs perceiving and understanding capacities to build dynamic models of its surroundings. It needs to categorize objects, recognize and locate humans, and further read their emotions. The need for these dynamic capacities pushes forward every sub-field of robotics.

At the other end of HRI research, the cognitive modelling of the "relationship" between humans and robots benefits both psychologists and robotics researchers: the user studies involved are often of interest to both sides. This research endeavour thus concerns part of human society at large.

General HRI research

HRI research spans a wide range of fields, some general to the nature of HRI.

Methods for perceiving humans

Most methods intend to build a 3D model of the environment through vision. Proprioceptive sensors permit the robot to have information about its own state; this information is relative to a reference frame. Methods for perceiving humans in the environment are based on sensor information. Research on sensing components and software led by Microsoft provides useful results for extracting human kinematics (see Kinect). An example of an older technique is to use colour information, for example the fact that for light-skinned people the hands are lighter than the clothes worn. In any case, a human model specified a priori can then be fitted to the sensor data. The robot builds, or already has (depending on its level of autonomy), a 3D map of its surroundings to which the humans' locations are assigned.

A speech recognition system is used to interpret human desires or commands. By combining the information inferred from proprioception, exteroceptive sensing, and speech, the robot can estimate the human's position and state (standing, seated).
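As a purely illustrative sketch of the older colour-based cue mentioned above (not any system described here), the following marks pixels whose colour falls in a crude "skin-like" range and returns their centroid as a rough hand or face hypothesis. The RGB thresholds are assumed values for the example; a real system would fit a learned skin-colour model and then fit the a priori human model to the detected regions.

```python
import numpy as np

def skin_pixel_centroid(image):
    """image: H x W x 3 uint8 RGB array; returns (row, col) of skin-like pixels, or None."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # Crude, illustrative "skin-like" colour test: reddish pixels brighter than background.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - b) > 15)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                      # no skin-like region found
    return rows.mean(), cols.mean()      # centroid as a rough hand/face hypothesis
```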

Methods for motion planning

Motion planning in a dynamic environment is a challenge that is, for the moment, only achieved for robots with 3 to 10 degrees of freedom. Humanoid robots, or even two-armed robots that can have up to 40 degrees of freedom, are unsuited for dynamic environments with today's technology. However, lower-dimensional robots can use the potential field method to compute trajectories that avoid collisions with humans.


Cognitive models and theory of mind

A lot of data has been gathered from user studies. For example, when users encounter proactive behaviour on the part of the robot, and the robot does not respect a safety distance and penetrates the user's space, they might express fear. This varies from one person to another; only intensive experimentation can permit a more precise model.

It has been shown that when a robot has no particular use, negative feelings are often expressed. The

robot is perceived as useless and its presence becomes annoying.

In another experiment, it was observed that people tend to attribute to the robot personality characteristics that were not implemented.

Motion planning


Motion planning (also known as the "navigation problem" or the "piano mover's problem") is a term used

in robotics for the process of breaking down a desired movement task into discrete motions that satisfy

movement constraints and possibly optimize some aspect of the movement.

For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this

task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of

these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion

planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more

complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and

uncertainty (e.g. imperfect models of the environment or robot).
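To illustrate the last step described above, turning a planned waypoint into speed and turning commands for the wheels, here is a minimal sketch for a differential-drive robot. The gains, wheel base, and speed limit are made-up values, and the function name is hypothetical rather than part of any planner's API.

```python
import math

def waypoint_to_wheel_speeds(pose, waypoint, wheel_base=0.3,
                             k_v=0.5, k_w=1.5, v_max=0.5):
    """pose = (x, y, heading in rad); waypoint = (x, y). Returns (v_left, v_right) in m/s."""
    x, y, th = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - th
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
    v = min(k_v * distance, v_max)        # forward speed toward the waypoint
    w = k_w * heading_error               # turn rate to face the waypoint
    return v - w * wheel_base / 2, v + w * wheel_base / 2
```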

Motion planning has several robotics applications, such as autonomy, automation, and robot design

in CAD software, as well as applications in other fields, such as animating digital characters, video

game artificial intelligence, architectural design, robotic surgery, and the study of biological molecules.


Concepts

[Figures: an example workspace; the configuration space of a point-sized robot (white = Cfree, gray = Cobs); the configuration space of a rectangular translating robot (white = Cfree, gray = Cobs, where dark gray marks the objects and light gray marks configurations in which the robot would touch an object or leave the workspace); and examples of a valid path, an invalid path, and a road map.]

A basic motion planning problem is to produce a continuous motion that connects a start configuration S and a

goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is

described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-

dimensional) configuration space.

Configuration Space

A configuration describes the pose of the robot, and the configuration space C is the set of all possible

configurations. For example:


If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane,

and a configuration can be represented using two parameters (x, y).

If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimensional. However, C is

the special Euclidean group SE(2) = R² × SO(2) (where SO(2) is the special orthogonal group of 2D rotations), and a configuration can be represented using 3 parameters (x, y, θ).

If the robot is a solid 3D shape that can translate and rotate, the workspace is 3-dimensional, but C is the

special Euclidean group SE(3) = R³ × SO(3), and a configuration requires 6 parameters: (x, y, z) for

translation, and Euler angles (α, β, γ).

If the robot is a fixed-base manipulator with N revolute joints (and no closed-loops), C is N-dimensional.
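A small sketch of the configuration-space examples above: a planner can treat a configuration simply as a tuple of parameters whose length equals the dimension of C. The specific numbers below are arbitrary illustrations.

```python
import math

point_2d   = (1.0, 2.5)                    # point robot in the plane: (x, y), dim 2
se2_pose   = (1.0, 2.5, math.pi / 4)       # rigid 2-D body: (x, y, theta), dim 3
se3_pose   = (0.3, 1.2, 0.8,               # rigid 3-D body: translation ...
              0.1, -0.2, 1.5)              # ... plus Euler angles (alpha, beta, gamma), dim 6
arm_config = (0.0, 0.7, -1.2, 0.4, 1.1)    # fixed-base arm with N = 5 revolute joints, dim N

for q in (point_2d, se2_pose, se3_pose, arm_config):
    print(len(q), "parameters:", q)
```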

Free Space

The set of configurations that avoids collision with obstacles is called the free space Cfree. The complement of

Cfree in C is called the obstacle or forbidden region.

Often, it is prohibitively difficult to explicitly compute the shape of Cfree. However, testing whether a given

configuration is in Cfree is efficient. First,forward kinematics determine the position of the robot's geometry,

and collision detection tests if the robot's geometry collides with the environment's geometry.
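The following is a minimal sketch of that Cfree membership test for a planar two-link arm, under assumed link lengths and a single circular obstacle: forward kinematics places the links, and a crude test checks sampled points on each link against the obstacle. It illustrates the idea rather than a production collision checker.

```python
import math

L1, L2 = 1.0, 0.8                       # assumed link lengths
OBSTACLES = [((1.2, 0.9), 0.3)]         # assumed obstacles: list of (center, radius)

def forward_kinematics(q):
    """q = (theta1, theta2); returns base, elbow and end-effector positions."""
    x1, y1 = L1 * math.cos(q[0]), L1 * math.sin(q[0])
    x2 = x1 + L2 * math.cos(q[0] + q[1])
    y2 = y1 + L2 * math.sin(q[0] + q[1])
    return (0.0, 0.0), (x1, y1), (x2, y2)

def segment_hits_obstacle(p, q_pt, samples=10):
    """Crude collision test: sample points along the segment p-q_pt."""
    for i in range(samples + 1):
        t = i / samples
        x = p[0] + t * (q_pt[0] - p[0])
        y = p[1] + t * (q_pt[1] - p[1])
        for (cx, cy), r in OBSTACLES:
            if math.hypot(x - cx, y - cy) <= r:
                return True
    return False

def is_in_cfree(q):
    """True if the configuration q places both links outside every obstacle."""
    base, elbow, tip = forward_kinematics(q)
    return not (segment_hits_obstacle(base, elbow) or segment_hits_obstacle(elbow, tip))
```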

Algorithms

Low-dimensional problems can be solved with grid-based algorithms that overlay a grid on top of configuration

space, or geometric algorithms that compute the shape and connectivity of Cfree.

Exact motion planning for high-dimensional systems under complex constraints is computationally intractable.

Potential-field algorithms are efficient, but fall prey to local minima (an exception is the harmonic potential

fields). Sampling-based algorithms avoid the problem of local minima, and solve many problems quite quickly.

They are unable to determine that no path exists, but they have a probability of failure that decreases to zero

as more time is spent.

Sampling-based algorithms are currently considered state-of-the-art for motion planning in high-dimensional

spaces, and have been applied to problems which have dozens or even hundreds of dimensions (robotic

manipulators, biological molecules, animated digital characters, and legged robots).

Grid-Based Search

Grid-based approaches overlay a grid on configuration space, and assume each configuration is identified with

a grid point. At each grid point, the robot is allowed to move to adjacent grid points as long as the line between

them is completely contained within Cfree (this is tested with collision detection). This discretizes the set of

actions, and search algorithms (like A*) are used to find a path from the start to the goal.
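As a compact illustration of the grid-based approach, the sketch below runs A* over a 2-D grid; the caller-supplied is_free(cell) plays the role of the collision test (and should also enforce the grid bounds). The 4-connected neighbourhood and unit step cost are simplifying assumptions.

```python
import heapq

def astar(start, goal, is_free):
    """start, goal: (row, col) cells; is_free: callable(cell) -> bool. Returns a path or None."""
    def h(c):                                   # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]           # entries are (f, g, cell)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell == goal:                        # reconstruct the path
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected moves
            nxt = (cell[0] + d[0], cell[1] + d[1])
            if not is_free(nxt):
                continue
            if g + 1 < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + 1
                came_from[nxt] = cell
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None                                 # no path at this resolution
```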


These approaches require setting a grid resolution. Search is faster with coarser grids, but the algorithm will fail

to find paths through narrow portions of Cfree. Furthermore, the number of points on the grid grows exponentially

in the configuration space dimension, which makes them inappropriate for high-dimensional problems.

Traditional grid-based approaches produce paths whose heading changes are constrained to multiples of a

given base angle, often resulting in suboptimal paths. Any-angle path planning approaches find shorter paths

by propagating information along grid edges (to search fast) without constraining their paths to grid edges (to

find short paths).

Grid-based approaches often need to search repeatedly, for example, when the knowledge of the robot about

the configuration space changes or the configuration space itself changes during path following. Incremental

heuristic search algorithms replan quickly by using experience with previous, similar path-planning problems to

speed up their search for the current one.

Interval-Based Search

These approaches are similar to grid-based search approaches, except that they generate a paving that entirely covers the configuration space instead of a grid.[1] The paving is decomposed into two subpavings X-, X+ made of boxes such that X- ⊂ Cfree ⊂ X+. Characterizing Cfree amounts to solving a set inversion problem. Interval

analysis could thus be used when Cfree cannot be described by linear inequalities in order to have a guaranteed

enclosure.

The robot is thus allowed to move freely in X-, and cannot go outside X+. For both subpavings, a neighbor graph

is built and paths can be found using algorithms such as Dijkstra or A*. When a path is feasible in X-, it is also

feasible in Cfree. When no path exists in X+ from one initial configuration to the goal, we have the guarantee that

no feasible path exists in Cfree. As for the grid-based approach, the interval approach is inappropriate for high-

dimensional problems, due to the fact that the number of boxes to be generated grows exponentially with

respect to the dimension of configuration space.
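A compact sketch of the subpaving idea, assuming a 2-DOF configuration space with a single circular C-obstacle: interval evaluation of the constraint (x - cx)^2 + (y - cy)^2 >= r^2 classifies each box as inside Cfree, inside the obstacle, or on the boundary, and boundary boxes are bisected. The obstacle parameters and recursion depth are illustrative assumptions.

```python
CX, CY, R = 0.5, 0.5, 0.2          # assumed circular C-obstacle

def square_range(lo, hi, c):
    """Interval image of (t - c)^2 for t in [lo, hi]."""
    a, b = (lo - c) ** 2, (hi - c) ** 2
    mn = 0.0 if lo <= c <= hi else min(a, b)
    return mn, max(a, b)

def classify(box):
    """box = (x_lo, x_hi, y_lo, y_hi) -> 'free', 'obstacle' or 'boundary'."""
    dx_min, dx_max = square_range(box[0], box[1], CX)
    dy_min, dy_max = square_range(box[2], box[3], CY)
    if dx_min + dy_min >= R ** 2:
        return "free"                # box lies inside X- (and X+)
    if dx_max + dy_max < R ** 2:
        return "obstacle"            # box is excluded from X+
    return "boundary"                # bisect further; kept only in X+

def pave(box, depth=5):
    """Recursively bisect 'boundary' boxes; return the inner paving X-."""
    kind = classify(box)
    if kind == "free":
        return [box]
    if kind == "obstacle" or depth == 0:
        return []
    x_mid, y_mid = (box[0] + box[1]) / 2, (box[2] + box[3]) / 2
    halves = [(box[0], x_mid, box[2], y_mid), (x_mid, box[1], box[2], y_mid),
              (box[0], x_mid, y_mid, box[3]), (x_mid, box[1], y_mid, box[3])]
    return [b for h in halves for b in pave(h, depth - 1)]

if __name__ == "__main__":
    inner_paving = pave((0.0, 1.0, 0.0, 1.0))   # X- for the unit square
    print(len(inner_paving), "boxes guaranteed to lie in Cfree")
```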

An illustration is provided by the three figures below, where a hook with two degrees of freedom has to move from the left to the right, avoiding two small horizontal segments.


[Figures: (1) Motion from the initial configuration (blue) to the final configuration of the hook, avoiding the two obstacles (red segments); the bottom-left corner of the hook has to stay on the horizontal line, which gives the hook two degrees of freedom. (2) Decomposition with boxes covering the configuration space: the subpaving X- is the union of all red boxes and the subpaving X+ is the union of the red and green boxes; the path corresponds to the motion represented above. (3) The same path obtained with many fewer boxes: the algorithm avoids bisecting boxes in parts of the configuration space that do not influence the final result.]

The decomposition with subpavings using interval analysis also makes it possible to characterize the topology

of Cfree, such as counting its number of connected components.[2]

Geometric Algorithms

Point robots among polygonal obstacles

Visibility graph

Cell decomposition

Translating objects among obstacles

Minkowski sum


Potential Fields

One approach is to treat the robot's configuration as a point in a potential field that combines attraction to the

goal, and repulsion from obstacles. The resulting trajectory is output as the path. This approach has the advantage that the trajectory is produced with little computation. However, such methods can become trapped in local minima of the potential field and fail to find a path.
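Below is a minimal sketch of this idea for a point robot in 2-D, combining an attractive gradient toward the goal with a repulsive gradient near circular obstacles and following it by gradient descent. The gains, influence distance, and step size are illustrative assumptions, and, as noted above, such a descent can stall in local minima.

```python
import math

def gradient(p, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.0):
    """Gradient of an attractive + repulsive potential at point p."""
    gx = k_att * (p[0] - goal[0])
    gy = k_att * (p[1] - goal[1])
    for (ox, oy), r in obstacles:                      # obstacles: list of (center, radius)
        dx, dy = p[0] - ox, p[1] - oy
        d = math.hypot(dx, dy) - r                     # distance to the obstacle boundary
        if 1e-6 < d < influence:                       # only inside the influence zone
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            gx -= mag * dx / (d + r)                   # push away from the obstacle
            gy -= mag * dy / (d + r)
    return gx, gy

def descend(start, goal, obstacles, step=0.05, iters=500, tol=0.05):
    """Gradient descent on the potential; returns the visited points as the path."""
    p = list(start)
    path = [tuple(p)]
    for _ in range(iters):
        gx, gy = gradient(p, goal, obstacles)
        p[0] -= step * gx
        p[1] -= step * gy
        path.append(tuple(p))
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < tol:
            break
    return path
```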

Sampling-Based Algorithms

Sampling-based algorithms represent the configuration space with a roadmap of sampled configurations. A

basic algorithm samples N configurations in C, and retains those in Cfree to use as milestones. A roadmap is

then constructed that connects two milestones P and Q if the line segment PQ is completely in Cfree. Again,

collision detection is used to test inclusion in Cfree. To find a path that connects S and G, they are added to the

roadmap. If a path in the roadmap links S and G, the planner succeeds, and returns that path. If not, the reason

is not definitive: either there is no path in Cfree, or the planner did not sample enough milestones.
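The following is a sketch of this basic roadmap scheme for a 2-D point robot, under the assumption that the caller supplies sample(), is_free(q), and segment_free(p, q) functions standing in for sampling and collision detection; the connection radius is an arbitrary example value.

```python
import math

def build_roadmap(n, sample, is_free, segment_free, radius=0.3):
    """Sample n configurations, keep the free ones as milestones, link nearby visible pairs."""
    milestones = [q for q in (sample() for _ in range(n)) if is_free(q)]
    edges = {i: [] for i in range(len(milestones))}
    for i, p in enumerate(milestones):
        for j in range(i + 1, len(milestones)):
            q = milestones[j]
            if math.dist(p, q) <= radius and segment_free(p, q):
                edges[i].append(j)
                edges[j].append(i)
    return milestones, edges

def query(start, goal, milestones, edges, segment_free, radius=0.3):
    """Connect start and goal to nearby milestones, then breadth-first search the roadmap."""
    nodes = milestones + [start, goal]
    adj = {i: list(v) for i, v in edges.items()}
    s, g = len(nodes) - 2, len(nodes) - 1
    adj[s], adj[g] = [], []
    for extra in (s, g):
        for i, m in enumerate(milestones):
            if math.dist(nodes[extra], m) <= radius and segment_free(nodes[extra], m):
                adj[extra].append(i)
                adj[i].append(extra)
    frontier, parent = [s], {s: None}
    while frontier:
        cur = frontier.pop(0)
        if cur == g:                      # reconstruct the path through the roadmap
            path, node = [], g
            while node is not None:
                path.append(nodes[node])
                node = parent[node]
            return path[::-1]
        for nxt in adj[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                frontier.append(nxt)
    return None        # either no path exists in Cfree, or too few milestones were sampled
```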

These algorithms work well for high-dimensional configuration spaces, because unlike combinatorial

algorithms, their running time is not (explicitly) exponentially dependent on the dimension of C. They are also

(generally) substantially easier to implement. They are probabilistically complete, meaning the probability that

they will produce a solution approaches 1 as more time is spent. However, they cannot determine if no solution

exists.

Given basic visibility conditions on Cfree, it has been proven that as the number of configurations N grows

higher, the probability that the above algorithm finds a solution approaches 1 exponentially.[3] Visibility is not

explicitly dependent on the dimension of C; it is possible to have a high-dimensional space with "good" visibility

or a low dimensional space with "poor" visibility. The experimental success of sample-based methods suggests

that most commonly seen spaces have good visibility.

There are many variants of this basic scheme:

It is typically much faster to only test segments between nearby pairs of milestones, rather than all pairs.

Nonuniform sampling distributions attempt to place more milestones in areas that improve the connectivity

of the roadmap.

Quasirandom samples typically produce a better covering of configuration space

than pseudorandom ones, though some recent work argues that the effect of the source of randomness is

minimal compared to the effect of the sampling distribution.

It is possible to substantially reduce the number of milestones needed to solve a given problem by allowing

curved lines of sight (for example, by crawling on the obstacles that block the way between two milestones).[4]


If only one or a few planning queries are needed, it is not always necessary to construct a roadmap of the

entire space. Tree-growing variants are typically faster for this case (single-query planning). Roadmaps

are still useful if many queries are to be made on the same space (multi-query planning).

Completeness and Performance

A motion planner is said to be complete if the planner in finite time either produces a solution or correctly

reports that there is none. Most complete algorithms are geometry-based. The performance of a complete

planner is assessed by its computational complexity.

Resolution completeness is the property that the planner is guaranteed to find a path if the resolution of an

underlying grid is fine enough. Most resolution complete planners are grid-based or interval-based. The

computational complexity of resolution complete planners is dependent on the number of points in the

underlying grid, which is O(1/h^d), where h is the resolution (the length of one side of a grid cell) and d is the

configuration space dimension.
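As a quick, illustrative calculation of that O(1/h^d) estimate with made-up resolutions, the snippet below shows how the grid-point count explodes with the dimension d.

```python
# Illustrative only: the number of grid points at resolution h in d dimensions.
for d in (2, 3, 6):
    for h in (0.1, 0.01):
        print(f"d={d}, h={h}: about {round((1 / h) ** d):,} grid points")
```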

Probabilistic completeness is the property that as more “work” is performed, the probability that the planner fails

to find a path, if one exists, asymptotically approaches zero. Several sample-based methods are

probabilistically complete. The performance of a probabilistically complete planner is measured by the rate of

convergence.

Incomplete planners do not always produce a feasible path when one exists. Sometimes incomplete planners

do work well in practice.

Problem Variants

Many algorithms have been developed to handle variants of this basic problem.

Differential Constraints

Holonomic

Manipulator arms (with dynamics)

Nonholonomic

Cars

Unicycles

Planes

Acceleration bounded systems

Moving obstacles (time cannot go backward)

Bevel-tip steerable needle


Differential Drive Robots

Optimality Constraints

Hybrid Systems

Hybrid systems are those that mix discrete and continuous behavior. Examples of such systems are:

Robotic manipulation

Mechanical assembly

Legged robot locomotion

Reconfigurable robots

Uncertainty

Motion uncertainty

Missing information

Active sensing

Sensorless planning

Applications

Robot navigation

Automation

The driverless car

Robotic surgery

Digital character animation

Protein folding

Safety and accessibility in computer-aided architectural design

Application-oriented HRI research

In addition to general HRI research, researchers are currently exploring application areas for human-robot

interaction systems. Application-oriented research is used to help bring current robotics technologies to

bear on problems that exist in today's society. While human-robot interaction is still a rather young

area of interest, there is active development and research in many areas.

Search and rescue

First responders face great risks in search and rescue (SAR) settings, which typically involve environments that are unsafe for a human to traverse. In addition, technology offers tools for observation that can greatly speed up and improve the accuracy of human perception. Robots can be used to address these concerns. Research in this area includes efforts to address robot sensing, mobility, navigation, planning, integration, and tele-operated control.

SAR robots have already been deployed in environments such as the collapse of the World Trade Center.[2]

Other application areas include:

Entertainment

Education

Field robotics

Home and companion robotics

Hospitality

Rehabilitation and Elder Care

Robot Assisted Therapy (RAT)

Advantage

"Advances in Human-Robot Interaction" provides a unique collection of recent research in human-robot interaction. It covers the basic important research areas ranging from multi-modal interfaces, interpretation, interaction, learning, or motion coordination to topics such as physical interaction, systems, and architectures. The book addresses key issues of human-robot interaction concerned with perception, modelling, control, planning and cognition, covering a wide spectrum of applications. This includes interaction and communication with robots in manufacturing environments and the collaboration and co-existence with assistive robots in domestic environments. Among the presented examples are a robotic bartender, a new programming paradigm for a cleaning robot, or an approach to interactive teaching of a robot assistant in manufacturing environment. This carefully edited book reports on contributions from leading German academic institutions and industrial companies brought together within MORPHA, a 4 year project on interaction and communication between humans and anthropomorphic robot assistants.

Conclusion

The aim of the project was to develop a user-friendly graphical user interface using effective human-robot interaction through iterative design. The interface had to be intuitive and not subject the user to sensory overload. The human-robot interaction had to take a minimalistic approach and display only the selected video streams and what is core to operating the robot. The user interface had to bring in other data only if requested or if required, e.g. on motor overload. Intelligence was needed in the system, that is, to think for the user in order to simplify the operation of the robot.

In order to fulfill the aim of the project, a user-centered design approach was adopted that involved users from the first stages of the design until the final design was obtained. The users performed different tasks on the system and, based on the feedback from those tasks, improvements to the system were made. The development approach adopted for this system was iterative and incremental.

A usable and user-centered system was successfully implemented, as shown by the final user evaluations. The questionnaire findings showed 85.7% for system usability and 84.1% for effective design principles. These results show that involving users in the design stages increases the probability of obtaining a usable system, since the users have contributed to the design and the system output is what they expected.

The final system generally received very positive feedback from the users, who compared it to the available robot user interfaces. Another reason for the very good system acceptance was that all of the functionalities the users had requested, and more, were successfully implemented.