THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY
Int J Med Robotics Comput Assist Surg (2011). REVIEW ARTICLE. Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/rcs.408
Evolution of autonomous and semi-autonomousrobotic surgical systems: a review of the literature
G. P. Moustris1*
S. C. Hiridis2
K. M. Deliparaschos1
K. M. Konstantinidis2
1Department of Signals, Control and
Robotics, School of Electrical and
Computer Engineering, National
Technical University of Athens, Greece
2General, Laparoendoscopic and
Robotic Surgical Clinic, Athens
Medical Centre, Greece
*Correspondence to: G. P. Moustris,
Department of Signals, Control and Robotics, School of Electrical and
Computer Engineering, National Technical University of Athens,
15773 Zographou Campus, Athens, Greece.
E-mail: [email protected]
Accepted: 12 May 2011
Abstract
Background Autonomous control of surgical robotic platforms may offer
enhancements such as higher precision, intelligent manoeuvres and tissue-
damage avoidance. Autonomous robotic systems in surgery remain largely
experimental; however, some have already reached clinical application.
Methods A literature review pertaining to commercial medical systems
which incorporate autonomous and semi-autonomous features, as well as
experimental work involving automation of various surgical procedures, is
presented. Results are drawn from major databases, excluding papers not
experimentally implemented on real robots.
Results Our search yielded several experimental and clinical applications,
describing progress in autonomous surgical manoeuvres, ultrasound
guidance, optical coherence tomography guidance, cochlear implantation,
motion compensation, orthopaedic, neurological and radiosurgery robots.
Conclusion Autonomous and semi-autonomous systems are beginning to
emerge in various interventions, automating important steps of the operation.
These systems are expected to become a standard modality and revolutionize
the face of surgery. Copyright 2011 John Wiley & Sons, Ltd.
Keywords minimally invasive surgery (MIS); robotic surgery; autonomous
robots
Introduction
The future of robotic surgical systems depends upon improvements in the
present technology and development of new radically different enhancements
(1). Such innovations, some of them still at the experimental stage, include
miniaturization of robotic arms, proprioception and haptic feedback, new
methods for tissue approximation and haemostasis, flexible shafts of
robotic instruments, implementation of the natural orifice transluminal
endoscopic surgery (NOTES) concept, integration of navigation systems
through augmented-reality applications and, finally, autonomous robotic
actuation.
Definitions and classifications
The classification of robotic systems depends on the actual point of view one
takes. There are multiple classifications of robotic systems applied in medicine,
with some being more widely adopted than others. A first high-level classification was proposed
Copyright 2011 John Wiley & Sons, Ltd.
by Taylor and Stoianovici (2), in which they divided
surgical robots into two broad categories, surgical
computer-aided design/manufacturing (CAD/CAM) systems
and surgical assistants. Surgical CAD/CAM systems are
designed to assist in planning and intraoperative naviga-
tion through reconstruction of preoperative images and
formation of three-dimensional (3D) models, registration of this data to the patient in the operating room, and
use of robots and image overlay displays to assist in the
accurate execution of the planned interventions.
Surgical assistant systems are further divided into two
classes: surgical extenders, which are operated directly by
the surgeon and essentially extend human capabilities in
carrying out a variety of surgical tasks, with emphasis on
intraoperative decision support and skill enhancement;
and auxiliary surgical supports, which work side-by-side
with the surgeon and provide support functions, such as
holding an endoscope.
Complementary to the previous classification, Wolf and Shoham (3) summarize a division according to
autonomous function. They present four categories for
medical robots: passive robots, semiactive robots, active
robots and remote manipulators. Loosely correlating the
two classifications, one could say that passive, semiactive
and active robots fall under the surgical CAD/CAM and
auxiliary surgical support categories, while the remote
manipulators are identified with the surgical extender class.
Passive robots provide support actions in surgery and do
not perform any autonomous or active actions. Typical
examples include the Acrobot (4), the Arthrobot (5) and
the MAKO system (6). Semiactive robots are closely
related to the surgical assistant class and perform similar operations, viz. support tasks such as holding a tool or
automated stereotaxy, e.g. the NeuroMate stereotactic
robot. In contrast, active robots exhibit autonomous
behaviour and operate without direct interaction with
the surgeon. Prominent examples include the CyberKnife
(Accuray Inc., Sunnyvale, CA, USA) and RoboDoc (Curexo
Technology Corp., Fremont, CA, USA) (7). Multiple
publications have assessed the latter's efficacy, and it is discussed
further below (8,9). Probot also represents one of the
first applications of an autonomous robot in the clinical
setting, initially used in 1991 for a transurethral resection
of the prostate (10). For the first time in history, a robotic device was used for removal of human tissue.
Remote manipulators, or surgical extenders, are
probably the most common surgical robots in use today.
One of the most successful commercial robots in this
class is the da Vinci robot (Intuitive Surgical, Sunnyvale,
CA, USA), which was originally implemented for heart
surgery (11). In this master-slave telemanipulator system
the surgeon sits at a master console next to the patient,
who is operated on by the slave arms (Figure 1). The
surgeon views the internal organs through an endoscope
and, by moving the master manipulator, can adjust the
position of the slave robot. The surgeon compensates for
any soft-tissue motion, thus closing the servo-control loop by visual feedback. The high-definition 3D images and
micromanipulation ability of the robot make it ideal for
Figure 1. The da Vinci SI telesurgical robot. Reproduced by permission of Intuitive Surgical Inc
Figure 2. A view of the MiroSurge telesurgical system. Two
MIRO surgical manipulators are clearly visible. Reproduced by
permission of the German Aerospace Centre
transpubic radical prostatectomy, with reduced risk of
incontinence and impotence (12).
A more recent telesurgery robot is the MiroSurge system
(13) (Figure 2), developed by the German Aerospace
Centre (DLR). The system consists of a master-slave platform, with the slave platform involving three robotic
manipulators (MIRO surgical robots; see Figure 3), two
carrying surgical tools and one carrying an endoscope.
Remote manipulators belong to a broad field of robotics
called telerobotics. Niemeyer et al. (14) present a more
engineering-orientated classification of telerobots with
respect to control architecture and user interaction.
However, this classification holds true for surgical
telemanipulators as well. Depending on the degree of
user interaction, three categories are defined: direct or
manual control, shared control and supervisory control
robotic systems. In direct control the surgeon operates
the slave robot directly through the master console. This
involves no autonomy on the slave end and the robot
mirrors the surgeon's movements (although some filtering
Figure 4. A depiction of the ALFUS diagram, used in describing
the level of autonomy of a robotic system
in the ALFUS framework (19), a collaborative effort
involving several US organizations which formed the ALFUS Ad Hoc Work Group to address the issue
of autonomy in robotic systems. The framework also
specifies metrics in order to quantify each axis of the
diagram.
In the field of robotics, the notion of autonomy is
heavily dependent on the principle of feedback. As an
example, consider a human and a robot performing a
simple mundane task. Even though it is difficult to imagine
a human completely cut off from his environment, this
is easy when it comes to robots. Sensors, e.g. encoders,
cameras, etc., provide necessary information for the actual
state of the system. This information synthesizes the feedback signal, which is used by the controller in order
to exhibit autonomous behaviour. The environment is
perceived through sensor information and by processing
this information the robot creates a structured image
of the environment (external state) and itself (internal
state). This constitutes the sense phase. Both perceptions
are essential for carrying out a task successfully. Although
a human can easily perceive and process the environment,
the robot must formalize it in a very accurate way
in order to understand it. Having reconstructed these
images, the problem is then transferred to the planning
task. Planning is the process of computing the future internal states the system must acquire, e.g. move a
joint along a path, in order to complete the task.
Each action can be characterized by preconditions and
postconditions. Preconditions indicate what is required
to perform an action, while postconditions describe
possible situations after the action. The planning process
involves parameters that express quantities in the actual
environment, e.g. the position and torque of the joint
along a path, and as such both internal and external states
(self and environment) must be previously known through
sensing. Having computed the plan, the problem
shifts to the acting phase. Acting is the actual movement
of the system in the environment. This can be achieved through actuators (electrical motors, pneumatic motors,
etc). Note that the actuators impose their own limits in
the actual movement; hence, these limitations must be
taken into account during the planning phase.
The above steps constitute three important opera-
tions in robot control: sense, plan and act. Robot control
architectures use these three phases in various ways in
order to achieve the desired behaviour. The older, but
largely abandoned, architecture places these phases in
a sequential pattern, i.e. the sense-plan-act cycle. This
architecture is also called deliberative control (20). At
the other end, there is the reactive control paradigm
that does away with planning altogether. Deliberative
control is slow and depends heavily on internal mod-
els and accurate information, while reactive control is
fast, computationally light but cannot exhibit high-level
behaviour. Hybrid architectures also exist, leveraging the
advantages of both paradigms. It is doubtful, however,
that planning can be avoided in surgical robots, since
surgical skills and manoeuvres are very complex in nature.
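The sequential deliberative cycle can be sketched in a few lines of Python. The toy one-dimensional "robot" below, and every name in it, is illustrative only, not drawn from any surgical platform:

```python
# Minimal sketch of a deliberative sense-plan-act cycle (all names hypothetical).
# The "robot" is a point on a line that must reach a goal position.

def sense(state):
    """Return an observation of the internal state (here, exact position)."""
    return state["position"]

def plan(observation, goal):
    """Compute the next target: a bounded step towards the goal.
    The actuator limit is taken into account during planning."""
    error = goal - observation
    step = max(-1.0, min(1.0, error))
    return observation + step

def act(state, target):
    """Drive the actuator towards the planned target."""
    state["position"] = target

def run_cycle(state, goal, max_iterations=20):
    """Repeat sense-plan-act until the goal is reached."""
    for _ in range(max_iterations):
        obs = sense(state)
        if abs(goal - obs) < 1e-9:
            break
        act(state, plan(obs, goal))
    return state["position"]

robot = {"position": 0.0}
final = run_cycle(robot, goal=3.5)
```

A reactive architecture would collapse sense and act into one rule and drop the plan step entirely; hybrid architectures keep a slow deliberative loop like this one above a fast reactive layer.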
The surgical field is a special environment for a robot and should be managed according to the previous
framework. The laparoscopic environment consists mainly
of soft tissue, bony tissue, air and fluid. It is obviously
a dynamic environment, with constant alteration of the
shape of its constituents during the operation. Perception
of this environment must result in a digitized image of the
operating field. Preoperative imaging examinations are
not of great help, because the deformation of tissues (with
insufflation of CO2, respiratory movements, instrument
manipulations) may obscure correct registration to real
anatomy. Also, planning of the surgical manoeuvres is
a very complex task that the system must take into
special account. The control algorithms must possess the knowledge of appropriate techniques for each phase
of an operation. These techniques comprise a set of
complex movements that can be learned from an
expert, i.e. a surgeon using a manipulator which will
record his movements, or be mathematically planned
and described in a suitable manner. Having a database
of these movements the robot, by selectively filtering
the appropriate ones, should robustly fit them to an
actual operating scenario, under the directions of the
surgeon when necessary (surgeon-supervised robotic
surgery). The system could also learn from its own
operations and acquire new field knowledge that will be incorporated into the existing corpus. In complex
tasks, many hierarchical levels of planning can coexist.
Depending on the level of autonomy required, there can
be several planning algorithms operating in parallel. In the
case of laparoscopic surgery, autonomy should probably
be introduced in the context of task execution, i.e. as an
intelligent tool obeying the instructions of the supervising
surgeon [an idea also mentioned by Baena and Davies
(10)]. In such a setting, the surgeon should instruct the
robot what to do, e.g. grab, suture, etc., and the robot
will have to figure out how to do it. Decision making,
i.e. what to do, is probably best left to the surgeon,
since humans, given appropriate training and experience,
are better at making decisions in unstructured or chaotic
situations than robots.
Planning and skill modelling
With a supervisor-controlled surgical robot, the surgeon
is able to instruct the robot to perform certain tasks
under his supervision, as happens with the training of
young surgeons early in their internships. The system is
supposed to keep a database with different sets of possible surgical manoeuvres (drawn from recording actual human
movements) encoded in a suitable manner. This is known
as surgical skill modelling. Work towards this goal has
already been performed by several researchers. Rosen
et al. (21) have used a discrete Markov model in order
to decompose minimally invasive surgery (MIS) tasks and
have deployed it to tying an intracorporeal knot on an
animal model. Kragic et al. (22) have deployed a hidden
Markov model (HMM), using primitive gestemes, and
have modelled two simple surgical tasks in vitreo-retinal
eye surgery. More abstractly, Kang and Wen (23) have
mathematically analysed knot tying and have developed the conditions for knot placement and tension control. An
interesting approach to skill modelling is the Language of
Surgery project at Johns Hopkins University (24,25). The
main idea behind it is that surgical skill can be learned,
much like a language. Thus, one can identify elementary
motions, juxtaposed to phonemes, and by combining
them new words can be constructed. Again, using these
words, one can produce surgical phrases, and so on.
The surgical procedure is decomposed in a hierarchical
manner (Figure 5), consisting of a sequence of tasks (e.g.
suturing) (26). Respectively, each task is decomposed into
a sequence of more elementary motions called surgemes,
which in turn comprise a sequence of low-level motion
primitives called dexemes.
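As an illustration, the hierarchy can be represented as nested data, with each level expanding into the motions below it. The gesture names here are invented examples, not the project's actual vocabulary:

```python
# Illustrative sketch of the task -> surgeme -> dexeme hierarchy as nested
# data.  All names are hypothetical examples for demonstration only.
procedure = {
    "name": "suturing_procedure",
    "tasks": [
        {
            "name": "suturing",
            "surgemes": [
                {"name": "reach_for_needle",
                 "dexemes": ["open_gripper", "move_to_needle"]},
                {"name": "insert_needle",
                 "dexemes": ["orient_tip", "push_through_tissue"]},
            ],
        },
    ],
}

def flatten_dexemes(proc):
    """Walk the hierarchy top-down and return the low-level motion
    primitives in execution order."""
    return [dexeme
            for task in proc["tasks"]
            for surgeme in task["surgemes"]
            for dexeme in surgeme["dexemes"]]

sequence = flatten_dexemes(procedure)
```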
Under this framework, Lin et al. (24) have used linear
discriminant analysis along with a Bayesian classifier
in order to model a suturing task. They have created
a motion vocabulary consisting of eight elementary
suturing gestures (reach for needle, position needle, insert
needle/push needle through tissue, etc.) by collecting
motion data from the da Vinci system under the command
of an expert surgeon. The system was able to classify the
surgical gestures with 90% accuracy. Reiley et al. (27)
have extended the previous work using more advanced
Figure 5. Hierarchical decomposition of a surgical task according
to the Language of Surgery project. Each level is decomposed
into simpler motion gestures, ranging from the entire procedure
(high-level) to elementary surgical motion primitives called
dexemes (low-level)
statistical modelling, by replacing the Bayes classifier with
a three-state HMM, and increased the number of surgemes
to 11; this system performed with an accuracy of 92%.
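A heavily simplified stand-in for this kind of gesture classification is sketched below: a Gaussian classifier over a single synthetic motion feature. Lin et al. used linear discriminant analysis on much richer da Vinci kinematic data; the feature, labels and values here are all invented for illustration:

```python
# Toy Gaussian classifier for surgical gestures over one synthetic feature
# (mean hand speed).  A simplified stand-in for the LDA + Bayes pipeline
# described in the text; the training data below is fabricated.
import math

def fit(samples):
    """samples: {label: [feature values]} -> {label: (mean, std)}."""
    model = {}
    for label, xs in samples.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        model[label] = (mean, math.sqrt(var) or 1e-6)  # guard against std == 0
    return model

def log_likelihood(x, mean, std):
    """Log of the Gaussian density, dropping the constant term."""
    return -math.log(std) - 0.5 * ((x - mean) / std) ** 2

def classify(model, x):
    """Pick the class whose Gaussian explains the feature best."""
    return max(model, key=lambda label: log_likelihood(x, *model[label]))

training = {
    "reach_for_needle": [0.9, 1.1, 1.0, 0.8],    # fast transport motion
    "position_needle":  [0.2, 0.3, 0.25, 0.15],  # slow fine motion
}
model = fit(training)
label = classify(model, 0.95)
```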
Even though these results are promising, more work is
needed in order to model enough surgical tasks. These
tasks can then be combined in the planning phase so as
to produce a meaningful outcome in autonomous robotic surgery.
Planning is the process of fitting the specified
manoeuvre to the actual operating condition in the most
appropriate way. The planning algorithm should also
compensate for the change of the environment, e.g.
soft tissue deformations, in the immediate future. The
output of this algorithm is primitives of motion, much like
the surgemes described above. However, these primitives
must be translated to a more accurate description of
robot movements. This task should be performed by a
low-level planner, which will receive the output of the
high-level planning algorithm. The primitives of motion will then be translated to actual trajectories that the robot
must follow in order to complete the specified task. This
algorithm must also take into account various constraints,
e.g. distance from the surgical field, quickest route, etc.
Due to the dynamic nature of the environment, the high-
level planning might prove to be impossible in some
instances, e.g. respiratory motion may cause unmodelled
tissue deformation, or the surgeon could move organs that
obstruct his/her line of sight. In such a case, the high-
level plan can be recomputed to produce new feasible
primitives of motion that will then be transferred to
the low-level planner. This loop must include a constant
feasibility check while the robot moves along the
executed trajectory.
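The replan-on-infeasibility loop described above might be sketched as follows, in a toy one-dimensional world with hypothetical names; real planners operate on far richer state:

```python
# Sketch of a replanning loop: a high-level planner emits waypoints, a
# feasibility check runs at every step, and the plan is recomputed when
# the environment invalidates it.  All names and the 1-D world are toys.

def high_level_plan(start, goal, blocked):
    """Straight-line integer waypoints from start to goal, skipping blocked cells."""
    step = 1 if goal >= start else -1
    return [p for p in range(start + step, goal + step, step) if p not in blocked]

def feasible(waypoint, blocked):
    return waypoint not in blocked

def execute(start, goal, blocked_over_time):
    """blocked_over_time: per-step sets of blocked cells (a changing environment)."""
    position, log = start, []
    plan = high_level_plan(position, goal, blocked_over_time[0])
    for blocked in blocked_over_time:
        if not plan:
            break
        if not feasible(plan[0], blocked):                    # constant feasibility check
            plan = high_level_plan(position, goal, blocked)   # replan from here
        if plan:
            position = plan.pop(0)                            # low-level step
            log.append(position)
    return position, log
```

In the static case the robot walks the original plan; when a cell becomes blocked mid-execution, the high-level plan is recomputed and the low-level steps continue from the new plan.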
Based on the results of the Language of Surgery
project, Reiley et al. developed a prototype system that
generates surgical motions based on expert demonstration
(28). This system produces surgemes for three common
surgical tasks (suturing, knot tying and needle passing)
and combines them using dynamic time warping and
Gaussian mixture models. The actual motion paths are
produced using Gaussian mixture regression. The results
are validated against HMM models of surgemes (26), and have been classified as those belonging to an expert
surgeon. Although this work is a significant first step
towards automating surgical gestures, the system is open-
loop without any experimental validation on a real robot.
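Dynamic time warping, mentioned above as the tool for aligning demonstrations recorded at different speeds, reduces to a short dynamic program. This is the textbook algorithm, not the authors' implementation:

```python
# Pure-Python sketch of dynamic time warping between two 1-D sequences.
# DTW aligns sequences that trace the same motion at different speeds.

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same gesture performed slowly and quickly aligns with zero cost.
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
```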
Intelligent Control
Intelligent control refers to the use of various techniques
and algorithms that solve problems using artificial
intelligence applications. Known intelligent algorithms,
usually referred to as intelligent control systems or expert systems, include neural networks, fuzzy logic,
genetic algorithms and particle swarm optimization (PSO)
techniques (29,30), to name a few. The above methods
are often used to provide a more efficient solution (i.e.
convergence to the problem solution). Neural networks
and fuzzy logic are more suitable for real-time control
problems, whereas genetic algorithms and PSO are
classified as heuristic methods, better suited for offline
preprocessing. These intelligent algorithms can cope with imprecise data (fuzzy logic), highly non-linear models
(neural networks) and large search space heuristics
(genetic algorithms, PSO). A useful feature of these
intelligent techniques is that of adaptive learning, i.e.
the ability to learn from previous experience. Thus, they
can incorporate field knowledge which is acquired during
actual surgical operations and improve their performance
over time (31-33).
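As an illustration of one of the heuristics named above, here is a minimal particle swarm optimizer in pure Python applied to a toy one-dimensional problem; the coefficients are common textbook choices, not tuned for any surgical application:

```python
# Minimal particle swarm optimization sketch (toy problem: minimize
# f(x) = (x - 2)^2 in one dimension).  Parameters are textbook defaults.
import random

def pso(f, n_particles=20, iterations=60, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)                      # fixed seed: reproducible run
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                                  # each particle's personal best
    gbest = min(xs, key=f)                         # swarm's global best
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]                       # inertia
                     + 1.5 * r1 * (pbest[i] - xs[i])   # cognitive pull
                     + 1.5 * r2 * (gbest - xs[i]))     # social pull
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 2.0) ** 2)
```

In an offline preprocessing role, as the text suggests, the cost function would encode a surgical planning objective rather than this toy quadratic.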
Methods
We present results from a literature review pertaining
to commercial medical systems, which incorporate
autonomous and semi-autonomous features, as well
as experimental work involving automation of various
surgical procedures. The results are drawn from major
bibliographic databases (IEEE, ScienceDirect, PubMed,
SAGE, Springer, Wiley and Google Scholar). More focus
has been put on newer published work (mainly in the
last decade). A selection process was also used, excluding
papers whose contribution was not experimentally
implemented on real robots, except in cases where the
results were deemed significant enough for inclusion.
Results
Experimental work
There have been many efforts to develop surgical robots
capable of performing some tasks autonomously. Much
of this research involves visual servoing, which combines
visual tracking and control theory, although different
modalities are also widely in use, e.g. ultrasound imaging
has been investigated by several researchers, due to its low cost and real-time feedback. The target operations
vary from laparoscopic surgery to cochlear implantation
to heart surgery, and so on. Depending on the type of
intervention, automation is inserted into various steps
of the procedure. Analysis of experimental research is
presented in the following sections.
Autonomous suturing
Knot tying is a common procedure during surgery.
Automating this task would greatly reduce surgeon
fatigue and total surgery time. Building a good knot-tying controller is difficult because the spatial orientations and
manoeuvring of multiple instruments must be precisely
controlled. The first to investigate robotic knot tying in
MIS were Kang and Wen. They have developed a custom
robotic system called Endobot (34,35), comprising two
manipulators which can be controlled in three modes:
manually, in shared control mode and autonomously. In
manual mode the surgeon operates the manipulators
directly (not in a telesurgical sense), while the controller offers gravity compensation. In shared control, some axes
are controlled by the robot while leaving the remaining
axes to the surgeon. Of course the most interesting
mode is the autonomous mode. The robot operates in
a supervisory fashion, performing tasks on its own. Kang
and Wen describe the process of tying a square knot,
having the robot follow a reference trajectory using a
simple proportional-integral-derivative (PID) controller.
Although they provide positive experiments, it seems that
the robot operates using a hard-wired policy, meaning
that it always repeats the same motion and excludes any
possibility of performing the same task with unfamiliar instrument positions.
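Reference-trajectory tracking with a PID controller, as in Endobot's autonomous mode, can be illustrated with a discrete-time sketch; the plant model, gains and step reference below are invented for demonstration:

```python
# Toy sketch of PID trajectory tracking.  A unit "plant" is driven in
# velocity by a discrete PID law; all gains and the plant are hypothetical.

def pid_track(reference, kp=1.2, ki=0.3, kd=0.1, dt=0.1):
    """Track `reference` (a list of target positions, one per time step)."""
    position, integral, prev_error = 0.0, 0.0, 0.0
    trace = []
    for target in reference:
        error = target - position
        integral += error * dt                    # accumulated error
        derivative = (error - prev_error) / dt    # error rate
        command = kp * error + ki * integral + kd * derivative
        position += command * dt                  # simple first-order plant
        prev_error = error
        trace.append(position)
    return trace

ref = [1.0] * 200          # step reference held for 20 s of simulated time
trace = pid_track(ref)
```

The integral term removes the steady-state offset at the cost of a small overshoot; a hard-wired policy, by contrast, would replay a fixed command sequence with no error feedback at all.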
In a similar fashion, Bauernschmitt et al. have also
developed a system for heart surgery, able to reproduce
prerecorded knot-tying manoeuvres (36). The system
consists of two KUKA KR 6/2 robotic arms, equipped
with two surgical instruments from Intuitive Surgical
Inc. A third arm provides 3D vision through a suitable
endoscopic camera. The surgeon controls the robot at the
master-end via two PHANToM haptic devices (Sensable
Inc., MA, USA). The surgical instruments have been
adapted with force/strain gauges in order to capture
forces at the grip. Knot-tying experiments provided
positive results, reproducing the manoeuvres even at twice the speed. However, the blind re-execution of
prerecorded surgical gestures does not leave room for any
practical implementation in a clinical situation.
A more robust control would be provided if the
user could teach a series of correct examples to the
controller. An interesting study on automating suture
knot winding was published by Mayer et al. (37), using
the EndoPAR robot, involving a class of recurrent artificial
neural networks called long short-term memory (LSTM)
(38). LSTM can perform tasks such as knot tying
where the previous states (instrument positions) need
to be remembered for long periods of time in order to select future actions appropriately. The EndoPAR robot
comprises four Mitsubishi RV-6SL robotic arms that are
mounted upside-down on an aluminium gantry. Three of
the arms hold laparoscopic grippers, attached with force
sensors, while the fourth holds a laparoscopic camera.
The arms are controlled through PHANToM devices.
The authors considered a knot-tying task, breaking it
into six consecutive steps; note that all three robotic
arms were used. The authors used LSTMs for their
experiments and trained them to learn to control the
movement of a surgical manipulator to successfully tie
a knot. The training algorithm used was the Evolino
supervisory evolutionary training framework (39). The
Evolino-trained LSTM networks in these experiments were
able to learn from surgeons and outperform them on
the real robot. The current approach only deals with
the winding portion of the knot-tying task. Therefore,
its contribution is limited by the efficiency of the other
subtasks required to complete the full knot. Initial results
using this framework are promising; the networks were
able to perform the task on the real robot without access
to teaching examples. These results constitute the first successful application of supervised learning to MIS knot
tying.
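To make the LSTM's gated memory concrete, here is a single one-unit LSTM cell step in pure Python with fixed toy weights; the actual networks were trained with Evolino and are far larger:

```python
# One step of a single-unit LSTM cell, written out in pure Python with
# fixed toy weights (not trained).  The gates decide what the cell
# remembers between time steps, e.g. between instrument positions.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=0.5, u=0.5, b=0.0):
    """All four gates share the same toy weights for brevity."""
    f = sigmoid(w * x + u * h_prev + b)    # forget gate
    i = sigmoid(w * x + u * h_prev + b)    # input gate
    o = sigmoid(w * x + u * h_prev + b)    # output gate
    g = math.tanh(w * x + u * h_prev + b)  # candidate memory
    c = f * c_prev + i * g                 # cell state: the long-term memory
    h = o * math.tanh(c)                   # hidden state / output
    return h, c

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0, 0.0]:             # a single early "event"...
    h, c = lstm_step(x, h, c)
# ...still influences the cell state several zero-input steps later.
```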
Mayer et al. have also recently presented a different
approach on automated knot tying, developing a system
able to learn to tie knots after just one demonstration
from the surgeon (40). It calculates motion primitives
for the skill, along with a fluid dynamics planning
algorithm that generalizes the demonstrated knot-tying
motion. However, this system acts more as a proof of
concept, since its success rate in knot-tying experiments is
approximately 50%. Learning by demonstration has also
been investigated by van den Berg et al. (41), using two Berkeley surgical robots (Figure 6).
The authors used a Kalman smoother to infer a
reference trajectory for knot tying from multiple human
demonstrations. A linear quadratic regulator (LQR)
then guided the robot towards the reference, combined
with an iterative learning algorithm in order to improve
the quality of convergence. In the experiments a thread
was passed through two rings, while a weight was tied
to one end, keeping the thread in tension. The goal
was to tie a knot around one ring. The system was
able to perform the knot with increasing speed, up to
seven times faster than the original demonstration (see
Figure 7 for a graphical description of the knot-tying
motion decomposition).
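Both ingredients can be caricatured in one dimension: the reference is taken as the pointwise mean of the demonstrations (a crude stand-in for the Kalman smoother) and a scalar finite-horizon LQR supplies the tracking gains. The system matrices and demonstration data are toy values, not from the paper:

```python
# 1-D caricature of "learn a reference from demonstrations, track it with
# LQR".  The Kalman smoother is replaced by a pointwise mean; the plant
# is the scalar system x' = a*x + b*u with invented a, b, q, r.

def mean_reference(demonstrations):
    """Pointwise average of equally long demonstration trajectories."""
    return [sum(vals) / len(vals) for vals in zip(*demonstrations)]

def lqr_gains(horizon, a=1.0, b=1.0, q=1.0, r=0.1):
    """Backward Riccati recursion; gains come out reversed because LQR
    is solved backwards in time."""
    p, gains = q, []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal feedback gain
        p = q + a * p * a - a * p * b * k   # Riccati update
        gains.append(k)
    gains.reverse()
    return gains

def track(reference, gains, x0=0.0):
    """Apply u = -k * (x - ref) and record the resulting state path."""
    x, trace = x0, []
    for ref, k in zip(reference, gains):
        x = x + (-k * (x - ref))            # a = b = 1 plant
        trace.append(x)
    return trace

demos = [[0.0, 1.1, 2.0, 2.9], [0.0, 0.9, 2.0, 3.1]]
ref = mean_reference(demos)
trace = track(ref, lqr_gains(len(ref)))
```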
In all three approaches above, only the knot-tying
task was considered, requiring manual help in several
preparatory stages, e.g. grasping the needle. Tissue
piercing in suturing has also been investigated using the
EndoPAR robot (42). In this setting, one robotic arm holds
a circular needle, while a second one employs a stereo
camera. The surgeon uses a laser pointer to pinpoint
the place of entry and the robot autonomously performs
the stitch. The system uses visual servoing to position
the needle on the right spot. Experiments on phantoms
Figure 6. The Berkeley surgical robot, used in automatic
knot-tying experiments by van den Berg et al. (41). Image
© 2010 IEEE
Figure 7. Knot-tying decomposition according to van den Berg
et al. (41). The gesture consists of three stages: in the first (1),
robot A loops the thread around the gripper of robot B; in the
second stage (2, 3), robot B grasps the thread and closes its
grippers; in the third stage (4), both robot arms are moved away from each other to tighten the knot. Image © 2010 IEEE
and actual tissue provided encouraging results, albeit the
tissue presented difficulties in the experiments, such as
diffraction of the laser and variable stiffness.
Visual servoing has also been deployed by other
researchers for the automation of robotic MIS suturing.
Hynes et al. have used two seven-degree-of-freedom
(DOF) PA-10 (Mitsubishi Heavy Industries Ltd, Tokyo,
Japan) robotic manipulators to perform knot tying, using
image feedback from a stereo camera (43). The robots
were mounted with laparoscopic graspers which were
marked with an optical pattern. This pattern was based
on Gray codes and was used to infer the position and
orientation of the tools. The system was used to replicate
prerecorded knot-tying movements, although some initial
steps were done manually, e.g. passing the needle through
a test foam surface. User input was also required in the
beginning in order to indicate points of interest (position
of the needle and tail). In the experiments the robot was
able to tie a knot in approximately 80 s. Failures were
also reported, mainly due to incorrect grasping of the
needle, slipping, etc. Suturing in robotic microsurgical
keratoplasty has been reported by Zong et al. (44),
using a custom suturing end-effector mounted on a six-DOF robotic manipulator. The end-effector includes a
one-axis force microsensor and performs the motion of
tissue piercing and subsequently pulling the thread out.
Vision feedback was provided through two CCD cameras
mounted on a stereo surgical microscope. The visual
servo controller autonomously guided the needle tip with
great precision, to a point specified by the user (the
point of needle entry). However, no complete suturing
experiments were reported in the study.
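The common thread in these systems, image-based visual servoing, reduces in its simplest form to a proportional law on the image-space error. The camera model here is trivially simplified and all coordinates are illustrative:

```python
# Minimal image-based visual-servoing sketch: drive a needle tip towards
# a user-selected target pixel with a proportional law on the image-space
# error.  The "camera" maps motion to pixels one-to-one (a toy model).

def visual_servo(tip, target, gain=0.4, steps=30):
    """tip, target: (x, y) image coordinates; returns the tip path."""
    path = [tip]
    for _ in range(steps):
        error = (target[0] - tip[0], target[1] - tip[1])
        if max(abs(error[0]), abs(error[1])) < 0.5:   # sub-pixel error: done
            break
        tip = (tip[0] + gain * error[0],              # move a fraction of the
               tip[1] + gain * error[1])              # error each frame
        path.append(tip)
    return path

path = visual_servo((10.0, 20.0), (200.0, 150.0))
final = path[-1]
```

Each frame shrinks the image error by a constant factor, so the tip converges geometrically; real controllers must additionally map image error through the camera and robot kinematics.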
Cochlear implantation
Cochlear implantation has become widespread for
patients with severe hearing impairment in the last
Figure 8. The micro-drilling surgical robotic system used by
Taylor et al. (46) for robotic cochleostomy. Reproduced by
permission of SAGE Publications Ltd
20 years. Surgery in the middle ear requires delicate
movements, since the space is confined and involves
sensitive structures. Cochleostomy is a basic step in the
procedure, where a hole is drilled on the outer wall
of the cochlea, through which the electrode implant
is inserted. Perforation of the endosteal membrane by
the drill may result in contamination of the endolymph
and perilymph with bone dust, increase the risk of
postoperative infection and reduce the residual hearing.
To address this problem, Brett et al. have developed
an autonomous micro-drilling robot performing the
cochleostomy (45,46). The robot consists of the micro-
drill mounted on a linear guide, attached to a passive
robotic arm (Figure 8).
During the operation, the surgeon moves the arm,
placing it at the correct pose with the drill facing towards
the desired trajectory. Following that, the arm is locked
and the drill autonomously creates the hole, leaving the
endosteal membrane intact, which is then opened by a
knife. The controller monitors the force and the torque
transients exerted on the tool tip and, by analysing
them, detects when breakthrough is about to occur,
thus stopping the drilling (Figure 9). Clinical experiments
(47,48) showed promising results.
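The breakthrough-detection idea, stopping the feed when the force transient drops sharply after a sustained rise, can be sketched on synthetic data. The threshold and the force trace are invented; the actual controller analyses force and torque transients in far more detail:

```python
# Toy sketch of drill-breakthrough detection from a force signal: while
# the bit cuts bone the force rises; an abrupt drop relative to the
# running peak signals imminent breakthrough, so the feed must stop.
# Threshold and data are synthetic.

def detect_breakthrough(forces, drop_ratio=0.5):
    """Return the index where force falls below drop_ratio * running peak,
    or None if no breakthrough-like drop is seen."""
    peak = 0.0
    for i, f in enumerate(forces):
        if f > peak:
            peak = f                      # still cutting: track the peak
        elif peak > 0 and f < drop_ratio * peak:
            return i                      # sharp drop: stop the feed here
    return None

# Rising force while cutting bone, then a sharp drop near the endosteum.
force_trace = [0.2, 0.5, 0.9, 1.3, 1.6, 1.7, 1.75, 0.6, 0.1]
stop_index = detect_breakthrough(force_trace)
```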
A different approach was put forth by Majdani
et al., aiming at minimally invasive robotic cochleostomy
(49,50). The main purpose here was to create an access
canal to the inner ear and perform the cochleostomy using
an autonomous robot, without performing mastoidectomy
and exposing critical anatomical structures. To this end,
a specially designed robot was constructed comprising
a KR3 (KUKA GmbH, Augsburg, Germany) six-DOF
robot with a surgical drill serving as its end effector
(Figure 10).
The system used a camera along with special markers in
order to perform localization and pose estimation for the
robot as well as the surgical field (patient). Preoperative
planning using patient CT images was also used for
the calculation of the optimal drilling trajectory, taking into consideration the distance from critical structures
such as the facial nerve, the chorda tympani, etc. The system
operated in a closed-loop fashion, using image feedback
for calculating the error signals of the robot relative to the
reference trajectory. Thereafter, the robot autonomously
drilled the canal and the cochlea according to the
preoperative plan (Figure 11).
Figure 9. View of a cochleostomy with the drill bit retracted and the endosteal membrane intact, using the micro-drilling surgical robot (46). Reproduced by permission of SAGE Publications Ltd
Figure 10. View of the robotic set-up used by Majdani et al. (49,50) for minimally invasive robotic cochleostomy. Reproduced by permission of Springer Science+Business Media
Figure 11. Experiment in minimally invasive robotic cochleostomy using a temporal bone (49,50). Fiducial markers placed on the bone are used for localization and registration. Optical markers are also placed on the robot tip and the temporal bone holder. Reproduced by permission of Springer Science+Business Media
Tests were performed in
10 cadaveric specimens with positive results. However,
even though the canal was opened in all experiments, in
one the cochleostomy was not completely performed,
which can be attributed to noise error in the visual
tracking system. Another drawback of image registration
is that the fiducial markers must always be visible from
the camera, something which cannot be guaranteed in
the operating room. However, the results show great
promise and, combined with the same team's work
on automating cochlear implant insertion described
in (51), fully autonomous robotic cochlear implantation
might be just around the corner.
Ultrasound guidance for percutaneous interventions
Ultrasonography is a popular imaging modality for
visualizing percutaneous body structures, since it is
cheap, non-invasive, real-time and with no known long-
term side-effects. Megali et al. (52) describe one of the
earliest attempts to guide a robot using two-dimensional
(2D) ultrasound guidance for biopsy procedures. Their
system consisted of a manipulator mounted with a
biopsy needle at its end-effector, an ultrasound probe
and a 3D localizer. These components were integrated
into a workstation, fusing the data and providing a
graphical interface to the user. The surgeon selected
the biopsy target and the position of needle insertion
into the body by clicking on the ultrasound image in
the computer. The robot automatically acquired the
correct pose so as to provide linear access for the needle to the target point, although the actual bioptic
sampling was performed manually. Tests in a water
tank showed an average accuracy of 2.05 mm, with
a maximum error of 2.49 mm. A similar approach to
ultrasound-guided robotic transperineal prostate biopsy
was presented by Phee et al. (53), where a transrectal
ultrasound probe was used to scan the prostate and create
a 3D model with the help of a urologist. Subsequently,
the entry trajectory was planned and the robotic biopsy
system would configure itself to the correct position. The
actual needle insertion was performed manually. In vivo
experiments were demonstrated, with a placement error reaching approximately 2.5 mm.
Given their ability to reconstruct interesting structures,
such as cysts, in three dimensions, 3D ultrasound (3DUS)
devices have also been used in order to provide guidance
to biopsy robots. Since 2006, the Ultrasound Transducer
Group at Duke University has performed several feasibility
studies regarding the use of real-time 3D ultrasound for
the autonomous guidance of surgical robots, involving
breast biopsy (54), shrapnel detection (54–56) and
prostate biopsy (57). The first study investigated the
guidance of a three-DOF Gantry III (Techno Inc., New
Hyde Park, NY, USA) Cartesian robot, using real-time
3D ultrasound (58). Three experiments were performed in order to assess the positional accuracy. In the first
two, the targets were submerged into a water tank, while
the ultrasound probe performed scanning (the targets
consisted of wire models). After the coordinates of the
targets had been manually extracted, they were sent to
the robot, which moved a probe needle towards them.
In the third experiment, a hypo-echoic lesion inside a
tissue-mimicking slurry was used. An in vivo experiment,
using a canine cadaver, was also performed. The goal
was to puncture a desired position on the distal wall of
the gall bladder. The accuracy error of the system was
approximately 1.30 mm. However, the system did not
operate in a closed loop and relied on user input for
target acquisition.
The latter was investigated by Fronheiser et al. (59),
using the same in vitro experimental set-up, but the
process was now streamlined. The 3DUS data were
captured by the probe and were subsequently transferred
to a MATLAB program, which analysed them and
automatically extracted the goal position. The appropriate
movement command was then passed to the robot without any human intervention. Breast cyst biopsy was
successfully demonstrated using a 2 cm spherical anechoic
lesion (a water-filled balloon) in a tissue-mimicking
phantom, as well as in excised boneless turkey breast
tissue (55,60) (see Figure 12).
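The automatic target-extraction step can be illustrated with a simple, assumed scheme: an anechoic cyst appears as a dark blob in the 3D US volume, so thresholding the volume and taking the centroid of the dark voxels yields a goal position for the robot. This is a sketch of the idea only, not the MATLAB routine used in the study; the array layout, threshold and names are illustrative.

```python
import numpy as np

def extract_target(volume, dark_threshold=0.2):
    """Centroid (z, y, x) of voxels darker than the threshold --
    the presumed cyst centre -- or None if no voxel qualifies."""
    mask = volume < dark_threshold
    if not mask.any():
        return None
    return tuple(float(axis.mean()) for axis in np.nonzero(mask))

# Bright tissue phantom with a dark 'cyst' centred on voxel (4, 5, 6):
vol = np.ones((9, 11, 13))
vol[3:6, 4:7, 5:8] = 0.0
goal = extract_target(vol)
```

The returned voxel coordinates would then be converted to robot coordinates before the movement command is issued.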
Automatic guidance using real-time 3D catheter trans-
ducer probes for intravascular and intracardiac applica-
tions was also investigated in (59) and further analysed
in (61). Two experiments were performed using a water
tank, while a third one involved a bifurcated abdominal
aortic graft. In all three the goal was to drive the robot
probe to touch a needle tip at a specified position, using
the catheter transducer for 3D imaging. Error measure-
ments for the first two experiments gave an error of 3.41
and 2.36 mm, respectively. No measurements were taken
for the third. Note that the MATLAB position-extraction
algorithm was not used and the needle position was
extracted manually from the 3D data.
Prostate biopsy using a forward-viewing endoscopic
matrix array and a six-DOF robot has also been
demonstrated (55,57).
Figure 12. Experiment in US-guided robotic biopsy, described by Ling et al. (55). A simulated cyst is placed inside a boneless turkey breast, with the biopsy robot using real-time US guidance. (a) 3D-rendered image of the cyst in the turkey breast; (b) B-scan of the cyst. (c–e) Simultaneous B- and C-scans, respectively, of the needle tip penetrating the cyst. Image © 2009 IEEE
Figure 13. A trial in robotic prostate multiple-core biopsy using 3D US guidance (57). The tissue phantom is a turkey breast divided into eight sectors. The robot has to stick each one successfully. Arrows indicate placement of the needle tip. In (h), the needle is placed in the correct zone but the needle tip has failed to penetrate the prostate surface. Image © 2009 IEEE
The robot's gripper held the
transducer, which was equipped with an echogenic
biopsy needle and targeted a turkey breast acting as
the prostate phantom. The 3DUS produced a volumetric
representation of the phantom, which was then passed to
a program to automatically calculate its coordinates. The
voxels of the prostate phantom were divided into eight
equal sectors, while the robot was expected to sample
each one of them (Figure 13). The robot autonomously
performed the biopsy with a success rate of 92.5%.
Since ultrasound can provide real-time vision feed-
back, visual servoing has been investigated by several
researchers as a means of guiding a robot. Vitrani et al.
(62) describe the use of 2D ultrasound for the guidance of a
MIS robot in heart mitral valve repair surgery. The robot,
holding surgical forceps, is introduced through a trocar
in the patient's torso, while an ultrasound probe, placed
in the oesophagus, provides 2D imaging. The forceps
intercept the echographic plane at two points. Keeping
the probe still, the surgeon designates new coordinates
for the forceps on the ultrasound image and the visual
servo controller is expected to carry out the command.
Simulation and in vitro results show an exponential
convergence and robustness of the control, which is further
exemplified by in vivo experiments on porcine models
(63). Similar experiments are presented by Stoll et al.
(64), using ultrasound images to get a robotic manipulator
to touch a target (grape) submerged in a compound
of oil and water. The reported success rate was 88%.
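The exponential convergence reported for such image-based servoing follows directly from a proportional control law, sketched below under strong simplifications: the 'robot' is reduced to the two image points where the forceps intercept the echographic plane (a real controller maps image errors through the robot Jacobian), and all names and gains are assumptions.

```python
def servo_step(points, targets, gain=0.5):
    """One proportional update: move each observed instrument point a
    fraction `gain` of the way toward its designated target (image
    coordinates in the echographic plane)."""
    return [(p[0] + gain * (t[0] - p[0]), p[1] + gain * (t[1] - p[1]))
            for p, t in zip(points, targets)]

# Two forceps/plane intersection points converging on the designated
# coordinates -- the error shrinks geometrically (i.e. exponentially).
pts = [(10.0, 20.0), (30.0, 25.0)]
goal = [(12.0, 18.0), (28.0, 25.0)]
for _ in range(20):
    pts = servo_step(pts, goal)
```

Each iteration halves the remaining image error, which is exactly the discrete-time analogue of the exponential convergence observed in the cited experiments.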
Percutaneous cholecystostomy via robotic needle inser-
tion is described in Hong et al. (65), where the authors
used a five-DOF robot to reach the gallbladder. A notable
feature in this work was the compensation of movement
and deformation of the target by involuntary motions,
such as respiration. The visual controller analysed the
ultrasound images and updated the correct insertion path
in real time. However, during the actual insertion the sub-
ject had to hold his/her breath to stop the deformation.
The problem of tumour mobility in breast biopsy was
also considered by Mallapragada et al. (66), describing
an interesting system which manipulates the ultrasound probe as well as the breast, in order to compensate for
out-of-plane motions and keep the tumour visible.
Visual servoing using 3DUS was first demonstrated by
Stoll et al. (67). The authors used a PHANToM robot,
mounted with a hollow steel cannula at its end-effector,
and a 3DUS scan head in order to localize the instrument
and provide pose information. The instrument featured a
passive marker at its end, which enabled the estimation
of position and orientation. The US data were fed to
a PC, which calculated the error of the instrument's
tip relative to a goal position and issued movement commands
through a linear PD controller. Experiments showed a
small position error, although target speeds above 3 mm/s could destabilize the
system. A faster visual servo controller utilizing 3DUS was
presented by Novotny et al. (68), by means of performing
much of the image processing on a graphics-processing
unit (GPU). GPUs are specially designed to perform very
fast computations in image manipulation, and thus the
control loop was able to attain a speed of up to 25 Hz. A
different approach to ultrasound visual servo control has been described by Sauvée et al. (69), deploying a non-
linear model predictive controller (NMPC). The NMPC
was used to control a Mitsubishi PA10 robot, respecting
system constraints such as actuator saturation, joint limits
and ultrasound range.
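A hedged sketch of the simplest controller family mentioned above: a linear PD law acting on the US-derived tip error, with the command clipped to an actuator limit as a crude stand-in for the constraints an NMPC would enforce explicitly. Gains, limits and the function name are illustrative assumptions, not the published parameters.

```python
def pd_command(error, prev_error, dt, kp=2.0, kd=0.3, v_max=5.0):
    """Velocity command (mm/s) from the current and previous tip
    error (mm), clipped to a symmetric actuator limit."""
    derivative = (error - prev_error) / dt
    v = kp * error + kd * derivative
    return max(-v_max, min(v_max, v))

# Steady 1 mm error -> proportional-only command; a 10 mm error
# saturates at the actuator limit instead of demanding 20 mm/s.
v_small = pd_command(1.0, 1.0, 0.02)    # 2.0 mm/s
v_large = pd_command(10.0, 10.0, 0.02)  # clipped to 5.0 mm/s
```

The hard clip is the simplest way to respect a saturation limit; a predictive controller instead anticipates the limit inside its optimization, which is the advantage of the NMPC approach.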
Motion compensation
Motion compensation refers to the apparent cancellation
of organ motion in the surgical field through image
processing and robot control algorithms. Typically, the
motion of the field (e.g. heart beat, respiratory motion, etc.) is captured by an imaging device in real time,
is rectified and presented to the surgeon as still.
Concurrently, the robot maintains a steady pose with
respect to the field, essentially tracking its motion
and moving along with it. This function, however, is
transparent to the surgeon on the master end of the
robotic telesurgery system, who effectively operates on
a static image without perceiving the motion of the
robot on the slave end. This approach is particularly
interesting in off-pump coronary artery bypass graft
surgery (CABG), because it can obviate the need for
mechanical and vacuum stabilizers. The control scheme
falls under the shared control paradigm, since both the
controller (software) and the surgeon use the robot at the
same time. Motion compensation presents challenges on
two ends. The first is the image capture and rectification
of the motion itself (although different modalities, such
as ultrasound, have also been used), as it can present
very fast dynamics (e.g. the beating heart). This mandates
the use of high-speed cameras (in the range 500–1000 fps) and
increased processing power. At the other end, the control
of the robot is also demanding because of having to track
very fast-moving targets.
Among the first attempts to develop a motion
compensator for beating heart surgery was the work presented by Nakamura et al. (70), who introduced the
notion of heartbeat synchronization. The authors used a
six-DOF robot and a high-speed camera at 995 fps in order
to track a point on the image, created using a laser pointer.
The image was moved in the image buffer so as to keep
the point at the same position, thus no rectification was
performed. In vitro and in vivo experiments on a porcine
beating heart were positive, giving a maximum tracking
error of approximately 0.5 mm. Tracking of a beating heart was also investigated by Ginhoux et al. (71). The
authors placed four light-emitting diodes (LEDs) on the
heart surface in order to capture the motion with a 500 fps
camera. Model predictive control (MPC) algorithms were
also used, employing a heart-beat model for reference.
In vivo tests on a pig heart produced encouraging results
of low variance tracking error with a median value of 0.09
and 0.25 px on the x and y axes, respectively. MPC was
further developed in (72,73). Motion prediction was also
investigated in (74), using a least squares approach and an
artificial neural network implementation. An interesting
feature in this study was the ability to predict the motionof visually occluded parts of the heart and the fusion
of biological signals (ECG and RSP) in the estimation
algorithms. The algorithms, however, were not tested on
a real robot.
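The least-squares prediction cited above can be sketched as fitting a truncated Fourier series at an (assumed known) heart rate and extrapolating one step ahead. The rate, number of harmonics and synthetic signal below are illustrative assumptions, not the published method's parameters.

```python
import numpy as np

def predict(t, y, t_next, f0=1.2, harmonics=2):
    """Least-squares fit of y(t) to a truncated Fourier series at base
    frequency f0 (Hz), evaluated at the future instant t_next."""
    def basis(ts):
        cols = [np.ones_like(ts)]
        for k in range(1, harmonics + 1):
            cols.append(np.cos(2 * np.pi * k * f0 * ts))
            cols.append(np.sin(2 * np.pi * k * f0 * ts))
        return np.column_stack(cols)

    coeffs, *_ = np.linalg.lstsq(basis(t), y, rcond=None)
    return float((basis(np.array([t_next])) @ coeffs)[0])

# Synthetic 1.2 Hz 'heartbeat' displacement sampled for two seconds,
# predicted one step beyond the observation window.
t = np.linspace(0.0, 2.0, 200)
y = 1.5 * np.sin(2 * np.pi * 1.2 * t)
y_hat = predict(t, y, 2.01)
```

A model of this kind also explains the occlusion robustness noted in the study: once the coefficients are fitted, the motion can be evaluated even when the tracked point is temporarily invisible.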
Use of biological signals for reference estimation in MPC
was also treated by Bebek et al. (75–77). However, the
authors did not employ vision tracking but used instead
a sonomicrometry system to collect motion data from the
heart. This bypassed the problem of visual occlusion of the
surgical field by the robotic manipulators or other surgical
tools. Experiments with a PHANToM robot produced an
RMS error of approximately 0.6 mm in the three axes.
3DUS-guided motion compensation for beating heart mitral repair was presented by Yuen et al. (78–81). The
authors were able to control a one-DOF linear guide for
an anchoring task, using feedback from a 3DUS system.
Due to the latency of the capturing process, predictive
filters were also employed. Experiments showed an RMS
synchronization error of 1.8 mm. Based on these results,
Kesner and Howe have also presented an ultrasound-
guided cardiac catheter utilizing motion compensation
(82).
A different approach was presented by Cagneau et al.
(83), using force feedback from a force sensor mounted
on an MC²E robot for motion compensation. Under the
assumption that the motion is periodic, the authors used
an iterative learning controller, along with a low-pass
filter, to cancel the motion. In vitro results showed the
potential of this approach; however, the assumption of
periodicity was an oversimplification of the actual motion
of the heart. A more robust approach to motion estimation
was discussed by Duindam and Sastry (84), using ECG
and respiratory signals in order to model and estimate the
full 3D motion of the heart's surface.
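The iterative-learning idea used for periodic force-based compensation admits a very small sketch: because the disturbance repeats each cycle, the feedforward command for the next cycle is the previous command plus a gain-scaled copy of the previous cycle's error, passed through a low-pass filter. The gains and the crude two-tap filter below are illustrative assumptions, not the published controller.

```python
def ilc_update(u_prev, e_prev, learn_gain=0.8, smooth=0.5):
    """One ILC iteration over a full cycle of samples:
    u_next = Q(u_prev + L * e_prev), with Q a crude two-tap
    low-pass filter (the cycle is periodic, so index -1 wraps)."""
    raw = [u + learn_gain * e for u, e in zip(u_prev, e_prev)]
    return [smooth * raw[t] + (1.0 - smooth) * raw[t - 1]
            for t in range(len(raw))]

# Toy demonstration: a unit plant (output = command) rejecting a
# constant periodic disturbance d over repeated cycles.
d = [1.0] * 8
u = [0.0] * 8
for _ in range(30):
    e = [di - ui for di, ui in zip(d, u)]   # cycle tracking error
    u = ilc_update(u, e)
```

The cycle error shrinks geometrically toward zero here, which also illustrates the limitation noted in the text: the scheme only cancels what actually repeats from cycle to cycle.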
Optical coherence tomography guidance for vitreoretinal surgery
Optical coherence tomography (OCT) is a relatively new
optical tomographic technology which uses light in order
to capture 2D and 3D images of optical scattering media,
at micrometre (µm) resolution (85). OCT has achieved real-time 3D
modes and is mostly used in retinal surgery, as well
as optical biopsies. Due to its unique features, OCT
has recently been integrated into a vitreoretinal robotic
surgery system, providing real-time guidance. This work
was described by Balicki et al. (86), using the peeling of epiretinal membranes as a reference application. The
authors modified a vitreoretinal pick (25 gauge), passing
through the cannula a single optical fibre to act as the
OCT probe. The fibre was connected to an OCT system,
and was mounted onto a high precision Cartesian robot.
A force/torque-sensing handle was also attached to the
robot for hands-on control.
The system accommodated three tasks: a safety
barrier task, much like a hard virtual fixture, where
the robot constrained the probe from approaching the
retinal surface closer than an imposed limit; a surface
tracking task, where the robot tracked the motion of the surface, keeping a steady distance of 150 µm; and
a targeting task, where the robot would insert the
pick in a user-designated location. In vitro experiments
produced encouraging results, although further research
is also needed to overcome limitations in this study.
For example, the probe was always perpendicular to
the surface, while in actual surgery oblique angles are
common. Better controller design is also important, in
order to minimize overshoot and tracking errors.
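The safety-barrier behaviour described above is essentially a one-dimensional virtual fixture and can be sketched as a clamp on the commanded advance. The function and values below are illustrative assumptions, not the published controller; the 150 µm figure merely echoes the surface-tracking distance in the text.

```python
def clamp_advance(distance_to_surface_um, commanded_advance_um,
                  barrier_um=150.0):
    """Largest permitted tool advance (micrometres): the tip is never
    allowed closer to the surface than `barrier_um`."""
    allowed = max(0.0, distance_to_surface_um - barrier_um)
    return min(commanded_advance_um, allowed)

# 500 um from the retina: a 400 um advance is trimmed to 350 um;
# already inside the barrier, any advance is refused outright.
safe = clamp_advance(500.0, 400.0)    # 350.0
blocked = clamp_advance(120.0, 50.0)  # 0.0
```

In the actual system the distance comes from the fibre-optic OCT A-scan in real time, so the barrier moves with the measured surface rather than being a fixed plane.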
Clinical Applications
Autonomous and semi-autonomous systems have already
been used in neurosurgery and orthopaedics, mainly
because the bony framework of these operations offers
a good material for stereotactic orientation of the
instruments. At the same time, many projects are
still in the experimental phase for thoracoscopic and
laparoscopic surgery because, as mentioned previously,
tissues in these settings are deformable and the
preoperative images may differ from the intraoperative
conditions.
Examples of orthopaedic robots
Replacement of hip joints that have failed as a result
of disease or trauma is very common. In the current
manual procedure, the cavity is cut by the surgeon using
handheld broaches and reamers forced into the femur,
which leaves a rough and uneven surface. In order
to obtain higher precision, research led to a robotic
approach for sculpting the femoral cavity (87). The
Robodoc system was developed in the mid-1980s and
is now widely commercially available (88). Clinical trials
have confirmed that the femoral pocket is more accurately
formed using the Robodoc. Also, because of the need to provide precise numerical instructions to the robot,
preoperative CT images are used to plan the bone-milling
procedure. This gives the surgeon an opportunity to
optimize the implant size and placement for each patient.
Titanium pins are used in the femoral condyles and
greater trochanter for registration purposes. The control
of Robodoc is essentially autonomous: the robot follows
the planned cutting paths without the surgeon's guidance.
After the pocket is milled, the surgeon continues as in the
manual procedure (87).
Recent reports on approximately 130 hip replace-
ments from an ongoing clinical study in the USA used
radiographs to compare Robodoc-treated patients with a
control group (89). The Robodoc cases showed signifi-
cantly less space between the prosthetic and the bone.
Placement of the implant was also improved. Further-
more, no intraoperative femoral fractures occurred for
the Robodoc group, whereas three were observed in the
control group. The results also showed improved pros-
thetic fit, and the overall complication rate was reduced
to 11.6% from the reported manual procedure rates of
16.6–33.7%. In addition, the surgical time decreased
dramatically as surgeons gained experience with the system
and modified the procedure: the first 10 cases averaged
220 min, whereas the current level is 90–100 min.
Robodoc has succeeded in improving fit. However, a
number of disadvantages remain to be overcome: the
traumatic procedure of pin placement and a slow pin-
finding registration process. Efforts aim toward reducing
the number of pins and even eliminating them altogether,
using other registration techniques. Many other issues
arise from the process of fixing the femur to the base of
the robot, which is time-consuming and may also be the cause of postoperative pain. In relation to this, motion
of the bone within the fixator during cutting can be a
major problem. Several incidents of femur motion can
extend the operation significantly. Better fixation or con-
tinuous monitoring and registration should be further
developed (87). Finally, although prosthetic fit and posi-
tioning appear to be improved, it is crucial to address
the question of whether this improves treatment in the
long term. More studies showing significant correlation
between implant fit and long-term outcome are expected
in the future (88).
In a large consecutive series of 143 total hip replacements (128 patients) using the Robodoc system,
the authors concluded that the system achieves equal
results as compared to a manual technique. However,
there is a high number of technical complications directly
or indirectly related to the robot (90). Another recent
study compared a non-fiducial based surface registration
technique (DigiMatch) with the conventional locator pin-
based registration technique in performing cementless
total hip arthroplasty (THA) using the Robodoc system.
The authors concluded that the advantages of the
DigiMatch technique were the lack of need for prior
pin implantation surgery and no concern for pin-related knee pain. Short-term follow-up clinical results showed
that DigiMatch Robodoc THA was safe and effective (91).
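The pin-based registration step that Robodoc-style systems depend on is, at its core, the classical point-based rigid alignment problem, solvable in closed form by SVD (the Arun/Kabsch method). The sketch below is a generic textbook illustration of that method, not Robodoc's or DigiMatch's actual code; the fiducial coordinates are invented.

```python
import numpy as np

def register(ct_pts, robot_pts):
    """Closed-form rigid registration (Arun/Kabsch): return (R, t)
    such that robot_pts ~= ct_pts @ R.T + t."""
    a, b = np.asarray(ct_pts, float), np.asarray(robot_pts, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

# Invented fiducials: recover a known 30-degree rotation about z
# plus a translation from their CT and robot coordinates.
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
fid_ct = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0],
                   [0.0, 40.0, 0.0], [0.0, 0.0, 40.0]])
fid_robot = fid_ct @ R_true.T + t_true
R, t = register(fid_ct, fid_robot)
```

A surface-matching scheme such as DigiMatch replaces the handful of pins with many digitized bone-surface points, but it is solving essentially the same alignment problem without the prior pin-implantation surgery.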
Total hip arthroplasty
The HipNav system for accurate placement of the
acetabular cup implant is being developed (92). The
system consists of a preoperative planner, a range-
of-motion simulator and an intraoperative tracking
and guidance system. The range-of-motion simulator helps surgeons to determine the orientation of the
implants at which impingement would occur. Used in
conjunction with the planning system and preoperative
CT scans, the range-of-motion simulator permits surgeons
to find the patient-specific optimal orientation of the
acetabular cup (88). A 2003 study aimed towards a non-
invasive registration of the bone surface for computer-
assisted surgery (CAS), by developing an intraoperative
registration system using 2D ultrasound images. The
approach employs automatic segmentation of the bone
surface reflection from ultrasound images tagged with
the 3D position to enable the application of CAS tominimally invasive procedures. The authors concluded
that ultrasound-based registration eliminates the need
for physical contact with the bone surface, as in point-
based registration (93). Navigational systems are under
development for various knee-related procedures, such
as anterior cruciate ligament replacement. Most robotic
assistant systems for the knee, however, are aimed at
total knee replacement (TKR) surgery. This procedure
replaces all of the articular surfaces with prosthetic
components. Several robotic TKR assistant systems have
been developed to increase the accuracy of the prosthetic
alignment. Many of these systems include an image-based
preoperative planner and a robot to perform the bone
cutting (94).
Spine surgery
Spinal fusion procedures attach mechanical support
elements to the spine to prevent relative motion of
adjacent vertebrae. Current research in spinal surgery
focuses on image-guided passive assistance in aligning
the hand-held surgical drill. Preoperative CT images are
integrated with tracking devices during the procedure.
Targets may be attached to each vertebra to permit constant optical motion tracking during the procedure.
Using these techniques, Merloz et al. reported a far
lower rate of cortical penetration for computer-assisted
techniques compared with the manual procedure (95).
Work is under way on the use of intraoperative ultrasound
or radiograph images to register the CT data with
the patient (96). The screws may then be inserted
percutaneously, eliminating the need for exposing the
spine.
Examples of neurosurgical robots
Image-guided techniques were applied for the first time
in the field of neurosurgery. Just prior to surgery,
stereotactic frames were attached to the patient's head
before the imaging process and remained in place
throughout the operation. The instruments were guided
by calculating the relationship between the frame and
lesion observed in the image (87). Frameless stereotaxy
is a newer image-guided approach, using optical trackers
for navigation and less invasive fiducial markers or video images for registration of the instruments (97,98).
In the past 15 years, a number of robotic systems
have been developed to enhance stability, accuracy
and ease of use in neurosurgical procedures (99–101).
In spite of the rigid cranium, which serves as a
good reference for image-guided surgery,
brain tissue itself is soft and prone to unwanted
shifting during the procedure. In effect, this alters the
spatial relationship between the preoperative imaging
examination and the actual patient anatomy. Deformable
templates for non-rigid registration have been proposed
to overcome this limitation. These templates are often based on biomechanical models of soft tissue (102).
Alternatively, the use of intraoperative imaging would
also permit continuous monitoring of brain anatomy and
instruments. This would require compatible machinery
which would integrate both the imaging data and
the space constraints, i.e. robotic manipulators (103).
The StealthStation (Medtronic, MN, USA) visualizes
both instruments and anatomy in real time, so that
surgical actions can be performed accordingly. Intraoperative
navigation allows for less invasive surgery and more
precise localization without the need of continuous
intraoperative imaging. Another prominent neurosurgical
robot is the Neuromate (Renishaw plc, Gloucestershire,
UK). Neuromate (Figure 14) is a stereotactic robot
used in various functional neurosurgical procedures,
such as deep brain stimulation (DBS) and stereotactic
electroencephalography (SEEG). It can also provide
stereotaxy in neuro-endoscopy, radiosurgery, biopsy and
transcranial magnetic stimulation (TMS), supporting both
frame-based and frameless stereotaxy.
Figure 14. The Neuromate stereotactic neurosurgical robot.
Reproduced by permission of Renishaw plc
Stereotactic radiosurgery
Radiosurgery aims to administer high doses of radiation
in a single session to a small, critically located intracranial
volume without opening the skull. The goal is the
destruction of cells in order to halt the growth
or reduce the volume of tumours. Radiosurgery has
become an important treatment alternative to surgery
for a variety of intracranial lesions (104). Stereotactic
radiosurgery (SRS) in selected patients with pituitary
adenoma delivers a favourable tumour growth control,
preserving the functional status. Thus, it has become
an attractive treatment modality and is often used
instead of external beam radiotherapy (104107).
Current radiosurgery systems include the Gamma Knife,
manufactured by Elekta (based in Sweden); Novalis,
manufactured by BrainLabs (based in Germany); and
CyberKnife, manufactured by Accuray (based in the
USA). CyberKnife is the name of a frameless robotic radiosurgery system invented by John R. Adler, Stanford
University Professor of Neurosurgery and Radiation
Oncology (108,109).
Cyberknife
Cyberknife (Figure 15) uses a miniature linear accelerator
(LINAC), which is mounted on a robotic arm to deliver
radiation to the selected target. A real-time targeting
system eliminates the need for the previously used
head frame. The position of the patient is located by
image guidance cameras; the robotic arm is guided to precisely deliver small beams of radiation that converge
at the tumour from multiple angles. The cumulative
dose is high enough to destroy the cancer cells, while
radiation exposure to surrounding healthy tissue is
minimized. The level of accuracy achievable by this
system allows higher doses of radiation to be used,
resulting in greater tumour-killing effect and a higher
likelihood of radiosurgical success. During the actual
treatment, patient movement is monitored by the system's
low-dose X-ray cameras. The CyberKnife's computer-
controlled robotic arm compensates for any changes in
tumour position during treatment, using the Synchrony
respiratory tracking system (110).
Figure 15. The CyberKnife system. Reproduced by permission of Accuray Inc
Radiosurgery achieves equivalent
growth control, hormonal remission and neurological
complication rates when compared to conventional
radiotherapy, but the damage to surrounding tissues is
less.
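The respiratory-tracking idea can be sketched as a correlation model: the continuously observed external chest marker is related to intermittently X-ray-imaged tumour positions by a fitted model, which then lets the arm infer tumour position between acquisitions. The simple linear least-squares model and the data below are illustrative assumptions, not Accuray's proprietary algorithm.

```python
import numpy as np

def fit_correlation(marker, tumour):
    """Fit tumour ~= a * marker + b by least squares; return (a, b)."""
    A = np.column_stack([marker, np.ones(len(marker))])
    (a, b), *_ = np.linalg.lstsq(A, tumour, rcond=None)
    return float(a), float(b)

# Paired samples taken at the sparse X-ray imaging instants:
marker = np.array([0.0, 2.0, 4.0, 6.0])   # external chest marker (mm)
tumour = np.array([1.0, 2.0, 3.0, 4.0])   # imaged tumour position (mm)
a, b = fit_correlation(marker, tumour)
tumour_now = a * 3.0 + b                   # inferred between images
```

The model is refitted whenever a new X-ray pair arrives, so drifting breathing patterns are tracked without continuous imaging dose.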
One of the best indications for radiosurgery of pituitary adenomas is residual or recurrent tumour that is not safely
removable using microsurgical techniques. In addition,
Cyberknife can easily apply the advantages of multisession
radiosurgery for perioptic lesions, due to the lack of need
for stereotactic frame fixation. This is one of the greatest
advantages of CyberKnife. In fractionated radiation, the
tumour control rate is in the range 76–97% (111). The
tumour control rate for pituitary adenomas following
treatment with Gamma Knife is in the range 93.3–94%
(112). Endocrinopathies respond well with Gamma Knife
at a ratio of 77.7–93% and the normalization rate
is in the range 21–52.4% (112,113). In fractionated
radiation, endocrinological improvement is 38–70%
(114,115). As a result, current results of Cyberknife
(endocrinological improvement 100%, endocrinological
normalization 44%) are similar to those of Gamma Knife
and slightly superior to those of fractionated radiation.
Complication rate ranges for Gamma Knife and
fractionated radiation (most commonly visual loss)
have been 0–12.6% and 12–100%, respectively (116).
Complication rates with CyberKnife (visual disturbance 7.6%) were similar
to those of Gamma Knife and much lower than those
of fractionated radiation. There were no incidences of
pituitary dysfunction, probably due to the multisession
radiosurgery.
Indications for spinal radiosurgery
Currently evolving indications for spine radiosurgery
using CyberKnife include lesions of either benign
or malignant histology as well as spinal vascular
malformations (117). The most important indication for
the treatment of spinal tumours is pain, and spinal
radiosurgery is most often used to treat tumour pain.
Radiation is well known to be effective as a treatment
for pain associated with spinal malignancies, with a 92% improvement in pain after CyberKnife therapy. This
beneficial result includes radicular pain caused by tumour
compression of adjacent nerve roots (117). Another
indication concerns partially resected tumours during
open surgery. In that case, fiducials can be left in place
to allow for postoperative radiosurgery treatment to the
residual tumour. Such treatments can be given early in the
postoperative period, as opposed to the usual delay before
the surgeon permits external beam irradiation (117).
CyberKnife radiosurgery offers the ability to deliver
homogeneous radiation doses to non-spherical structures,
such as the trigeminal nerve. Preliminary results have
been reported by Romanelli et al. for the treatment of patients with trigeminal neuralgia (118). Although a 70% short-term response rate has been described, further studies are needed to establish long-term safety and efficacy (119).
Discussion
Clinical implementation and acceptance issues
Safety is an obvious concern for robotic surgery,
and regulatory agencies require that it should be
addressed for every clinical implementation. As with
most complex computer-controlled systems, there is no
accepted technique that can guarantee safety for all
systems in every circumstance (120,121). Some robotics
developers have asserted that it is important to keep
control of the procedure in the hands of the surgeon,
even in image-guided surgery. A system developed by
Ho et al. for knee surgery prevents motion outside of the planned workspace (122). In contrast, the Robodoc allows autonomous control of the cutting instrument, while the surgeon monitors progress. This freedom of the robot has raised concerns, especially in Europe, over accepting the autonomous mode. Thus, it is important to include user interfaces through which the surgeon can supervise the system's plan of action and status in real time during the operation.
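As an illustration only (not the actual controller of Ho et al. or any commercial system), a workspace constraint of this kind can be sketched as a simple clamp that projects any commanded tool position back into a planned safe region before the robot is allowed to move. The spherical region and the function name here are assumptions made for the example.

```python
import numpy as np

def clamp_to_workspace(target, center, radius):
    """Project a commanded tool position onto a spherical safe
    workspace; commands already inside the sphere pass unchanged."""
    target = np.asarray(target, dtype=float)
    center = np.asarray(center, dtype=float)
    offset = target - center
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return target                          # command is already safe
    return center + offset * (radius / dist)   # clamp to the boundary

# A command inside the planned region passes through unchanged...
inside = clamp_to_workspace([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 5.0)
# ...while one outside is projected back onto the boundary.
outside = clamp_to_workspace([10.0, 0.0, 0.0], [0.0, 0.0, 0.0], 5.0)
```

In a real system this check would run inside the low-level control loop, so that no surgeon or planner command could drive the instrument beyond the preoperatively defined region.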
Robots will be successful in surgery only if they prove
to be beneficial in terms of patient outcomes and total
costs. Unfortunately, in many cases outcome cannot be
assessed until many years after the procedure (e.g. robotic
vs manual hip replacement). Early acceptance of the
technology increases the number of cases, and clinicians
often improve the procedure, which results in better
outcomes and lower costs. Ability to use the robot for
multiple procedures is an important feature not found in
certain robotic systems (e.g. knee replacement systems
are unable to perform hip replacements). In contrast,
telesurgical systems target a variety of conditions and even specialties, which is probably the reason for their wider acceptance. People react differently when a
failure comes from a robot than when it comes from a
human. The question of responsibility in case of morbidity
or mortality still remains, especially when dealing with autonomous systems. Concerns about the legal framework covering autonomous robotic systems may also create difficulties with insurance coverage. Technologies in all
of these areas should be developed in a way that gives
consideration to their potential benefits and shortfalls
(123).
Emerging trends
Research in surgical robots has already produced new
designs, breaking the telemanipulation paradigm. For
example, mobile mini-robots for in vivo operation have already been described in the literature (124,125), as well as hyper-redundant (snake) robots (126,127), continuum
Copyright 2011 John Wiley & Sons, Ltd. Int J Med Robotics Comput Assist Surg (2011).DOI: 10.1002/rcs
robots (128), NOTES mini-robots (129), fixed-base robots
(130) and crawler robots (131). However, a true potential
in revolutionizing medicine lies with micro/nanorobotics.
Micro/nanorobots present a paradigm shift in current
robotic technology and could bring about a breakthrough
in many fields, such as medicine, drug delivery,
fabrication, telemetry, etc. However, they also present major challenges regarding fabrication, power supply, actuation and localization techniques.
In MIS, several areas of application have been proposed.
For example, a first application could be the circulatory
system, where the nanobots could enter the blood flow
and reach target sites in order to perform actions such as targeted drug delivery, removal of plaques from vessels, destruction of blood clots, or acting as a stent to maintain blood flow. Pioneering work towards
this goal has been conducted by Martel et al., who have
managed to navigate a small magnetic bead through
the carotid artery of a living swine through magnetic propulsion utilizing MRI technology (132,133). Another
application area is the central nervous system, where the
nanorobot could navigate through available space in order
to reach neural sites. Such a space could be the spinal
canal, the subarachnoid space or the brain ventricles.
The nanorobots could provide services such as targeted drug delivery to cancer cells in brain tumours, acting as markers for active neuronavigation in brain surgery in cooperation with stereotaxy, or performing neurostimulation on selected neural sites. The urinary system is a third
possible application area, where the nanorobots could
enter the urinary tract and reach the prostate and kidneys in order to dissolve kidney stones or deposit radioactive seeds on cancerous cells in the prostate.
Several other targets have also been proposed, such as
the eye, the ear, the fetus, etc. [for an up-to-date review,
see (134)].
As micro- and nanotechnologies evolve, a variety of sensors and actuators operating in the submillimetre range has emerged. As a result, various research groups have recently started developing microrobotic systems for a wide range of applications: precision tooling, endoscopic surgery, biological cell manipulation (135), atomic force microscopy, etc. However, most of these devices are not truly autonomous, in terms of either energy supply or intelligence. Yet autonomy is a major issue for many innovative micro-robot applications in which tele-operation is not possible or not desirable (136). On
the way towards these fascinating innovations, one must
always identify the key parameters that limit downscaling
(137,138). There has also been an increased interest
in the use of microelectro-mechanical systems (MEMS)
for surgical applications. MEMS technology not only
improves the functionality of existing surgical devices
but also adds new capabilities, allowing surgeons to develop new techniques and perform entirely new procedures (139). MEMS may provide the surgeon with real-time feedback on the operation, thus improving the outcome (140).
Conclusion
As depicted by the progress reviewed here, robotic
technology is going to change the face of surgery in the
near future. Robots are expected to become the standard
modality for many common procedures, including hip
replacement, heart bypass, cochlear implantation and
abdominal surgery. As a result, surgeons will have to become familiar with the technology, and the technology should come closer to the everyday needs of a surgical
team. Autonomous and semi-autonomous modes are
increasingly being investigated and implemented in
surgical procedures, automating various phases of the
operation. The complexity of these tasks is also shifting from the low-level automation of early medical robots to high-level autonomous features, such as complex
laparoscopic surgical manoeuvres and shared-control
approaches in stabilized image-guided beating-heart
surgery. Future progress will require continuous interdisciplinary work, with breakthroughs such as nanorobots entering the spotlight. Autonomous robotic surgery is a fascinating field of research involving progress in artificial intelligence technology. However, it should always be approached with caution and should never exclude human supervision and intervention.
References
1. Gharagozloo F, Najam F. Robotic Surgery: Theory and Operative Technique, 1st edn. McGraw-Hill Medical: New York, 2008.
2. Taylor RH, Stoianovici D. Medical robotics in computer-integrated surgery. IEEE Trans Robotics Autom 2003; 19(5): 765–781.
3. Wolf A, Shoham M. Medical automation and robotics. In Springer Handbook of Automation. Springer: Berlin, 2009; 1397–1407.
4. Jakopec M, Rodriguez y Baena F, Harris SJ, et al. The hands-on orthopaedic robot Acrobot: early clinical trials of total knee replacement surgery. IEEE Trans Robotics Autom 2003; 19(5): 902–911.
5. Kwon D-S, Lee J-J, Yoon Y-S, et al. The mechanism and registration method of a surgical robot for hip arthroplasty. In Proceedings of IEEE International Conference on Robotics and Automation 2002 (ICRA '02), vol. 2, Washington, DC, 2002; 1889–1894.
6. Lonner JH, John TK, Conditt MA. Robotic arm-assisted UKA improves tibial component alignment: a pilot study. Clin Orthop Relat Res 2010; 468(1): 141–146.
7. Taylor RH, Mittelstadt BD, Paul HA, et al. An image-directed robotic system for precise orthopaedic surgery. IEEE Trans Robotics Autom 1994; 10(3): 261–275.
8. Hagio K, Sugano N, Takashina M, et al. Effectiveness of the ROBODOC system in preventing intraoperative pulmonary embolism. Acta Orthop Scand 2003; 74(3): 264–269.
9. Bauer A, Borner M, Lahmer A. Robodoc animal experiment and clinical evaluation. In CVRMed-MRCAS '97. Springer: Berlin, 1997; 561–564.
10. Baena FR, Davies B. Robotic surgery: from autonomous systems to intelligent tools. Robotica 2010; 28(2) (special issue): 163–170.
11. Guthart GS, Salisbury JK. The Intuitive telesurgery system: overview and application. In IEEE International Conference on Robotics and Automation (ICRA '00), vol. 1, 2000; 618–621.
12. Dasgupta P, Jones A, Gill IS. Robotic urological surgery: a perspective. BJU Int 2005; 95(1): 20–23.
13. Hagn U, Konietschke R, Tobergte A, et al. DLR MiroSurge: a versatile system for research in endoscopic telesurgery. Int J Comput Assist Radiol Surg 2010; 5(2): 183–193.
14. Niemeyer G, Preusche C, Hirzinger G. Telerobotics. In Springer Handbook of Robotics. Springer: Berlin, 2008; 741–757.
15. O'Malley MK, Gupta A. Passive and active assistance for human performance of a simulated underactuated dynamic task. In 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2003); 348–355.
16. Tavakoli M, Patel RV, Moallem M. Haptics for Teleoperated Surgical Robotic Systems. World Scientific: Singapore, 2008.
17. Dario P, Hannaford B, Menciassi A. Smart surgical tools and augmenting devices. IEEE Trans Robotics Autom 2003; 19(5): 782–792.
18. Cleary KR, Stoianovici DS, Glossop ND, et al. CT-directed robotic biopsy testbed: motivation and concept. In Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, Mun SK (ed.). SPIE: San Diego, CA, 2001; 231–236.
19. Huang H-M. The autonomy levels for unmanned systems (ALFUS) framework: interim results. In Performance Metrics for Intelligent Systems (PerMIS) Workshop, Gaithersburg, MD, 2006.
20. Arkin RC. Behavior-based Robotics. MIT Press: Cambridge, MA, 1998.
21. Rosen J, Brown JD, Chang L, et al. Generalized approach for modeling minimally invasive surgery as a stochastic process using a discrete Markov model. IEEE Trans Biomed Eng 2006; 53(3): 399–413.
22. Kragic D, Marayong P, Li M, et al. Human-machine collaborative systems for microsurgical applications. Int J Robotics Res 2005; 24(9): 731–741.
23. Kang H, Wen JT. Robotic knot tying in minimally invasive surgeries. In IEEE/RSJ International Conference on Intelligent Robots and Systems 2002, vol. 2; 1421–1426.
24. Lin HC, Shafran I, Murphy TE, et al. Automatic detection and segmentation of robot-assisted surgical motions. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2005. Springer: Berlin, 2005; 802–810.
25. Reiley CE, Lin HC, Yuh DD, et al. Review of methods for objective surgical skill evaluation. Surg Endosc 2011; 25(2): 356–366.
26. Reiley CE, Hager GD. Task versus subtask surgical skill evaluation of robotic minimally invasive surgery. In Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Part I. Springer: Berlin, 2009; 435–442.
27. Reiley CE, Lin HC, Varadarajan B, et al. Automatic recognition of surgical motions using statistical modeling for capturing variability. Stud Health Technol Inform 2008; 132: 396–401.
28. Reiley CE, Plaku E, Hager GD. Motion generation of robotic surgical tasks: learning from expert demonstrations. In Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2010; 967–970.
29. Antsaklis PJ, Passino KM, Saridis GN. An Introduction to Intelligent and Autonomous Control. Kluwer Academic: Dordrecht, 1992.
30. Kennedy J, Eberhart RC, Shi Y. The particle swarm. In Swarm Intelligence. Morgan Kaufmann: San Francisco, CA, 2001; 287–325.
31. Haykin S. Neural Networks and Learning Machines, 3rd edn. Prentice Hall: Englewood Cliffs, 2008.
32. Goldberg DE. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional: Boston, 1989.
33. Passino KM, Yurkovich S. Fuzzy Control. Addison Wesley: Menlo Park, 1997.
34. Kang H, Wen JT. EndoBot: a robotic assistant in minimally invasive surgeries. In IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2001; 2031–2036.
35. Kang H, Wen JT. Autonomous suturing using minimally invasive surgical robots. In IEEE International Conference on Control Applications, 2000; 742–747.
36. Bauernschmitt R, Schirmbeck EU, Knoll A, et al. Towards robotic heart surgery: introduction of autonomous procedures into an experimental surgical telemanipulator system. Int J Med Robot 2005; 1(3): 74–79.
37. Mayer H, Nagy I, Knoll A, et al. The Endo[PA]R system for minimally invasive robotic surgery. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 4, 2004; 3637–3642.
38. Mayer H, Gomez F, Wierstra D, et al. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In IEEE/RSJ International Conference on Intelligent Robots and Systems 2006; 543–548.
39. Schmidhuber J, Wierstra D, Gomez F. Evolino: hybrid neuroevolution/optimal linear search for sequence learning. In 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK. Morgan Kaufmann: San Francisco, CA, 2005; 853–858.
40. Mayer H, Nagy I, Burschka D, et al. Automation of manual tasks for minimally invasive surgery. In Fourth International Conference on Autonomic and Autonomous Systems (ICAS '08), Gosier, Guadeloupe, 2008; 260–265.
41. van den Berg J, Miller S, Duckworth D, et al. Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations. In IEEE International Conference on Robotics and Automation, Anchorage, AK, 2010; 2074–2081.
42. Staub C, Osa T, Knoll A, et al. Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery. In IEEE International Conference on Robotics and Automation (ICRA), 2010; 4585–4590.
43. Hynes P, Dodds GI, Wilkinson AJ. Uncalibrated visual servoing of a dual-arm robot for MIS suturing. In First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2006; 420–425.
44. Zong G, Hu Y, Li D, et al. Visually servoed suturing for robotic microsurgical keratoplasty. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006; 2358–2363.
45. Brett PN, Taylor RP, Proops D, et al. A surgical robot for cochleostomy. In 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), 2007; 1229–1232.
46. Taylor R, Du X, Proops D, et al. A sensory-guided surgical micro-drill. Proc Inst Mech Eng C J Mech Eng Sci 2010; 224(7): 1531–1537.
47. Coulson CJ, Reid AP, Proops DW. A cochlear implantation robot in surgical practice. In 15th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), 2008; 173–176.
48. Coulson CJ, Taylor RP, Reid AP, et al. An autonomous surgical robot for drilling a cochleostomy: preliminary porcine trial. Clin Otolaryngol 2008; 33(4): 343–347.
49. Majdani O, Rau TS, Baron S, et al. A robot-guided minimally invasive approach for cochlear implant surgery: preliminary results of a temporal bone study. Int J Comput Assist Radiol Surg 2009; 4(5): 475–486.
50. Eilers H, Baron S, Ortmaier T, et al. Navigated, robot assisted drilling of a minimally invasive cochlear access. In IEEE International Conference on Mechatronics (ICM), 2009; 1–6.
51. Hussong A, Rau TS, Ortmaier T, et al. An automated insertion tool for cochlear implants: another step towards atraumatic cochlear implant surgery. Int J Comput Assist Radiol Surg 2009; 5(2): 163–171.
52. Megali G, Tonet O, Stefanini C, et al. A computer-assisted robotic ultrasound-guided biopsy system for video-assisted surgery. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2001, Niessen W, Viergever M (eds). Springer: Berlin, 2001; 343–350.
53. Phee L, Di Xiao, Yuen J, et al. Ultrasound-guided robotic system for transperineal biopsy of the prostate. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), 2005; 1315–1320.
54. Rogers AJ, Light ED, von Allmen D, et al. Real-time 3D ultrasound guidance of autonomous surgical robot for shrapnel detection and breast biopsy. In Proceedings of SPIE, Lake Buena Vista, FL, 2009; 72650O.
55. Liang K, Lig