
HUMAN PROSTHETIC INTERACTION: INTEGRATION OF SEVERAL TECHNIQUES

Dandara T. G. de Andrade∗, Antonio Ribas Neto∗, Guilherme M. Pereira∗, Eric Rohmer∗

∗Cidade Universitária Zeferino Vaz - Av. Albert Einstein, 400
State University of Campinas - School of Electrical and Computer Engineering

Campinas, São Paulo, Brazil

Emails: {dandrade, tomribas, gpereira, eric}@dca.fee.unicamp.br

Abstract— Current open-source upper limb robotic prosthesis projects have limitations regarding the functionality they offer to users. Moreover, prostheses that succeed in this requirement are extremely expensive and involve a challenging interaction, which requires hard practice from the amputee to be able to control the device. To decrease the training time that upper limb amputees usually spend learning to control prosthetic hands, and to simplify the interaction, we developed friendly human-machine interfaces based on hybrid Electromyography (EMG) with an Inertial Measurement Unit (IMU) and Radio Frequency Identification (RFID). The results suggest that the use of these interfaces in prostheses is beneficial, since they do not demand much effort from the user, meaning users will be able to control the prosthetic hands easily and without an extended period of training.

Keywords— Robotics, Prosthetic Hands, Hybrid Human-Machine Interfaces, Electromyography


1 Introduction

According to (Association, 2017), assistive technology is the term used for any item whose objective is to increase or maintain the functional capabilities of people with disabilities. Typical examples of assistive devices are orthoses, crutches, wheelchairs, and prostheses (Organization et al., 2011). In developing countries, 80% of people do not have access to prosthetic care (Slade et al., 2015); thus they have difficulty performing most activities of daily living. Besides, amputation has an unfortunate effect on social integration and acceptance.

There are solutions available for amputees to try to overcome all those problems. The first addresses social acceptance and integration through the use of a purely cosmetic prosthesis, which looks like a regular arm but cannot perform any grasp. The second solution is to use a myoelectric prosthesis that might not be aesthetic, but that performs a large set of grasps, from the most basic ones, such as opening and closing all fingers together, to more complex hand gestures. Although this solution, also called high-end prosthetic hands (Bebionic, 201-; Dynamics, 201-; Bionics, 201-), offers several possible interactions, the high cost and the increased difficulty of interacting with the device are disadvantages that cannot be overlooked. A third solution emerged from the maker community, thanks to new low-price 3D printing technology. Open-source prosthesis projects such as (Robohand, 201-) or (Langevin, 201-) emerged to address the problems of functionality, aesthetics, and high cost. Some projects, like our previous work (Fajardo et al., 2015), aim at offering the same level of functionality as the high-end prostheses. At the same time, we aim at reducing the complexity of interaction that constrains the user of an electromyography-actuated high-end prosthesis to train hard on a daily basis to fully control the motion of the fingers.

Even though, as indicated in (Camacho Navarro et al., 2012), handicapped people need to make less effort and learn faster how to control electromyography-based systems, the training process might take several months depending on the level of amputation. A higher level of amputation means that the functional loss of the limb is greater and that the training process is harder, since the muscles are interwoven in a much smaller area after the surgery. This makes it more difficult for the user to control the prosthesis through the activation of these muscles (da Silva, 2014). Because of this, it is not trivial to control a prosthetic hand at first. Additionally, the prosthesis must be comfortable to wear and interact with; otherwise, users simply stop wearing it within their first year of use (Foundation, 2017). Other challenges faced when controlling a prosthetic hand are the electrode shift that happens each time the user dons the prosthesis, variation in the position of the limb, and transient changes in the EMG signal (Scheme and Englehart, 2011).

(Bicchi, 2000) points out that human operability is necessary for a mechanical hand; that is, systems like a prosthetic hand must have a friendly and easy interface with the human operator. Taking this into account, we developed innovative and friendly hybrid human-machine interfaces as a first step toward building a competitive prosthetic hand: one with functionalities similar to those of high-end devices, but at a price affordable for people in emerging countries.

This paper is organized as follows: Section 1 presented a brief review of the importance of this study. Section 2 presents the developed hybrid interfaces for prosthetic hands. Section 3 describes the analysis of the modules. Section 4 concludes this work. Finally, Section 5 describes future work.

2 System Overview

As shown in Figure 1, the robotic platform is divided into independent modules. Each module is a different hybrid human-machine interface solution that combines Electromyography (EMG) with another technique, an Inertial Measurement Unit (IMU) or Radio Frequency Identification (RFID), to activate a particular predefined grasp sequence of the prosthetic hand simulated in V-REP (Rohmer et al., 2013). The central controller is a Raspberry Pi 3 running the Jessie Lite operating system. It controls how the communication between the robot simulator and the modules happens (TCP/IP for the Motion Module and the Remote API for the RFID Module), and manages all the processes involving the acquisition of information from the sensors and the logic of the system. The features of each module are explained in sections 2.2 and 2.3.

Figure 1: Overview of the system. The Myo armband in the Motion Module sends information (EMG and IMU) to the Controller through Bluetooth. In the RFID Module, the information from the RFID device is transmitted to the Controller via serial by an Arduino Uno.
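The paper does not specify the wire format used on the Motion Module's TCP/IP link to the simulator; the following is a minimal sketch of what the controller-side send could look like, assuming a hypothetical newline-terminated text protocol and placeholder host/port values:

```python
import socket

SIM_HOST = "127.0.0.1"  # hypothetical address of the machine running V-REP
SIM_PORT = 19999        # hypothetical port of the simulator-side script

def send_grasp(grasp_name: str) -> None:
    # Send one grasp command as newline-terminated text. The actual
    # message layout is not given in the paper; this is an assumption.
    with socket.create_connection((SIM_HOST, SIM_PORT)) as sock:
        sock.sendall((grasp_name + "\n").encode("utf-8"))

send_grasp("tripod_grip")  # e.g., after a valid contraction-direction pair
```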

2.1 Simulation

Figure 2: Mouse grip: initial motion.

Figure 3: Mouse grip: triggering signal.

The simulation (Figures 2 and 3) contains 14 predefined grasps kinematically equivalent to those of a real prosthetic hand with high-end capabilities, allowing future users to train without having an actual prosthesis. There are dynamic and static grips available in the simulation. The dynamic grasps (active index grip and mouse grip) are the ones having triggers in their behavior, which means that besides their initial motion to reach a hand posture, they have a second subset of motion to accomplish tasks. For example, the mouse grip has its original sequence to hold a mouse (Figure 2), and after the user sends a second triggering signal, a sub-action occurs to click the mouse button (Figure 3). The static predefined grasping sequences available in the simulation are precision open grip, precision close grip, key grip, column grip, abduction grip, finger point, hook grip, open palm grip, pinch grip, power grip, relaxed hand, and tripod grip. (Kapandji, 2000) highlights these as important grasps for everyday activities.
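For illustration only, the grasp set above could be cataloged as follows; the grasp names come from the list in the text, while the data structure and its dynamic flag are our assumptions:

```python
# Static grips have a single predefined motion; the two dynamic grips
# also accept a later "trigger" command for a second sub-motion.
GRASPS = {
    "active_index_grip":    {"dynamic": True},   # trigger: press with index
    "mouse_grip":           {"dynamic": True},   # trigger: click the button
    "precision_open_grip":  {"dynamic": False},
    "precision_close_grip": {"dynamic": False},
    "key_grip":             {"dynamic": False},
    "column_grip":          {"dynamic": False},
    "abduction_grip":       {"dynamic": False},
    "finger_point":         {"dynamic": False},
    "hook_grip":            {"dynamic": False},
    "open_palm_grip":       {"dynamic": False},
    "pinch_grip":           {"dynamic": False},
    "power_grip":           {"dynamic": False},
    "relaxed_hand":         {"dynamic": False},
    "tripod_grip":          {"dynamic": False},
}

def accepts_trigger(grasp: str) -> bool:
    return GRASPS[grasp]["dynamic"]
```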

2.2 Motion module

In this module, a hybrid interface solution based on EMG and IMU defines the motion of the fingers of the prosthetic hand. The module takes into consideration the direction of the arm and the contractions measured by the sensors. Figure 4 shows the architecture of this module. The Myo armband gathers the EMG sensors used to acquire the contractions of the muscles and the IMU device used to obtain the orientation of the arm. A Raspberry Pi 3 runs the signal processing, contraction detection, analysis of direction, and controller processes, and handles the connection with the armband through Bluetooth.

Figure 4: Motion module architecture

Table 1: Examples of interaction with the Motion Module

Contraction (EMG) | Direction (IMU) | Grasp choice
1 1               | Up + Right      | Active index grip
1                 | Up              | Finger point
n > 1             | Wave-out        | Cancel
1                 | Down            | Hook grip
1 2               | Up + Down       | Tripod grip

Firstly, the EMG sensors acquire the raw electromyographic signals and send them to the Raspberry Pi through Bluetooth, so that the EMG signal arrives as a list of N numbers, each one associated with one of the EMG electrodes present in the armband. Then, in the Contraction detection box, this list of values is processed to sort out strong contractions of the muscles. This step is important because it separates the contractions users make unwittingly from the ones they intend as a command.
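The paper does not state the exact rule used in the Contraction detection box; a common choice for this step is a mean-absolute-value (MAV) threshold over a short window, sketched below under that assumption, with a hypothetical per-user calibrated threshold:

```python
import numpy as np

N_CHANNELS = 8    # the Myo armband exposes 8 EMG electrodes
THRESHOLD = 40.0  # hypothetical value, set during a per-user calibration

def is_strong_contraction(window: np.ndarray) -> bool:
    """window: array of shape (samples, N_CHANNELS) with raw EMG values.
    Illustrative MAV test; the paper does not give its detection rule."""
    mav = np.abs(window).mean(axis=0)   # mean absolute value per channel
    return bool(mav.max() > THRESHOLD)  # any channel above the threshold
```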

Parallel to this, a similar process happens to the IMU information. The IMU sensor in the armband sends the orientation of the arm to the Raspberry Pi as a quaternion. The direction of the arm is acquired whenever the user contracts a muscle. In the Analysis of direction box, the quaternion is converted into Euler angles, and the direction (right, up, or down) in which the user performed the contraction is determined. Finally, the Controller receives the contraction-direction pair, validates this command, and associates it with a grasp to send to the simulator through a TCP/IP socket. Table 1 shows some examples of commands the user can perform and the grips associated with them.
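A minimal sketch of this Analysis of direction step, using the standard quaternion-to-Euler conversion; the 30-degree angular bins are our assumption, since the paper does not give the thresholds:

```python
import math

def quaternion_to_pitch_roll(w, x, y, z):
    # Standard quaternion-to-Euler conversion (pitch and roll only).
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    return pitch, roll

def classify_direction(w, x, y, z, limit=math.radians(30)):
    """Map the arm orientation at contraction time to up/down/right.
    The bin size is hypothetical; only the three labels come from the text."""
    pitch, roll = quaternion_to_pitch_roll(w, x, y, z)
    if pitch > limit:
        return "up"
    if pitch < -limit:
        return "down"
    if abs(roll) > limit:
        return "right"
    return None  # no clear direction: ignore this contraction
```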

2.3 RFID module

In this module, the motion of the fingers of the prosthetic hand is defined by the RFID tag readings the system performs and by the contractions the user makes to confirm or cancel commands. Thus, as shown in Figure 1, the system needs an RFID reader, which must be placed in the prosthesis to perform the readings required for the system to work.

This module works as follows: the prosthesis is initially in the relaxed hand position (Idle state). When the user approaches an object that has an RFID tag on it, the tag is read, and the system goes to state S1, where it waits for one of three commands: read another tag, confirm, or cancel. The wave-in contraction (Figure 5) tells the system to go to state S2, where the grasp is performed, and the state machine waits for other commands. Otherwise, the user can perform a wave-out contraction (Figure 6), which tells the system that the user wants to cancel the current grasp and go back to the Idle state. The third command is to read another tag: the user might approach another object, the RFID tag on this object is read, and the system remains in state S1, waiting for other commands. The finite state machine in Figure 7 is implemented in the central controller to manage this behavior of the module.

Figure 5: Wave-in contraction (confirmation command)

Figure 6: Wave-out contraction (cancellation command)

Figure 7: RFID Module finite state machine (states: Idle, S1, S2, S3; events: read tag, confirm, cancel, trigger; ε denotes the automatic return from S3 to S2)

Once the state machine is in state S2, the user can perform the cancel command, approach an object to read the tag on it, or carry out the trigger command. The trigger is only available for the dynamic grasps of the set. The trigger command is also a wave-in contraction; the difference is that when entering state S3, the trigger is sent to the prosthesis, the grasp is performed in V-REP, and the state automatically changes back to S2, so the user can either continue sending triggers to the prosthesis or do something else.
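A compact sketch of the state machine of Figure 7 as a transition table; the state and event names follow the text, and the automatic ε transition from S3 back to S2 is modeled explicitly:

```python
# Transition table for the RFID module state machine of Figure 7.
# Wave-in doubles as "confirm" in S1 and as "trigger" in S2;
# wave-out is always "cancel".
TRANSITIONS = {
    ("idle", "read_tag"): "S1",
    ("S1", "read_tag"): "S1",   # reading another tag replaces the choice
    ("S1", "confirm"): "S2",    # wave-in: perform the grasp
    ("S1", "cancel"): "idle",   # wave-out: drop the pending grasp
    ("S2", "read_tag"): "S1",
    ("S2", "cancel"): "idle",
    ("S2", "trigger"): "S3",    # wave-in again: fire the sub-motion
}

def step(state: str, event: str) -> str:
    # Unknown (state, event) pairs are ignored; entering S3 sends the
    # trigger to the simulated hand, then falls back to S2 (epsilon).
    state = TRANSITIONS.get((state, event), state)
    return "S2" if state == "S3" else state
```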

Figure 8: RFID module architecture

Table 2: Volunteers' information

Module tested | ID | Gender | Amputee | Age
RFID          | 1  | M      | N       | 34
RFID          | 2  | M      | N       | 27
RFID          | 3  | F      | N       | 26
RFID          | 4  | M      | N       | 38
RFID          | 5  | F      | N       | 34
Motion        | 6  | M      | N       | 34

The architecture of this module is similar to that of the Motion module. Besides classifying a contraction as active, the system also distinguishes which type of contraction the user performed: wave-in for confirmation and trigger, or wave-out for canceling a command. Another part, shown in Figure 8, that differs from the Motion module is the RFID stage. The first step of the process is to read the RFID tag, which is done with a microcontroller, since an Android device might replace the Raspberry Pi 3 in a further step. An LED is turned on as feedback when the microcontroller reads the tag, so the user knows what is happening with the system. The tags on the objects are associated with a type of grasp in what we call a dictionary. After that, this information is validated, since it is possible for the system to read a tag that does not represent a type of grasp. Subsequently, the controller receives the grip and sends it to V-REP.
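The tag-to-grasp dictionary and the validation step just described could be sketched as follows; the tag UIDs are hypothetical placeholders for whatever the reader reports over the serial link:

```python
# Hypothetical tag UIDs; in the real system these arrive from the
# Arduino-connected RFID reader over the serial link.
TAG_TO_GRASP = {
    "04:A3:1F:22": "mouse_grip",
    "04:B7:90:5C": "power_grip",
    "04:11:8D:E0": "tripod_grip",
}

def lookup_grasp(tag_uid: str):
    """Validate a tag read: only tags present in the dictionary map to a
    grasp; anything else is ignored, as described in the text."""
    return TAG_TO_GRASP.get(tag_uid)  # None means "not a grasp tag"
```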

2.4 Module testing description

Both modules were tested in our laboratory. All participants (information regarding the participants is shown in Table 2) provided written consent to take part in the experiment, which was approved by the Ethical Committee of the University of Campinas (Brazil) under CAAE number 58592916.9.1001.5404.

The RFID module was assessed using the NASA Task Load Index. This methodology gathers workload ratings to evaluate system effectiveness and performance. The method takes six scales into consideration: Mental, Physical, and Temporal Demand, Effort, Performance, and Frustration. During the first part of the test, the subjects are asked to evaluate each of these scales, rating them from 0 to 100. In the second part, they create individual weightings of these measures according to the importance of each scale for the tasks they performed. More information about this method is available at (NASA-TLX, 2011).
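In standard NASA-TLX scoring, each raw rating (0-100) is multiplied by the weight (0-5) that its scale earned in the 15 pairwise "source of workload" comparisons, and the overall workload is the sum of these adjusted ratings divided by 15. A short sketch with made-up example numbers (not the study's data):

```python
def tlx_scores(ratings: dict, weights: dict):
    # Six weights from the 15 pairwise comparison cards must sum to 15.
    assert sum(weights.values()) == 15
    adjusted = {s: ratings[s] * weights[s] for s in ratings}
    overall = sum(adjusted.values()) / 15.0  # weighted workload, 0-100
    return adjusted, overall

# Illustrative values only:
ratings = {"mental": 4, "physical": 15, "temporal": 1,
           "performance": 10, "effort": 12, "frustration": 3}
weights = {"mental": 1, "physical": 4, "temporal": 1,
           "performance": 3, "effort": 5, "frustration": 1}
print(tlx_scores(ratings, weights))
```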

Due to the lack of availability of people with transradial amputation, the tests were performed on five non-amputated people to evaluate whether or not the developed interfaces are valid and to obtain early feedback on the improvements the system needs. During the test, the subjects were asked to carry out a task. The task consisted of using the system for 10 minutes and sending commands to check whether the system was working as planned. For instance, if the user sends the command to grasp a mouse, the simulator is supposed to show a prosthetic hand gripping a mouse. If the user sends a command for a power grip and the simulation shows any grasp other than a power grip, the system is considered faulty.

Table 3: Test results with the Motion Module: accuracy of attempts

Grasp             | Correct attempts (%)
Active index grip | 56.65
Finger point      | 85
Tripod grip       | 50
Cancel command    | 94.5

The tests made with the Motion Module were different from the ones previously described, because the Motion Module works differently from the RFID module. As stated in section 2.2, for the user to have full control of this module, he/she needs to learn the motion language developed, whose grammar takes into consideration the movements the user must perform to select an action. Therefore, we asked a participant to use the interface and try to send commands to the virtual prosthetic hand. These commands were the active index grip, finger point, tripod grip, and cancellation. The functionality of the system was explained, and he was invited to perform the commands previously described to become familiar with the system. Once the familiarization process finished, the user was asked to try out the four types of grasp, so we could record how difficult it was to use the system and the percentage of correct commands he could send in less than one hour of training distributed over two days.

3 Results

3.1 Analysis of Motion module

Table 3 shows the hand movements the user sent to the simulator (Grasp) and the accuracy when sending each command (Correct attempts). The sequences of commands he sent to the system to achieve the grips are described in Table 1. In less than an hour of training, the user was able to perform the cancellation command with 94.5% accuracy, while finger point had an 85% success rate. We therefore consider the wave-out contraction the easiest control of the set. The command with the next highest rate of correct attempts was the active index grip with 56.65% success, followed by the tripod grip with 50%.

The reason why the participant could not accomplish all the grasps with 100% accuracy is that he needs, as stated before, to learn the sequence of commands and the timing of the system. For example, a contraction for the confirmation command must be performed at least 2 seconds after the state machine has reached a correct contraction-direction combination. Therefore, if the user contracts his muscle to send a confirmation command less than 2 seconds after his last contraction, the system understands that the user is still trying to reach a valid combination. The system timing was perceived to be the most difficult concept for the user to understand.
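A minimal sketch of this 2-second timing rule, assuming the gate is measured from the user's previous contraction as described above:

```python
import time

CONFIRM_DELAY_S = 2.0  # confirmation counts only 2 s after the last contraction

class ConfirmationGate:
    """A contraction arriving sooner than 2 s after the previous one is
    treated as part of the ongoing selection, not as a confirmation."""

    def __init__(self):
        self.last_contraction = 0.0

    def on_contraction(self) -> bool:
        now = time.monotonic()
        is_confirmation = (now - self.last_contraction) >= CONFIRM_DELAY_S
        self.last_contraction = now
        return is_confirmation
```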

3.2 Analysis of RFID module

Figure 9 shows the results for the NASA Task Load Index. According to these results, all the subjects agreed that Mental Demand is not the primary source of workload for tasks involving the RFID module. This means the participants did not have to remember any patterns to perform the work, and the tasks were straightforward and easy enough to be carried out, since they all rated this scale with less than 5 points on a scale ranging from 0 to 100.

The tests also show that the subjects felt little or no time pressure during the task: the largest workload score for this scale was 1. Most participants did not consider Frustration significant for the task, and this scale also had low scores, as 3 out of 5 ratings were 0 and the other two were below 5. Regarding Performance, all subjects were satisfied with how successfully they accomplished the goals of the task, although the chart seems to suggest they failed. This is explained by the fact that Performance was the third greatest contributor to workload, so its raw rating is multiplied by the weight each subject chose using the source-of-workload comparison cards. Figure 10 shows the raw rating for the Performance scale and, therefore, how satisfied the subjects were with their performance.

Unlike Mental Demand, where all scores were below 5, Physical Demand required more from the subjects, since they had to contract their muscles to confirm or cancel a command. Nevertheless, the scores are still below or equal to 15, which reveals that even the subjects who had more difficulty during the confirmation/cancellation process did not use a lot of energy or strength to finish the task. The tests showed that Physical Demand and Effort are the scales that represent the most significant contributors to the workload of the task the subjects performed; no matter how the other scales are scored, the most important ones are these two. Considering that Figure 9 shows ratings below or equal to 15 for these two measures, we believe this interface is valid for implementation in a low-cost prosthetic hand, since it proved to be friendly and easy to use.

Figure 9: RFID Module: NASA Task Load Index results (adjusted rating per scale for each of the five subjects).

Figure 10: RFID Module results: Performance raw rating (0-100) for each participant of the test.

4 Conclusion

Although high-end prosthetic hands offer advantages for amputees, such as a large set of grasps and smooth movements, their cost makes them inaccessible to most of the people who could benefit from them. Besides, these prostheses require intense training from the user to learn how to control them, and this training is not guaranteed to be successful, leading users to become frustrated. Hybrid EMG human-machine interfaces help users decrease this preparation time, since they do not need to activate only a particular group of muscles, as happens in most high-end prosthetic hands. With less effort to send a command, users learn faster how to interact with the system, which leads to a better experience.

In this paper, two solutions for the problem of challenging interaction with prosthetic hands were addressed. The first approach combines EMG and motion (using an IMU) to build the interface between the user and the machine. Using combinations of contractions and directions, users can send commands to the prosthesis and manipulate objects in the simulation environment. In the second approach, EMG is combined with RFID to control the prosthetic hand, and the user only needs to get close to the object and confirm whether that is the object she or he wants to grasp.

The RFID solution proved to be relatively easy to use, and the calibration phase proved crucial to a good experience while using the system. Visual or audio feedback also plays a significant role in the user experience. When first tested, we could never be sure a command had been sent or, in the RFID module, whether the tag on the object had been read. After adding feedback information, the users felt more confident about the system.

The Motion Module showed itself to be harder to use than the RFID solution, but this was expected, since it requires more effort from the user regarding the control strategy of the prosthesis, as a language has to be learned. In this module, feedback for the user and calibration also turned out to be essential to the user experience.

It becomes apparent that the two methods developed cannot be compared in terms of complexity of use, since their fundamental principles are different, although they can complement each other. While the RFID Module is more straightforward to use than the Motion Module, it requires tagging all possible objects the future users of the prostheses will need to grasp, or placing all the possible tags on the users (e.g., keeping the tags in a pocket). Although in a controlled environment such as a laboratory or their house this is not a problem, users would have difficulty using this interface in public places. The Motion Module does not have this issue, since it depends only on the ability of the user to remember the commands.

In spite of the fact that the modules described in this paper could not be tested with amputees, the solutions are valid for implementation in prosthetic hands for people with a transradial level of amputation. Three reasons support this claim. First, the Motion Module uses directions measured by the IMU device in the Myo armband, and a transradial level of amputation does not affect the way this device works. Second, for the RFID Module, reading tags with this technology has already been presented in the literature as a viable solution; our approach differs in the ways of interaction (using contractions to confirm or cancel a command). Lastly, both modules were designed to distinguish at most two different types of contraction, which can be acquired from amputees, since they can still flex and extend their muscles.

To summarize: because it is important to increase access to prosthetic care in developing countries and to improve human operability, a key success factor (Bicchi, 2000), we developed easy and friendly HMIs. Although the tests were performed with non-amputees, due to the lack of amputees available to test the systems, we maintain that these solutions are valid for use in prosthetic hands.

5 Future works

A larger group of people has to evaluate the Motion Module to prove that users can learn the language developed for it, and that it requires less training than high-end prosthetic hands to control the device.

Acknowledgments

The authors are thankful to the volunteers for their participation. This work is supported by grants from the Fundo de Apoio ao Ensino, à Pesquisa e à Extensão (FAEPEX) and the Brazilian Institute of Neuroscience and Neurotechnology (BRAINN).

References

Association, A. T. I. (2017). What is AT?, https://www.atia.org/at-resources/what-is-at/ (accessed 20/03/2017).

Bebionic (201-). The hand, http://bebionic.com/the_hand (accessed 20/03/2017).

Bicchi, A. (2000). Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity, IEEE Transactions on Robotics and Automation 16(6): 652–662.

Bionics, T. (201-). Our products: i-limb quantum, http://www.touchbionics.com/products (accessed 20/03/2017).

Camacho Navarro, J., Leon-Vargas, F. and Barrero Perez, J. (2012). EMG-based system for basic hand movement recognition, Dyna 79(171): 41–49.

da Silva, J. R. A. (2014). Avaliação e certificação de dispositivos protéticos e ortéticos para membro inferior, Technical report, University of Porto.

Dynamics, A. A. (201-). Michelangelo hand, http://armdynamics.com/pages/michelangelo (accessed 20/03/2017).

Scheme, E. and Englehart, K. (2011). Electromyogram pattern recognition for control of powered upper-limb prostheses: State of the art and challenges for clinical use, Journal of Rehabilitation Research and Development 48(6): 643.

Fajardo, J., Lemus, A. and Rohmer, E. (2015). Galileo bionic hand: sEMG activated approaches for a multifunction upper-limb prosthetic, 2015 IEEE Thirty Fifth Central American and Panama Convention (CONCAPAN XXXV), IEEE, pp. 1–6.

Foundation, A. R. (2017). Six prosthetic options, http://www.armswithinreach.org/amputee_information/prosthetic_options.html (accessed 20/03/2017).


Kapandji, I. (2000). Fisiologia articular: Esquemas comentados de mecânica humana, Vol. I: Membro superior.

Langevin, G. (201-). InMoov robot, http://inmoov.fr/ (accessed 20/03/2017).

NASA-TLX (2011). http://www.nasatlx.com/ (accessed 20/03/2017).

Organization, W. H. et al. (2011). World report on disability, World Health Organization.

Robohand (201-). Some things you should know about Robohand devices, http://www.robohand.net/about/ (accessed 20/03/2017).

Rohmer, E., Singh, S. P. and Freese, M. (2013). V-REP: A versatile and scalable robot simulation framework, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 1321–1326.

Slade, P., Akhtar, A., Nguyen, M. and Bretl, T. (2015). Tact: Design and performance of an open-source, affordable, myoelectric prosthetic hand, 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 6451–6456.
