


Copyright © 2015, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Chapter 6

DOI: 10.4018/978-1-4666-7373-1.ch006

Assistive Systems for the Workplace

Towards Context-Aware Assistance

ABSTRACT

Recent advances in motion recognition allow the development of Context-Aware Assistive Systems (CAAS) for industrial workplaces that go far beyond the state of the art: they can capture a user's movement in real-time and provide adequate feedback. Thus, CAAS can address important questions, like Which part is assembled next? Where do I fasten it? Did an error occur? Did I process the part in time? These new CAAS can also make use of projectors to display the feedback within the corresponding area on the workspace (in-situ). Furthermore, the real-time analysis of work processes allows the implementation of motivating elements (gamification) into the repetitive work routines that are common in manual production. In this chapter, the authors first describe the relevant backgrounds from industry, computer science, and psychology. They then briefly introduce a precedent implementation of CAAS and its inherent problems. The authors then provide a generic model of CAAS and finally present a revised and improved implementation.

Oliver Korn, University of Stuttgart, Germany
Markus Funk, University of Stuttgart, Germany
Albrecht Schmidt, University of Stuttgart, Germany

INTRODUCTION

Assistive technology has always applied new developments to better support and empower humans. In the form of route guidance systems, context-aware assistive systems (CAAS) have become ubiquitous in cars and smartphones. In work environments, however, context-aware assistance focusing on the worker remained unexplored for a long time. While the quality gates in modern production lines successfully remove failed products from the workflow, they usually operate at a spatial and temporal distance from the workplace and the worker. Thus workers have to rely on their skills and their expertise to make the right choices and the right movements. They lack the opportunity to learn from problems on the fly through real-time feedback.

Impaired workers often cannot cope with these high demands – or this low level of assistance. As a result, they are assigned comparatively simple tasks or they are removed from the production process completely. Thus both the impaired workers and the organizations providing sheltered work or supported work would profit from a feedback system that operates closer to the worker. In fact, these organizations are eager to establish systems empowering their employees to meet the rising customer demands and thus become more profitable (Kronberg, 2013).

A second area where CAAS can be used is "regular" companies facing aging employees. Due to demographic change, the percentage of employees aged 60 and above is growing rapidly. Especially in the more developed regions, the ratio is increasing at 1.0% per year before 2050 (United Nations, Department of Economic and Social Affairs, Population Division, 2013). CAAS in production environments can potentially improve learning, increase productivity and even enhance the motivation of elderly and impaired workers.

BACKGROUND

CAAS combine elements from different contexts and disciplines:

• Projection and motion recognition are clearly means to realize implicit interaction with computers and thus belong to computer science.

• Assembly tables belong to the domain of production where computerization follows different rules.

• The integration of motivating elements (gamification) combines psychology with computer science.

Each of these contexts is briefly introduced to illustrate in which aspects the CAAS approach differs from existing solutions and traditions. The target users (elderly and impaired persons) are also described in this sub-chapter.

Industrial Production

In spite of increasing automation, there are still many assembly workplaces in industry. Due to increased product variation resulting in smaller lot sizes (Kluge, 2011), their number is even growing in spite of technical advances like semi-autonomous robots.

A workplace for manual assembly usually is a table with attached tools which can be pulled into the workplace area when needed. The parts required for the assembly task are placed in small boxes at the back of the table (see Figure 1).

During the assembly, the worker needs to pick the right part or parts and use the right tool to complete a single working step. Often the box to pick from is highlighted ("pick-by-light") and the pick is controlled by light barriers. While the assembly processes are described in manuals or displayed on a monitor, apart from the picking control the worker's actions do not influence the feedback. An inexperienced or confused worker can easily produce a series of faulty products. To avoid this, impaired workers usually either work with reduced complexity (i.e. simple products, few work steps) or need a supervisor to handle the complexity of more demanding workflows (i.e. complex products with several steps).

While new forms of interaction and assistance are readily adopted in many domains, their transfer into the industrial domain, especially into production environments, has been slow. Today, Human Machine Interaction (HMI) still lags behind the possibilities explored in "regular" HCI: as a Fraunhofer study on HMI explains, of the variety of modern interaction techniques only touch screens have found their way to machine interfaces in production environments (Bierkandt, Preissner, Hermann & Hipp, 2011). This cautiousness becomes more plausible if the potential outcomes of errors in apps for mobile devices or regular business software are compared with the effects resulting from human errors or software bugs in production environments – here errors can immediately result in severe injuries to workers. As a result, most manufacturers are very conservative when changing HMI and prefer "safe and slow" over "new and intuitive".

Figure 1. Assembly table as currently used in industrial production

One of the few examples of context-aware assistance currently used in industry is “pick-by-light” – a solution where the next box a worker has to pick parts from is marked by a small indicator lamp attached below and the pick is controlled by a light barrier. A reason for the prevalence of this comparatively advanced and intuitive form of HMI might be that the light barriers are integrated as sensors, so this form of assistance could easily be realized using the programmable logic controllers (PLC) common and accepted in industry.
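The pick-by-light logic described above can be sketched in a few lines. This is a hypothetical illustration (the function names and the event model are our assumptions, not industrial PLC code): each light-barrier event is checked against the box expected for the current work step.

```python
def validate_pick(expected_box: int, triggered_box: int) -> str:
    """Compare a light-barrier event with the box expected for this step."""
    if triggered_box == expected_box:
        return "ok"          # correct part picked, advance to the next step
    return "wrong_box"       # worker picked from the wrong box

def run_sequence(plan, events):
    """Walk through a pick plan and classify each light-barrier event."""
    return [validate_pick(exp, got) for exp, got in zip(plan, events)]

# a three-step plan with one wrong pick in the middle
print(run_sequence(plan=[3, 1, 4], events=[3, 2, 4]))  # ['ok', 'wrong_box', 'ok']
```

A PLC would react to each event immediately rather than processing a whole sequence, but the comparison per work step is the same.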

Obviously, new forms of HCI are implemented more readily once they have become part of an accepted standard like ISO 9241 (ISO/TC 159/SC 4, 2006), which covers the "ergonomics of human-system interaction". Although this and related standards like ISO 14915 are updated regularly, they are not designed to describe very recent approaches: motion recognition and, accordingly, implicit interaction have not yet been covered, although this type of HCI is widely used today. With CAAS the assembly workplace is augmented by such elements.

Implicit Interaction

From a computer science perspective, an assistive system is primarily a computer-based system integrating data from users with special requirements. This makes it a representative of the vast and quickly growing field of human-computer interaction (HCI). With the success of webcams in the mass market in the early nineties, widespread sensors integrated real-time data from the real world without the need for human interaction. From this point onwards, HCI rapidly integrated new forms of input devices (GPS, accelerometers, motion sensors) and output devices (mobile phones, tablets, projectors). It is not surprising that the idea of "ubiquitous computing" emerged in this period.

Later the concepts of "embedded interaction" and "implicit use" were established: they imply an embedding of information into people's environments (Schmidt, Kranz, & Holleis, 2005). The authors describe the unobtrusive integration of context-specific information on displays in everyday contexts like wardrobes or umbrella stands. Thus the idea of using everyday motions in work environments to implicitly interact with devices, and the idea of projecting information directly into these work contexts by CAAS, are logical advancements of existing lines of HCI research.

Although the concept of implicit interaction was influential, it took several years until the small computers and sensors reached a broad audience. As often happens when computer-based technology crosses the border from specialized applications for industry and research to the mass market, the game industry was a driving force. Soon gaming technologies like Nintendo's Wii, released in 2006, were used and adapted by researchers and therapists for assistive systems.

Four years later, in 2010, another controller for a gaming console, Microsoft's X-Box 360, repeated this revolution in HCI with Microsoft's Project Natal and the launch of the Kinect. This time the breakthrough was the capability to interpret three-dimensional human body movements in real-time without the need for markers. While the Wii still required the Wii Remote, the Kinect made the human body the controller. It was the first solution allowing real-time interactions on consumer hardware while being able to handle a full range of human body shapes and sizes in motion (Shotton et al., 2011). Thus implicit interaction reached the mass market. The Kinect and other depth sensors were a technical requirement for the realization of CAAS.

Projection

One of the first systems that combined projection with interaction was the "DigitalDesk Calculator" (Wellner, 1991). In this prototype of tangible interaction, the camera "sees" where the user is pointing, and the system performs adequate actions like projecting a calculator or reading parts of documents placed on a physical desk. It can be seen as an early realization of what is now called "natural interaction" and was an inspiration to subsequent approaches, including the CAAS approach presented later.

Ten years later, the "Everywhere Displays Projector" (Pinhanez, 2001) was another approach to make office rooms interactive. The device used a rotating mirror to steer the light from a projector onto different surfaces of an environment and employed a video camera to detect hand interaction with the projected image using computer vision techniques. It was envisioned as a permanent setup for collaborative work in meeting rooms. In 2004 a more robust system allowed direct manipulation of digital objects by combining hand tracking with projected interfaces (Letessier & Bérard, 2004). Although it was confined to a planar display surface, this simplification allowed a latency below 100 ms on regular computers.

In 2010 a novel algorithm using a depth camera as a touch sensor for arbitrary surfaces was presented (Wilson, 2010). It allows users to interact with projected content without instrumenting the environment. Hardy and Alexander (2012) improved Wilson's algorithm in their UbiDisplays toolkit by clustering points based on neighbor density.
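Wilson's core idea can be illustrated with a minimal sketch: a depth pixel counts as a touch candidate if it lies in a thin band just above the calibrated surface. The 4-12 mm band and the plain list-based depth image are illustrative assumptions, not the published parameters.

```python
def touch_pixels(depth, surface, near=4, far=12):
    """Return (row, col) pixels whose depth lies 4-12 mm above the surface."""
    hits = []
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            height = surface[r][c] - d      # distance above the surface
            if near <= height <= far:
                hits.append((r, c))
    return hits

surface = [[1000] * 4 for _ in range(3)]    # flat table at 1000 mm depth
depth = [row[:] for row in surface]
depth[1][2] = 992                           # fingertip 8 mm above the table
print(touch_pixels(depth, surface))         # [(1, 2)]
```

The clustering step added by Hardy and Alexander would then group adjacent candidate pixels into individual touch points.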

The focus of these developments has mostly been office use, home use (especially entertainment) or mobile computing. The use of interactive projections in production environments has not been a center of research in computer science so far. One of the rare exceptions is a system for checking the quality of spot welding on car bodies: it projects visual data onto arbitrary surfaces and provides just-in-time information to a user in-situ within a physical work-cell (Zhou et al., 2011). Another recent example that could be attributed to the sphere of production is an assistive system for guiding workers in sterilization supply departments (Rüther, Hermann, Mracek, Kopp, & Steil, 2013). It projects instructions directly into the workplace and assists the workflow. Moreover, a depth sensor is used to detect the user's movements and thus allows a projected user interface.

Figure 2. Consoles Wii with Wii Remote (left) and X-Box 360 with Kinect (right)

Gamification

As the sub-chapter on implicit interaction shows, gaming technologies have always transcended the traditional boundaries of their medium. This process has been described by different terms, e.g. "applied games" or "games with a purpose" (Von Ahn & Dabbish, 2008). The most recent term describing this phenomenon is "gamification" – adequately defined as an "umbrella term for the use of video game elements to improve user experience and user engagement in non-game services and applications" (Deterding, Sicart, Nacke, O'Hara, & Dixon, 2011).

Especially in the context of health, gamification already has a long tradition. In 2007 the "games for health" approach reached a new level with the release of Nintendo's Wii, which was repeated when the Kinect launched in 2010 (see the sub-chapter on implicit interaction). So while gamification is an established concept in the health sector (although it may be called differently in various texts), it is a completely new concept for the domain of production.

As described in the sub-chapter on industrial production, the requirements for new technologies or concepts to be integrated in production environments are high: ideally, the innovations are described in an established industry standard. Even if assistive systems in production use new interaction techniques, these are purely functional: they display instructions to decrease the workers' cognitive load and reduce sources of errors like the use of wrong tools. Although the success of attractive mobile devices has sensitized the providers of assistive systems for production to the importance of user experience (UX), making work more attractive or "increasing fun" has so far not been a goal for assistive systems in production. For this reason, apart from the research presented here, to our knowledge assistive systems in production have not yet been influenced by gamification.

In the context of this work, gamification is seen as a means to achieve "flow" – a mental state in which a person feels fully immersed in an activity, experiencing energized focus and believing in the success of the activity. It is an area where high skill and adequate challenge converge, proposed by Csíkszentmihályi in 1975 and described in several publications (Csíkszentmihályi, Abuhamdeh, & Nakamura, 2005). If tasks in production are redesigned to create and preserve a feeling of flow, they have to scale to match a person's changing performance levels. Also, to be permanently motivating, an activity has to be designed in phases that partly arouse the user and partly give him or her the feeling of control, so that flow comes in waves or curves. This can be achieved by CAAS.
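The scaling of challenge to a person's changing performance can be sketched as a simple control rule: keep the challenge within a band around the current skill level, nudging it down when the user is over-challenged (anxiety) and up when under-challenged (boredom). The 0-10 scale, band width, and step size are illustrative assumptions, not values from the system described here.

```python
def adjust_challenge(skill: int, challenge: int, band: int = 2, step: int = 1) -> int:
    """Nudge the challenge level toward the flow channel around the skill level."""
    if challenge > skill + band:
        return challenge - step   # over-challenged: reduce toward the channel
    if challenge < skill - band:
        return challenge + step   # under-challenged: increase toward the channel
    return challenge              # inside the flow channel: keep as is
```

Applied repeatedly, this rule lets the challenge follow a slowly changing skill level, producing the wave-like alternation between arousal and control described above.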

Target Users: The Impaired and the Elderly

Although every worker benefits from context-aware assistive systems, impaired persons and elderly persons with reduced physical and mental capabilities require such systems the most. CAAS have the potential to empower them to do more complex work or to remain in regular jobs longer. Thus they combine economic benefits with inclusion and address the demographic change.

When we talk about "the elderly", it has been established that the term refers to persons aged 60 and above. However, when talking about "disabilities" the classification is more difficult – especially since recent approaches aim to integrate the interaction of disabled individuals with the society they live in. In the International Classification of Functioning, Disability and Health (ICF), the WHO (World Health Organization) defines disabilities as follows:


Disability is an umbrella term for impairments, activity limitations and participation restrictions. It denotes the negative aspects of the interaction between an individual (with a health condition) and that individual's contextual factors (environmental and personal factors). (World Health Organization, 2001)

The advantage of this classification is the widespread acceptance of the ICF. It was officially endorsed by all 191 WHO Member States in the 54th World Health Assembly in 2001. However, this definition marks a decisive shift in the understanding of the concept of disability: by focusing on the individual's interaction with the environment, the ICF "mainstreams" the experience of disability. In this context, CAAS are the instruments to empower individuals to better overcome problems caused by the (work) environment.

The number of potential users is very large: based on the latest World Health Survey (World Health Organization, 2004), conducted from 2002 to 2004, the average prevalence rate derived for the adult population aged 18 years and over was 15.6% (some 650 million of the estimated 4.2 billion adults aged 18 and older in 2004), ranging from 11.8% in higher income countries to 18.0% in lower income countries. This figure refers to adults who experienced significant functioning difficulties in their everyday lives. The average prevalence rate for adults with very significant functioning difficulties was estimated at 2.2%, or about 92 million people in 2004.

CONTEXT-AWARE ASSISTIVE SYSTEMS

Context-aware assistive systems (CAAS) have been used in industrial contexts before, as the pick-by-light example above has illustrated. However, they were restricted to light barriers, while today's advances in sensor technology allow the whole production process to be supervised using movement data. This results in a new dimension of work assistance, described in the following.

In this sub-chapter we describe a precedent implementation, provide a generic model for CAAS, and finally present a revised and improved implementation.

Precedent Implementation of CAAS Based on Motion Data

First attempts to use motion recognition for assistance in the domain of production have been described in the authors' recent work (Korn, Brach, Schmidt, Hörz, & Konrad, 2012; Korn, Schmidt, & Hörz, 2012). First attempts to integrate projection into the workplace were also described (Korn, Schmidt, & Hörz, 2013b).

However, an extensive evaluation of the resulting implementation (Korn, Abele, Schmidt, & Hörz, 2013; Korn, Schmidt, & Hörz, 2013a) disclosed a major problem: while the resolution of the motion recognition system was sufficient to check which box a worker picked from, it did not suffice to robustly analyze the more intricate movements that occur in assembly processes, like the fastening of a screw.

As a result, this implementation increased the workers' speed – but their error rate increased as well: a speed-accuracy tradeoff occurred. While a certain number of errors and some latency are well tolerated in other domains (e.g. web design or apps for mobile devices), this is not acceptable in the domain of production, for the reasons explained in the background chapter.

As long as the details of the assembly processes cannot be analyzed, CAAS remain partly blindfolded and cannot provide the important feedback on quality-related problems. However, the underlying model already describes the architecture of future CAAS.


Model for CAAS

The model for CAAS mainly draws on the established HAAT model (Human Activity Assistive Technology), which describes four basic components and functions of assistive technology (Cook & Hussey, 1995):

• Activity (areas of performance, e.g. work, play, school);

• Human (“intrinsic enabler”, including skills);

• Context (setting, social, cultural, physical); and

• Assistive technology (“extrinsic enabler”).

On the highest level the CAAS model presented here also separates the human (green area) and the context-aware assistive system (blue area).

The model aims to show the parallels in processing information: both the human and the CAAS share an environmental interface consisting of sensors on the input side and various actors on the output side. The overall aim is that the input side of a CAAS receives enough data for the interpreter to create a fitting model of the current state of the user.

While the physical input (i.e. the user's body movements) can be analyzed with motion technology, the robust derivation of the emotional state requires additional data sources to increase the model's accuracy, e.g. the heart rate or the facial expressions (both of which can potentially be extracted from a high resolution video).

The model’s structural analogies continue on the processing side. Both the human and the CAAS share an interpreter and a generator. The CAAS interpreter uses the data from the envi-ronmental interface to model the human state. This model is then used to determine the user’s position on the flow curve, i.e. to analyze if the current trend of movement goes towards arousal or towards control. This analysis eventually results in an adjustment of the operational mode. This could affect the speed of production, the number of steps to be assembled by this person or even

Figure 3. Overview of preceding system with limited depth resolution

Page 9: igi chapter preprint€¦ · therapists for assistive systems. Four years later in 2010 another controller for a gaming console, Microsoft’s X-Box 360, repeated this revolution

���

$VVLVWLYH�6\VWHPV�IRU�WKH�:RUNSODFH�

the product. Since a typical phase of a curve lasts several minutes, determining the suitable point for changes is of essence.

If, for example, the interpreter needs to determine whether a worker reduces work speed because of boredom or because of exhaustion, specific data reflecting the emotional state (e.g. nervous hand movements, sweat or a fixed gaze) increase the accuracy of the modeled stress level. The behavior after an adaptation of the operation mode will also indicate whether the human state was modeled correctly – in the above example, increased speed would indicate that the state was correctly interpreted as under-challenge, while reactions showing stress symptoms would indicate that the state was misinterpreted and the person was in fact already above the upper challenge limit and outside of the flow channel. Thus the iterative interpretation of behavior changes resulting from the adaptations can be used to correct modeling errors.
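The interpreter logic outlined above can be sketched as a simple classification: a drop in work speed is attributed to over-challenge (exhaustion) or under-challenge (boredom) depending on additional stress indicators. The indicator names and the two-indicator threshold are illustrative assumptions, not part of the published system.

```python
def classify_slowdown(stress_indicators):
    """Map observed stress indicators to a modeled cause of reduced work speed."""
    signs = {"nervous_hands", "sweat", "fixed_gaze"}
    observed = signs & set(stress_indicators)
    # two or more stress signs suggest over-challenge (exhaustion);
    # otherwise the slowdown is modeled as under-challenge (boredom)
    if len(observed) >= 2:
        return "over-challenged"
    return "under-challenged"
```

The iterative correction described above would then re-run this classification after each adaptation of the operation mode and revise the model when the worker's reaction contradicts it.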

Finally, the CAAS generator adapts the interventions: the gamification component (e.g. the speed of visual elements or their size and positioning), the instructions (e.g. by increasing the level of detail in situations of stress or after multiple error occurrences) and the feedback (tone, length, and modality, i.e. visual, audio or both). The adapted interventions are then distributed over various output channels like projections or a monitor, and speakers if auditory feedback is needed.

Improved Implementation of CAAS

The recent approaches towards CAAS in production environments discussed above focused on light barriers or the use of a single motion detection device. The improved prototype (Figure 5) presented here combines three sensor devices:

• A top-mounted Kinect that is used primarily to detect touch with the workplace surface, making use of the UbiDisplays toolkit (Hardy & Alexander, 2012).

• A bottom-mounted Leap Motion that captures hand movements above the surface.

• A top-mounted web-camera that identifies currently used tools and components.

Figure 4. Model of CAAS – abstract version


As the Kinect and the Leap Motion use two separate coordinate systems, the system transforms the points from the Leap Motion and the points from the UbiDisplay into a unified coordinate system. To capture the maximal space above the surface and to optimally track the user's hands, the Leap Motion was mounted at a 30 degree angle (Funk, Korn, & Schmidt, 2014).
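The unification of the two coordinate systems can be sketched as a rotation that compensates the sensor's mounting angle. The choice of rotation axis and the omission of a translation offset are simplifying assumptions; the actual calibration of the published system is not reproduced here.

```python
import math

def rotate_x(point, degrees):
    """Rotate a 3D point around the x-axis by the given angle in degrees."""
    a = math.radians(degrees)
    x, y, z = point
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

def leap_to_table(point, mount_angle=30.0):
    """Map a point from the tilted Leap Motion frame into the table frame."""
    return rotate_x(point, -mount_angle)
```

A full calibration would additionally translate the rotated points so that both sensors share the same origin on the table surface.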

In contrast to precedent approaches, which could not robustly analyze movements with a granularity below one centimeter, this new setup allows identifying and surveying the actions in the work area with millimeter accuracy. The improved CAAS detects movement trajectories and compares them to a reference trajectory. If the trajectory includes errors (like picking the wrong piece, mixing up the order of the working steps or using the wrong tool), the CAAS generator can create an intervention. This intervention can be a simple feedback message, but it can also imply a larger change in the work process, like providing more detailed instructions or even changing to a new product variant to reduce or increase the challenge level.
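The trajectory comparison can be illustrated with a point-wise distance check: a recorded movement is compared against the reference trajectory, and an intervention is triggered when the deviation exceeds a tolerance. The 10 mm tolerance and the point-wise matching are assumptions; a production system would more likely use elastic matching such as dynamic time warping.

```python
def max_deviation(trajectory, reference):
    """Largest point-wise Euclidean distance between two equal-length 2D paths."""
    return max(
        ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(trajectory, reference)
    )

def needs_intervention(trajectory, reference, tolerance_mm=10.0):
    """Flag a recorded movement whose deviation exceeds the tolerance."""
    return max_deviation(trajectory, reference) > tolerance_mm

ref = [(0, 0), (10, 0), (20, 5)]
ok_run = [(0, 1), (11, 0), (20, 6)]    # within millimeter-scale noise
bad_run = [(0, 0), (10, 0), (40, 5)]   # hand moved to the wrong target
```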

As the cognitive and even the motoric condition of impaired persons can vary within short intervals (even within a single day), the improved CAAS implements several levels of adaptation. On the first level, only basic instructions are displayed, e.g. an arrow pointing towards the correct assembly position of the current work piece. On further levels, more detailed instructions provide animations of how pieces are assembled, information on the success of the work step, and interactive manuals for training on the job.

The adaptivity of the system is also used to motivate the workers with gamification elements. The improved implementation of CAAS allows these elements to be projected directly into the working area to give the worker immediate feedback on the current task (Figure 6). The worker's progress in a work process is directly color-coded by a circle which slowly changes color from green to red. This allows users to keep track of their own performance. Achievements like a quick assembly highlight and structure the repetitive work routine. At the same time, the detection of errors described above ensures that no simple speed-accuracy tradeoff occurs.
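The color-coded progress circle can be sketched as a linear blend from green to red over the target cycle time. The linear RGB interpolation is an assumption about the visualization; the system described here only specifies the green-to-red transition.

```python
def progress_color(elapsed, target):
    """Linearly blend from green (on time) to red (target time reached)."""
    ratio = min(max(elapsed / target, 0.0), 1.0)   # clamp to [0, 1]
    red = int(255 * ratio)
    green = int(255 * (1 - ratio))
    return (red, green, 0)

print(progress_color(0, 60))    # (0, 255, 0)  - just started, full green
print(progress_color(60, 60))   # (255, 0, 0)  - target time used up, full red
```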

Finally, the improved CAAS prototype with its high-resolution motion detection allows new use cases for everyday working life, e.g. user-defined tangibles (Funk et al., 2014). Here the user can define everyday objects as tangible controls for digital functions. Using the visual sensors of the system, the tangible objects are recognized based on their visual appearance and do not have to be equipped with markers. The unique combination of sensors also allows performing 3D gestures on objects to add further control. This makes the interaction with future CAAS even more intuitive and natural.

Figure 5. Improved implementation of the CAAS model using multiple sensors

FUTURE RESEARCH DIRECTIONS

The use of multiple sensors in the improved implementation of CAAS provides a huge increase in accuracy and thus allows error detection. Nevertheless, several tasks still have to be addressed to reach a perfect rendition of the CAAS model.

Currently, emotion detection has not been implemented. However, this is not a complex technical challenge, since a simple video camera potentially allows detecting facial expressions, which can then be analyzed by the CAAS interpreter. The implementation of motivational elements which are recognizable but do not draw too much attention away from the work process is also an ongoing research and development process.

The improved CAAS as a whole still needs empirical validation as to what extent speed is increased and errors are reduced. It will be especially interesting to find out whether performance improvements triggered by gamification are lasting or just temporary.

CONCLUSION

Context-aware assistive systems (CAAS) will permanently change the way we work. Like route guidance systems changed the way we drive in unfamiliar areas (and the amount of time we spend in preparation), CAAS will change everyday work routines. In the case of work in production, errors will be addressed "in the making" and persons with cognitive or motoric impairments will be able to remain in active production longer.

On the way towards a "perfect" rendition of CAAS for production work, several technical challenges had to be solved. A major problem was the system's accuracy, which could be addressed by integrating multiple sensors and combining human body tracking with object tracking, as described in this chapter.

Figure 6. Gamification elements projected directly into the working area

The improved CAAS also makes it possible to detect changes in performance and to adjust both the level of feedback and the level of challenge. Ideally, this results in workers staying in the "flow channel", where a high level of concentration is accompanied by a feeling of satisfaction and accomplishment. While this advanced feature has been implemented prototypically, it would benefit from the real-time detection of emotions. Thus the gamification of production processes is an ongoing research process, whose results will be constitutive for future renditions of the CAAS model.
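The idea of keeping a worker in the flow channel by balancing challenge against observed performance can be sketched as a small adaptation rule. This is a minimal sketch under stated assumptions: the thresholds, the scalar challenge level, and the two performance measures are illustrative choices, not values from the chapter's prototype.

```python
def adjust_challenge(error_rate, time_ratio, challenge):
    """Adapt the challenge level from observed performance (illustrative).

    error_rate: fraction of faulty work steps in the last interval.
    time_ratio: actual cycle time / target cycle time (< 1.0 means fast).
    challenge:  current challenge level, e.g. target pace, on a 1-10 scale.
    """
    if error_rate > 0.1 or time_ratio > 1.2:
        # Many errors or slow cycles suggest anxiety/overload: ease off.
        return max(challenge - 1, 1)
    if error_rate < 0.02 and time_ratio < 0.8:
        # Near-flawless, fast work suggests boredom: raise the bar.
        return min(challenge + 1, 10)
    # Performance within bounds: the worker is inside the flow channel.
    return challenge
```

Run periodically, such a rule nudges the task demands up or down so that the perceived challenge stays matched to the worker's current skill, which is the defining condition of the flow channel (Csíkszentmihályi et al., 2005).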

REFERENCES

Bierkandt, J., Preissner, M., Hermann, F., & Hipp, C. (2011). Usability und Human-Machine Interfaces in der Produktion: Studie Qualitätsmerkmale für Entwicklungswerkzeuge (D. Spath & A. Weisbecker, Eds.). Stuttgart, Germany: Fraunhofer-Verlag.

Cook, A. M., & Hussey, S. M. (1995). Assistive technologies: Principles and practice. St. Louis, MO: Mosby.

Csíkszentmihályi, M., Abuhamdeh, S., & Nakamura, J. (2005). Flow. In Handbook of competence and motivation (pp. 598–608). New York, NY: Guilford Press.

Deterding, S., Sicart, M., Nacke, L., O'Hara, K., & Dixon, D. (2011). Gamification: Using game-design elements in non-gaming contexts. In Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems (Vol. 2, pp. 2425–2428). New York, NY: ACM. doi:10.1145/1979742.1979575

Funk, M., Korn, O., & Schmidt, A. (2014). An augmented workplace for enabling user-defined tangibles. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York, NY: ACM. doi:10.1145/2559206.2581142

Hardy, J., & Alexander, J. (2012). Toolkit support for interactive projected displays. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (pp. 42:1–42:10). New York, NY: ACM. doi:10.1145/2406367.2406419

ISO/TC 159/SC 4. (2006). Ergonomics of human-system interaction. International Organization for Standardization.

Kluge, S. (2011, November 21). Methodik zur fähigkeitsbasierten Planung modularer Montagesysteme [Methodology for capability-based planning of modular assembly systems]. University of Stuttgart. Retrieved from http://elib.uni-stuttgart.de/opus/volltexte/2011/6834/

Korn, O., Abele, S., Schmidt, A., & Hörz, T. (2013). Augmentierte Produktion: Assistenzsysteme mit Projektion und Gamification für leistungsgeminderte und leistungsgewandelte Menschen. In S. Boll, S. Maaß, & R. Malaka (Eds.), Mensch & Computer 2013 - Tagungsband (pp. 119–128). München: Oldenbourg Wissenschaftsverlag. doi:10.1524/9783486781229.119

Korn, O., Brach, M., Schmidt, A., Hörz, T., & Konrad, R. (2012). Context-sensitive user-centered scalability: An introduction focusing on exergames and assistive systems in work contexts. In S. Göbel, W. Müller, B. Urban, & J. Wiemeyer (Eds.), E-learning and games for training, education, health and sports (Vol. 7516, pp. 164–176). Berlin: Springer. doi:10.1007/978-3-642-33466-5_19

Page 13: igi chapter preprint€¦ · therapists for assistive systems. Four years later in 2010 another controller for a gaming console, Microsoft’s X-Box 360, repeated this revolution

���

$VVLVWLYH�6\VWHPV�IRU�WKH�:RUNSODFH�

Korn, O., Schmidt, A., & Hörz, T. (2012). Assistive systems in production environments: Exploring motion recognition and gamification. In Proceedings of the 5th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 9:1–9:5). New York, NY: ACM. doi:10.1145/2413097.2413109

Korn, O., Schmidt, A., & Hörz, T. (2013a). Augmented manufacturing: A study with impaired persons on assistive systems using in-situ projection. In Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments (pp. 21:1–21:8). New York, NY: ACM. doi:10.1145/2504335.2504356

Korn, O., Schmidt, A., & Hörz, T. (2013b). The potentials of in-situ-projection for augmented workplaces in production: A study with impaired persons. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 979–984). New York, NY: ACM. doi:10.1145/2468356.2468531

Kronberg, A. (2013). Zwischen Pädagogik und Produktion: Qualitätsmanagementsysteme in Werkstätten für behinderte Menschen [Between pedagogy and production: Quality management in sheltered work organizations]. Lützelsdorf, Germany: Rossol. Retrieved from http://www.verlag-rossol.de/titel/kronberg-qm-in-wfbm/

Letessier, J., & Bérard, F. (2004). Visual tracking of bare fingers for interactive surfaces. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (pp. 119–122). New York, NY: ACM. doi:10.1145/1029632.1029652

Pinhanez, C. S. (2001). The everywhere displays projector: A device to create ubiquitous graphical interfaces. In Proceedings of the 3rd International Conference on Ubiquitous Computing (pp. 315–331). London, UK: Springer-Verlag. doi:10.1007/3-540-45427-6_27

Rüther, S., Hermann, T., Mracek, M., Kopp, S., & Steil, J. (2013). An assistance system for guiding workers in central sterilization supply departments. In Proceedings of the 6th International Conference on Pervasive Technologies Related to Assistive Environments (pp. 3:1–3:8). New York, NY: ACM. doi:10.1145/2504335.2504338

Schmidt, A., Kranz, M., & Holleis, P. (2005). Interacting with the ubiquitous computer: Towards embedding interaction. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies (pp. 147–152). New York, NY: ACM. doi:10.1145/1107548.1107588

Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., & Blake, A. (2011). Real-time human pose recognition in parts from single depth images. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (Vol. 2). IEEE. doi:10.1109/CVPR.2011.5995316

United Nations, Department of Economic and Social Affairs, Population Division. (2013). World population prospects: The 2012 revision. Author.

Von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51(8), 58–67. doi:10.1145/1378704.1378719

Wellner, P. (1991). The DigitalDesk calculator: Tangible manipulation on a desk top display. In Proceedings of the 4th Annual ACM Symposium on User Interface Software and Technology (pp. 27–33). New York, NY: ACM. doi:10.1145/120782.120785

Wilson, A. D. (2010). Using a depth camera as a touch sensor. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (pp. 69–72). New York, NY: ACM. doi:10.1145/1936652.1936665


World Health Organization. (2001). The international classification of functioning, disability and health (ICF). Retrieved October 29, 2013, from http://www.who.int/classifications/icf/en/

World Health Organization. (2004). World health survey. Retrieved from http://www.who.int/healthinfo/survey/en/

Zhou, J., Lee, I., Thomas, B., Menassa, R., Farrant, A., & Sansome, A. (2011). Applying spatial augmented reality to facilitate in-situ support for automotive spot welding inspection. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry (pp. 195–200). New York, NY: ACM. doi:10.1145/2087756.2087784

ADDITIONAL READING

AAL Contents Working Group Task Force. (2013). ICT-based solutions for supporting occupation in life of older adults. Retrieved from http://www.aal-europe.eu/wp-content/uploads/2013/03/AAL-2013-6-call-text-20130326.pdf

Anders, T. R., Fozard, J. L., & Lillyquist, T. D. (1972). Effects of age upon retrieval from short-term memory. Developmental Psychology, 6(2), 214–217. doi:10.1037/h0032103

Bailey, R. W. (1989). Human performance engineering: Using human factors/ergonomics to achieve computer system usability. Englewood Cliffs, NJ: Prentice Hall.

Cook, A. M. (2010). The future of assistive technologies. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (p. 1). ACM Press. doi:10.1145/1878803.1878805

Geller, T. (2014). How do you feel? Your computer knows. Communications of the ACM, 57(1), 24–26. doi:10.1145/2555809

McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. Penguin Books.

Reeves, B., & Read, J. L. (2009). Total engagement: Using games and virtual worlds to change the way people work and businesses compete. Harvard Business Press.

Salthouse, T. A. (1990). Working memory as a processing resource in cognitive aging. Developmental Review, 10(1), 101–124. doi:10.1016/0273-2297(90)90006-P

Schmidt, A. (2000). Implicit human computer interaction through context. Personal Technolo-gies, 4(2-3), 191–199. doi:10.1007/BF01324126

United Nations Convention on the Rights of Persons with Disabilities. (2008). Retrieved from http://hpod.pmhclients.com/pdf/ConventionImplications.pdf

KEY TERMS AND DEFINITIONS

Context-Aware Assistance (CAA): CAA is provided by an assistive system which uses sensors (e.g. motion sensors) to model the user in order to generate real-time feedback.

Flow: A mental state in which a person feels fully immersed in an activity, experiences energized focus, and believes in the success of the activity.

Gamification: The use of elements from game design like leaderboards and achievements in non-game areas.

HAAT-Model: An established interaction model describing four basic components and functions related to the use of assistive technology.

In-Situ: A Latin expression for “directly on the spot”. In the case of this work this means directly in the workspace.

Kinect: A sensor using motion recognition. It was developed by Microsoft and originally intended as a peripheral for the X-Box gaming console. Later versions can be used with any PC.

Motion Recognition: A technology using infrared (IR) light to generate 3D maps of a person or an object. The sensor devices Kinect and Leap Motion use this technology.

Pick-by-Light: A technology used in manual assembly in production environments. It marks the next box a worker has to pick parts from. Often the system integrates a light barrier to check if the pick actually took place.

Tangibles: In the realm of computer science “tangible objects” are real-world objects which can be used to interact with software.

User Experience (UX): A concept that broadens the concept of usability. Besides aspects such as utility, ease of use, and efficiency, it includes a person's emotions when interacting with a system.