An automated system for reading hand measurements in patients with rheumatoid arthritis
Aaron [email protected]
Computer Science BSc Hons
Supervisor: Dr. Kevin Curran
December 2011
Contents
Abstract
Acknowledgements
1. Introduction
1.1 Project Aims & Objectives
1.2 Existing approaches
1.3 Project Approach
1.4 Chapter Overview
2. Background & Related Work
2.1 Methods of detecting and measuring hand movement
2.1.1 Current physical goniometric methods
2.1.2 Camera-based movement detection
2.1.3 Glove-based systems
2.2 Considerations for Patients Suffering Rheumatoid Arthritis
2.2.1 Patient Mobility
2.2.2 Patient Comfort
2.3 Research Conclusions
3. Requirements Analysis
3.1 Problem Statement
3.2 Functional Requirements
3.3 Non-Functional Requirements
3.4 Detailed Functional Requirements
3.4.1 Use-Case Diagrams
3.5 Software Development Methodologies
4. Project Planning
4.1 Plan of Work
4.2 Time Allocation and Milestones
4.3 Gantt Chart
4.4 Conclusion
5. Design
5.1 Kinect Sensor
5.2 Software
5.3 System design
5.4 Database
5.5 Form design
5.6 Measurements
6. Implementation
6.1 Technical structural overview of system
6.2 Determining finger information
6.3 Monitoring exercises
6.4 Sampling information
6.5 Graphical user interface (GUI)
7. Testing
7.1 Functional tests
7.2 Non-functional tests
8. Evaluation
8.1 Evaluation of functional requirements
8.2 Evaluation of non-functional requirements
8.3 Summary of evaluation
8.4 Future work and enhancements
MySQL dump of database
References
Abstract
Rheumatoid arthritis affects around 1% of the world’s population. Detection of the
disease relies heavily on observation by physicians. The effectiveness of such assessments
depends on the observer’s ability and experience and can therefore vary between
examiners. This research aims to investigate the use of the Xbox Kinect for monitoring
rheumatoid arthritis patients as a cost-effective and precise method of assessment.
A system has been developed which implements the Kinect sensor for hand
recognition and digit measurement. This system performs the tasks usually
completed by a physician, such as digit dimension monitoring and exercise observation.
Measurements taken include digit width and height; measurements which can be
taken at different distances from the Kinect and in varied environmental conditions.
The tests completed are stored in a database for retrieval and analysis at a later
date. This allows the physician to monitor a patient over a period of time without requiring
multiple appointments where the measurements are taken manually. Ultimately, the system
demonstrates that a Kinect-based solution is not only plausible but highly reliable and
functional in many scenarios which call for regular observation of a patient. Because the
system is designed to be portable and easy to use, it is well suited both to physicians
monitoring patients in a clinic and to patients wishing to monitor their own condition in
their homes.
Acknowledgements
I would like to thank Dr Kevin Curran, who has inspired and supported me throughout this project. His continued direction and assistance have been invaluable in the development of this solution.
I would also like to thank my family and friends for their irreplaceable support and encouragement.
1. Introduction
Rheumatoid arthritis (RA) is a chronic disease that mainly affects the synovial joints
of the human skeleton. It is an inflammatory disorder that causes joints to produce more
fluid and increases the mass of the tissue in the joint resulting in a loss of function and
inhibiting movement in the muscles. This can lead to patients having difficulties performing
activities of daily living (ADLs). Treatment of RA is determined by physicians through X-rays,
questionnaires and other clinical techniques: for example, circumference measurements
taken with a tape measure, or grip strength measured with a dynamometer. There is no
cure for RA, but clinicians aim to diagnose it quickly and
offer therapies which alleviate symptoms or modify the disease process. These treatment
options include injection therapy, physiotherapy, manual therapy (i.e. massage therapy or
joint manipulation) and drugs which can reduce the rate of damage to cartilage and bone.
These treatments are assisted by patient education. Patients are shown methods of
joint protection, educated in the use of assistive tools to aid in ADLs and shown altered
working methods. This document proposes to research the viability of using the Xbox Kinect
camera and sensor to accurately record a patient’s hand measurements. Its proposed
functionality would allow detection of joint stiffness over a period of time. If shown to be a
viable option it would aid in the diagnosis of RA and the discovery of appropriate treatments
for the patient.
1.1 Project Aims & Objectives
The main purpose of this document is to analyse the relevant issues faced when
implementing a system designed to assist in the assessment of rheumatoid arthritis
patients. The primary aim of this project is to assess the viability of a Kinect-based software
system for the real-time and historical measurement of hand movement and deformation in
RA patients. Its development is proposed as a practical improvement over current
goniometric measurement methods.
1.2 Existing approaches
Existing methods and approaches to digitally measuring hand dimensions and
movement have failed to address the key issues surrounding RA treatment. While these
solutions seek to allow automatic and accurate measurement, many use non-commercial
hardware and rely on proprietary software, which can be very expensive. In addition, these
devices tend to be highly technical and require the supervision of a trained technician.
1.3 Project Approach
The hand recognition and measurement system designed and implemented in this
project will aim to present a functional and user-friendly alternative to current goniometric
measurement methods. It will attempt to overcome the challenges and limitations of other
physical systems and establish an effective solution to the issues they commonly share.
1.4 Chapter Overview
Chapter 2 provides a background and review in the areas of defining rheumatoid arthritis,
current goniometric assessment methods, state-of-the-art computer vision and
contemporary glove-based hand monitoring technology. It also outlines important
considerations for working with RA patients.
Chapter 3 is a requirements analysis, specifying increased detail on the proposed system
and the constraints which apply to it.
Chapter 4 is a development plan. Herein lies the time-management and project planning
documentation.
2. Background & Related Work
2.1 Methods of detecting and measuring hand movement
Measuring hand movement in this context refers to the ability of a given system to
determine finger-digit movement in relation to the rest of the hand. Also, some methods
may allow for automatic detection and measurement of swelling and deformities of the
hand. These characteristics are essential when tackling the development of a system aimed
at assessing the symptoms and progression of an RA patient.
2.1.1 Current physical goniometric methods
Current goniometric methods for monitoring and assessing joint mobility and
deformity are mostly analogue. Among the measures and practices used to establish the
patient’s disease activity are several self and physical assessments. These are essential for
the continued treatment of RA in a patient, allowing the physician to determine joint-
protection exercises as well as potential medicinal treatment in the form of anti-
inflammatory and auto-immune medications.
It is recommended that measurements of a patient’s hand be taken at regular visits to
their doctor (Handout on Health: Rheumatoid Arthritis, 2009). These assessments can
include hand measurements, blood tests (among other lab tests), and X-ray imaging of the
affected hand.
Sphygmomanometer (Grip Pressure)
A sphygmomanometer is used to assess a patient’s grip strength in their affected
hand. This is achieved by inflating the cuff of the device to a standard pressure in a rolled
manner, then having the patient grip the cuff in the palm of their hand. After the patient has
squeezed the cuff, the physician can take a reading of pressure which can be used to
indicate the patient’s grip strength (Eberhardt, Malcus-Johnson, & Rydgren, 1991). However,
using the modified sphygmomanometer can produce misleading results. This instrument’s
pressure gauge is activated when the patient squeezes the air-filled compartment (Ashton &
Myers, 2004). The limitation of this is that patients with larger hands will have artificially
lower pressure readings than patients with smaller hands. This is due to the variance in
pressure applied over the surface area (Fess, 1995).
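The effect Fess (1995) describes follows directly from the definition of pressure (P = F / A): the same grip force spread over a larger contact area yields a lower cuff reading. The short sketch below illustrates this relationship; the forces and contact areas are hypothetical figures chosen for illustration, not clinical calibration values.

```python
def cuff_pressure(force_newtons, contact_area_cm2):
    """Pressure in kPa produced by a grip force spread over a contact area."""
    area_m2 = contact_area_cm2 / 10_000   # cm^2 -> m^2
    pascals = force_newtons / area_m2     # P = F / A
    return pascals / 1_000                # Pa -> kPa

# The same 200 N grip force: the larger contact area reads lower.
small_hand = cuff_pressure(200, 60)   # 200 N over 60 cm^2
large_hand = cuff_pressure(200, 90)   # same force, larger hand
print(round(small_hand, 1), round(large_hand, 1))
```

Because the gauge reports pressure rather than force, the larger hand appears weaker despite exerting an identical grip force, which is exactly the bias noted above.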
Jamar Dynamometer
The Jamar dynamometer is seen as a reliable alternative to the modified
sphygmomanometer. It is a hydraulic instrument, functioning within a sealed system. It
measures grip strength in kilograms or pounds of force (Ashton & Myers, 2004). Its
versatility, simple operation and cost-effective design make this method easily
accessible (Fess, 1987). It has been found to provide accurate readings and the results are
reproducible (Hamilton, Balnave, & Adams, 1994). This is an additional benefit the Jamar
dynamometer has over its mechanical counterpart the Stoelting dynamometer. This
mechanical method measures tension when force is applied to a steel spring and is not
viewed as a reliable measurement (Richards & Palmiter-Thomas, 1996).
Questionnaires
In assessing the patient’s discomfort, the physician must rely on several
questionnaires in order to gain an understanding of disease progression.
The Stanford health assessment questionnaire, for example, is designed to assess the
average morning stiffness the patient feels in the affected joints of their hands (Eberhardt,
Malcus-Johnson, & Rydgren, 1991). This is measured and recorded in minutes and is used to
gain an understanding of how long it takes for the patient’s joints to loosen and become
supple again.
Similarly, the patient is assessed on their experience of pain levels since their
previous examination. This is done through a questionnaire in which they must evaluate
their pain levels and discomfort over the preceding period.
Another assessment of this form comprises several questions regarding ability
and pain when performing ADLs. Commonly, the ADLs which are assessed include “dressing
and grooming”, eating, cutting and preparing food and general hand control over cups and
jars (Eberhardt, Malcus-Johnson, & Rydgren, 1991).
Patients are also assessed using the Visual Analogue Scale to measure their level of
pain and discomfort. This consists of a line, marked on one side as “No pain” and on the
other as “Worst pain imaginable” and patients are asked to mark a spot along the line which
reflects their current feeling (Schofield, Aveyard, & Black, 2007).
Similarly, a Health Assessment Questionnaire is designed to establish the patient’s
ability to perform daily tasks, with each question grading their capability using a “four-point
grading system”. This measures their “daily functionality level” (Fries, Spitz, Kraines, &
Holman, 1980).
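The scoring behind an index of this kind can be sketched as follows, assuming the common HAQ convention: each item is graded 0–3 on the “four-point grading system”, each category takes its worst (highest) item score, and the index is the mean of the category scores. The category names and grades below are illustrative only, not a reproduction of the actual questionnaire.

```python
def haq_index(responses):
    """Disability index from a dict mapping category -> list of 0-3 item grades.

    Each category scores as its worst item; the index is the mean of the
    category scores, giving a value between 0 (no disability) and 3.
    """
    category_scores = [max(items) for items in responses.values()]
    return sum(category_scores) / len(category_scores)

# Hypothetical responses for three illustrative ADL categories:
example = {
    "dressing and grooming":  [1, 2],
    "eating":                 [2, 3],
    "gripping cups and jars": [1, 1],
}
print(haq_index(example))  # mean of category maxima 2, 3 and 1
```

This makes explicit why the questionnaire yields a single comparable number per visit, which a physician can track over time.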
Radiographic assessment
As a result of RA, joints in a patient’s hand and fingers can suffer bone erosion to
varying degrees. In order to measure and document this, the patient will undergo
radiographic tests in the form of X-ray imaging and MRI scans of the affected areas. This
shows how the bones in the patient’s hand are affected and can be monitored over a period
of time to show disease progression and activity.
Another method which has the potential to highlight key areas of bone-degradation
and joint swelling is ultrasound imaging of the affected hand. This offers a less invasive
method of assessing bone density and level of swelling in the patient. However, Chen,
Cheng & Hsu (2009) have shown that the “prognostic value of MRI is not directly transferable to
Ultrasound” and therefore it is not yet an adequate option for assessment.
Clinical tests
Typically, several clinical tests are performed to establish the disease activity level,
including urine, blood and other laboratory tests. From these tests, the patient’s erythrocyte
sedimentation rate (ESR) and C-reactive protein (CRP) results are established (DAS Booklet - Quick
reference guide for Healthcare Professionals, 2010). In patients with rheumatoid arthritis,
the CRP and ESR levels are used as a measurement and indication of inflammation in the
patient’s joints (Black, Kushner, & Samols, 2004).
General techniques
When visiting their doctor, the patient will have their movements assessed in the
areas where their RA is affecting them. The physician will check for the presence of finger-
thumb drift, swan neck/boutonniere deformity, as well as Bouchard and Heberden nodes
(Rheumatoid: Hand Exam, 2011). The examination consists of a patient placing their hand
flat on a table (where possible – depending on patient discomfort) with their elbow and
wrist resting flat. Using a goniometer, the physician examines (in degrees) extension,
flexion, adduction and abduction of the proximal interphalangeal (PIP), metacarpophalangeal
(MCP) and distal interphalangeal (DIP) joints of the fingers (Arthritis: Rheumatoid Arthritis,
2008).
This determines thumb-index finger drift (position of index finger away from thumb)
and palmar abduction (de Kraker, et al., 2009). The measurements are all documented in
handwritten forms and are recorded to aid future assessments. These readings are all
influenced by the physician’s training and observation, and can therefore vary between
examiners.
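The angle a goniometer reads at a joint can equivalently be computed from the positions of the joint and its two neighbouring landmarks, as the angle between the two bone segments meeting at the joint. The sketch below illustrates this calculation with hypothetical 2D coordinates; it is not the manual procedure described above, but the same geometry underlies automated landmark-based measurement.

```python
import math

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint` between its two adjacent bone segments."""
    # Vectors from the joint out to each neighbouring landmark.
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / mag))

# Fully extended finger: the segments are collinear, so the angle is 180.
print(round(joint_angle((0, 0), (1, 0), (2, 0))))  # 180
# Flexed joint: the distal segment is bent downwards at a right angle.
print(round(joint_angle((0, 0), (1, 0), (1, -1))))  # 90
```

A clinician would typically report flexion as the deviation from a straight finger, i.e. 180 degrees minus this value.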
2.1.2 Camera-based movement detection
There are many options when attempting to determine the movement of a subject via
camera-based methods. Providing a system with “computer vision” and allowing it to assess
variables such as movement, size and depth of an object is the goal of camera-based
solutions. Some camera-based solutions require proprietary hardware, while others are able
to utilise common devices and existing technologies.
Open Source Computer Vision (OpenCV)
OpenCV is a cross-platform function library focusing on real-time image processing.
The aim of this library is to supply an application with “Computer Vision”, the ability to take
data from a still or video camera and transform it into a new representation or a decision
(Bradski & Kaehler, 2008). By taking pixel location and colour information, the library builds
an image matrix which it uses to “see”.
OpenCV was originally developed and released by Intel. Since its release in 1999, the
library has allowed a method of tracking motion within captured video and given developers
the ability to discern movement angles and gestures. In terms of utilising images to
make a decision, this refers to the ability of a given system to automatically
identify people or objects within a scene. Functions like this are possible with statistical
pattern recognition, located within a general purpose Machine Learning Library (MLL)
included in the library. This allows for implementation of many features including “Object
Identification, Segmentation and Recognition, Face Recognition, Gesture Recognition,
Camera and Motion Tracking” (Chaczko & Yeoh, 2007).
The library is now supported by Willow Garage, giving it a consistent release
schedule, so the project remains fully supported and reliable for future development. Use
of the library allowed the Stanford Racing Team to complete and win the DARPA Grand
Challenge, an autonomous vehicle race in which OpenCV was used to provide the vehicle
with “Computer Vision”.
OpenCV is optimised to run on Intel-based systems, where it can take advantage of
the Intel Integrated Performance Primitives (IPP). Bradski & Kaehler (2008) note that while
the library consistently outperforms other vision libraries (LTI and VXL), its own processing
is optimised by about 20% when IPP is present.
The OpenCV library works well with installed camera drivers to ensure that it
functions with most commercially available devices. This allows developers to create
applications and rely on non-proprietary, widely available camera equipment, making
cost and development far more practical for potential developers. Furthermore,
in relation to potential environments and scenarios in which applications may be deployed,
utilising existing cameras and commonly available devices means that applications can be
implemented in a wide array of locations.
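The “image as matrix” idea underlying OpenCV can be illustrated without the library itself: treating a frame as a matrix of pixel intensities, thresholding it and locating an object is a miniature version of the segmentation and contour functions described above. The tiny greyscale frame and threshold below are invented for illustration; a real system would operate on camera frames via the library.

```python
# A 4x4 "frame": low values are background, high values a bright object.
frame = [
    [10,  12,  11, 10],
    [10, 200, 210, 11],
    [12, 205, 198, 10],
    [11,  10,  12, 10],
]

def bounding_box(image, threshold):
    """(min_row, min_col, max_row, max_col) of pixels brighter than threshold."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, value in enumerate(row) if value > threshold]
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return min(rows), min(cols), max(rows), max(cols)

print(bounding_box(frame, 128))  # the bright 2x2 object: (1, 1, 2, 2)
```

OpenCV performs the equivalent steps (thresholding, contour finding, bounding rectangles) over full-resolution video at frame rate, with the matrix representation making each step a bulk array operation.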
Prosilica GC1290 System
The Prosilica GC1290 system is a product designed to facilitate the measurement of
a hand for patients with RA (GigE Vision for 3D Medical Research, 2010). Designed by
threeRivers 3D, the device is intended to monitor the joint swelling in a hand by recording
changes in the volume of the patient’s joints. A metal frame (80 cm high, 60 cm wide and
40 cm deep) houses a total of four cameras and scanners. Two 3D laser scanners project
patterns and grids onto the patient’s hand, and the reflected patterns are used to create a
3D representation. The laser scanners are equipped with a monochrome camera in order to
record this image and identify the laser grid. A colour camera picks up a standard image and
is used to monitor joint deformation, while a thermal imaging camera detects joint
inflammation. There is also a device located near the hand rest intended to measure
thermal information; this provides reference readings such as the ambient room
temperature and the patient’s general thermal profile.
All data taken from the device is recorded and displayed in real time in order to
minimise problems such as motion blurring because of hand movement. This data is then
processed by proprietary software packaged with the device to display this information (at
32 frames per second) to the patient and the physician. The software system used is also
deployable to all major operating systems (GigE Vision for 3D Medical Research, 2010). With
the range of information gathered by this device, physicians can gather very specific and
relevant information on a patient and process it in a relatively short period of
time. Similarly, using a device which outputs measurements of a patient’s hand standardises
the procedure and readings, making them easier to compare. This is because the information
gathered by the device is statistical and provides a quantitative assessment of disease
progression. Furthermore, this limits human error in measurements taken and does not rely
on the physician’s judgement. The Prosilica system does have some drawbacks, however.
Since the device is bespoke and designed specifically for medical use, it is not commercially
available; acquiring one requires direct contact with the manufacturer. This also has an
adverse effect on the affordability of the device. The device itself is relatively large,
consisting of the aforementioned cameras and frame. While the device could be suited for
use in a physician’s office or surgery, it would not accommodate home visits and
physician mobility. In cases where a physician is required to perform a home visit to the
patient, it is not feasible for the device to accompany them due to its size and associated
cost.
Microsoft Kinect
The Kinect is a device which facilitates the translation of real-world objects and
motion into 3D representations. The basics of the device were initially developed by
PrimeSense, who later licensed the technology to Microsoft. The device utilises a number of
sensors in order to accumulate input which can be compiled into a digital representation. An
IR transmitter projects a pattern of dots onto the target area1, and an infra-red (IR) camera
located next to the transmitter reads this pattern in order to return a depth map. The
sensor also contains a standard RGB (visible spectrum) camera which provides a colour
image of the target area. The colour input camera receives information at a resolution of
640x480 pixels while the IR receiver gathers input at 320x240 pixels. Both cameras run at 30
frames per second. The field of view on the depth image is 57.8 degrees (Limitations of the
Kinect, 2010).
The device also contains a microphone array for receiving sound input (which can
allow voice recognition and commands). This consists of 4 microphones placed along the
bottom of the Kinect. Lastly, the Kinect features a motorised base. This base allows the
sensor bar to be targeted, adjusting its position to acquire the best perspective of the
target space; it can tilt the sensor by up to 27 degrees vertically in either direction. All of
these features of the Kinect make it capable of processing
an area to determine distance to an object as well as colour and audio ambience.
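These figures suggest how a pixel span could be converted into a real-world distance, which is the core of any digit-measurement use of the sensor. Under a simple pinhole-camera approximation (an assumption for illustration, not the Kinect SDK’s own calibration), the width visible at a given depth follows from the stated 57.8 degree field of view, and dividing by the 320-pixel depth-image width gives millimetres per pixel:

```python
import math

FOV_DEG = 57.8        # stated horizontal field of view of the depth image
DEPTH_WIDTH_PX = 320  # stated width of the depth image in pixels

def mm_per_pixel(distance_mm):
    """Approximate real-world width of one depth pixel at a given distance."""
    visible_width = 2 * distance_mm * math.tan(math.radians(FOV_DEG / 2))
    return visible_width / DEPTH_WIDTH_PX

# A finger spanning 6 depth pixels at 800 mm from the sensor:
finger_width_mm = 6 * mm_per_pixel(800)
print(round(finger_width_mm, 1))
```

Real depth cameras expose calibrated intrinsics through their SDKs; this approximation only shows the order of magnitude involved, and why the subject’s distance must be known before pixels can be read as millimetres.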
While a standard camera with computer vision software may be able to determine
objects in a space, it can become difficult if there is a lack of colour differentiation between
the object and the surrounding space. Tölgyessy & Hubinský (2011) assert that with the
extra cameras and sensors, performing tasks such as image segmentation becomes a lot
easier, especially with the distance threshold which can be assigned to the input. This allows
unwanted background data to be filtered out and reduces the noise in the input. Microsoft
has also released an SDK which contains drivers and other files associated with producing an
application utilising the Kinect. The SDK allows for the device to be used with a Windows 7
operating system and supports C++, C# and Visual Basic programming languages. Along with
access to the raw sensor information the Kinect is gathering, the SDK also allows for skeletal
tracking (identifying humans and human gestures) via bundled libraries (Ackerman, 2011).
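The distance-threshold segmentation Tölgyessy & Hubinský describe can be sketched as follows. The depth row and cut-off values are hypothetical, chosen only to match the Kinect’s stated 0.6 m minimum range; real sensor output would be a full 320x240 array of millimetre readings.

```python
NEAR_LIMIT_MM = 600   # the sensor's stated minimum usable range
FAR_LIMIT_MM = 1000   # beyond this, readings are treated as background

def segment(depth_row):
    """Keep in-range depth readings; replace everything else with None."""
    return [d if NEAR_LIMIT_MM <= d <= FAR_LIMIT_MM else None
            for d in depth_row]

# Hypothetical readings: a hand at ~0.8 m in front of a wall at ~2.4 m,
# with one invalid (0) reading where the IR pattern was not detected.
row = [2400, 2350, 820, 815, 810, 2300, 0]
print(segment(row))  # [None, None, 820, 815, 810, None, None]
```

Discarding pixels outside the band leaves only the hand, so later measurement steps never see the background clutter that defeats purely colour-based segmentation.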
1 Kinect Dots - Night Vision with Kinect Nightvision Infrared IR http://www.youtube.com/watch?v=-gbzXjdHfJA&feature=related
One of the main advantages of the Kinect is the accessibility of its hardware. The
Kinect is a relatively advanced device allowing for computer vision. By combining advanced
hardware with a commercial price-point and making an SDK available, Microsoft have
allowed developers to capitalise on the capabilities of the device at relatively low cost. This
promotes its use in varied environments, since maintenance and cost are small compared
with other advanced computer vision utilities. The device is also portable.
The Kinect sensor was designed and built for home use, making it reliable in many
conditions. For optimal functionality, the device requires standard room lighting. It requires
that the room be lit well enough that the standard RGB camera can pick up information but
also not so bright that the IR patterns become indistinguishable (Carmody, 2010). A
downside of the system is that for accurate readings, the subject must be at least the
minimum distance from the device. This minimum distance for the Kinect sensor is 0.6m and
the maximum range is between 5 m and 6 m (Limitations of the Kinect, 2010).
However, there is an inexpensive add-on for the Kinect which acts as a zoom lens, reducing
the minimum distance required.
2.1.3 Glove-based systems
As an alternative to current goniometric methods, there have been many
investigations into glove-based technologies. These aim to assess a patient’s finger and joint
movement in order to aid in diagnosis and treatment of RA. Existing glove-based solutions
use varied methods of reading joint mobility and tension. Among the technologies used are
sensors using magnetic technology, electrical resistors and contacts or LEDs with flexible
tubes (Dipietro, Sabatini, & Dario, 2008).
5DT Data Glove
Previous research into the use of glove-based technologies has shown the 5DT Data
Glove to be among the most accurate and versatile gloves available (Condell, et al., 2010). It
utilises fourteen fibre-optic sensors, with two sensors per digit and one sensor for each
knuckle on the patient’s hand. It also has a tilt sensor mounted on the back of the hand to
measure the orientation of the patient’s hand. The sensors work by measuring the light
travelling through each fibre. As the patient moves their hand, the stress on the sensor
changes, altering the amount of light reaching the receiver.
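One plausible way such a light reading could be mapped to a joint angle is linear interpolation between two calibration poses (flat hand and fully flexed). This is an illustrative assumption: the 5DT glove’s actual calibration is handled by its bundled SDK, and the raw sensor values below are invented.

```python
def calibrate(raw_flat, raw_flexed, flexed_angle_deg=90.0):
    """Build a raw-reading -> angle converter from two calibration poses."""
    span = raw_flexed - raw_flat

    def to_angle(raw):
        # Linear interpolation: flat pose -> 0 deg, flexed pose -> flexed_angle_deg.
        return (raw - raw_flat) / span * flexed_angle_deg

    return to_angle

# Hypothetical readings: bending the fibre reduces the light that arrives.
to_angle = calibrate(raw_flat=1000, raw_flexed=600)
print(to_angle(800))  # halfway between the calibration poses -> 45.0
```

A per-patient calibration of this kind would also partly compensate for the fit problems noted below, since the reference poses are recorded on the wearer’s own hand.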
The glove is produced by Fifth Dimension Technologies and allows for accurate
measurement of hand/finger movements, passing the information via USB to either the
bundled software or software developed to utilise specific aspects of the glove. To
support the creation of custom software, the glove comes with a cross-
platform SDK allowing developers to make better use of the data they collect.
However, this glove is only beneficial if the hand to be tested is always the same
(left or right) hand of a patient. Since the glove is designed to fit only one side,
a different glove must be used should measurements be desired from the other
hand. Furthermore, if measurements are to be taken from a patient whose hand is a
different size than the glove which is available, a more suitable one must be found.
Dipietro et al. (2008) also found that the most accurate results were read from the
device when the cloth of the glove fit the patient’s hand well. If the cloth was too tight, the
glove would restrict movement in the patient and give readings which were more extreme
than the actual movements. If the glove material was loose on the patient,
readings were not representative and were less than the actual movements. While the glove
allows for highly accurate readings from the patient’s hand, it has some
problems which are intrinsic to its design. Gloves like this one are designed to measure hand
movements and gestures while the software has been designed to incorporate that use into
hand assessment tools for RA patients. One of the main symptoms of RA is hand and finger
deformation along with “periarticular osteopenia in hand and/or wrist joints” (Arnett, et al.,
1988). Combined, this results in limitations to hand movements and articulation. Thus, the
finger and wrist articulation which is needed in order to manoeuvre the hand into a glove
can become painful and difficult.
2.2 Considerations for Patients Suffering Rheumatoid Arthritis
Solutions designed to facilitate and aid diagnosis of vulnerable patients – ones which
are in chronic or debilitating pain, for example – face an array of unique requirements.
Rheumatoid Arthritis affects around 1% of the population (Worden, 2011) and causes
synovial joints in affected areas to become inflamed due to extra synovial fluid being
produced. This can lead to a breakdown of the cartilage in the joints and can cause the
bones in the joint to erode. As a result, patients commonly exhibit deformation in their
fingers and joints, as well as reporting regular and occasionally disabling pain (Majithia & Geraci,
2007).
2.2.1 Patient Mobility
Typically, assessment assumes the patient can attend their local medical practitioner
for tests or treatment. This can become difficult, however, if a
patient has limited mobility. For a patient who is suffering RA, it is possible that their disease
is afflicting more than one set of joints in their body. Also, having the disease increases the
risk of osteoporosis in the patient due to the nature of the disease and the medication they
are required to take (Handout on Health: Rheumatoid Arthritis, 2009).
In effect, this can mean that the patient would require home visits more commonly
than a patient who is not suffering joint pain. Physicians required to visit the home of their
patients in order to assess the current disease progression and possible treatments must
have access to portable equipment. Such equipment must therefore be mobile, easy to
set up and inexpensive. Portable, low-cost equipment does exist that aids
treatment at home, however these methods have their own limitations that must be
considered. The Jamar dynamometer has proven to be an inexpensive and reliable gauge of
grip strength, providing data used in assessment. However, in patients with decreased
mobility, a grip strength test can aggravate their symptoms and increase levels
of pain. This option is also open to false readings from patients unwilling to exert their
maximum grip strength due to the uncomfortable nature of the test (Richards & Palmiter-
Thomas, 1996). There appears to be no measurement device for recording patients'
treatment progression that is portable, cost-effective and minimises patient discomfort.
2.2.2 Patient Comfort
It is important to understand the difficulty some RA patients have in completing
simple movements. In order to gain some insight it is essential to comprehend how their
joint function compares with average joint function (Panayi, 2003). Healthy joints require
little energy to move and the movement is usually painless. In RA patients, however, the
joints have a thickened lining, crowded with white blood cells and blood vessels. Movement
of affected joints not only causes bone erosion but also triggers the release of a chemical
within the white blood cells, causing a general ill feeling (Panayi, 2003). These secreted
substances cause the joint to swell and become hot and tender to the touch, while also
inducing varying levels of pain. Increased swelling, triggered by the white blood cell
response, causes joint deformation.
Severe joint deformity can render traditional methods, such as the manual devices
mentioned earlier, ineffective. However, it also presents limitations for the more
advanced methods currently being developed. The glove method requires the patient to fit
their hand into a standard size glove. This method fails to address the fact that RA patients
do not have standard joint movement; therefore manoeuvring their hand into the glove
could cause unnecessary pain and discomfort. Additionally, a standard glove does not
accommodate joint deformity, especially not the extreme deformities that are
symptomatic of RA. The difference in finger and joint size is also not considered. RA patients
usually have symmetrical joint deformity, i.e. if the third knuckle on their right hand is
affected then it is likely that the same joint on the left hand will be affected (Panayi, 2003).
Expanding this example, if the same joint on both hands is swollen then the glove can fit
either the swollen joints or the surrounding joints appropriately, but not both. This increases
result variability as hand movement cannot be standardised. In order for the glove method to
accurately measure joint movement and limit discomfort a custom version would be needed
for each patient. This would not be a viable option since the progression of RA would
require patients to have multiple gloves fitted.
2.3 Research Conclusions
Current goniometric tests are not repeatable and are subject to human error. This
can lead to adverse effects on patient treatment. However, proposed solutions in the
areas of glove-based measurements fail to address the fundamental issues like patient
comfort and differing hand sizes. Moreover, the cost incurred with these solutions renders
the systems impractical. In order to maximise patient comfort during testing, an external
non-contact device is needed for RA patients. This is one of the proposed benefits of a
potential Kinect method. The patient would perform movement tasks but they would not be
restricted by any outside materials. Movement would be recorded digitally aiding treatment
analysis.
The Kinect’s versatility and cost effectiveness address accessibility issues. It would be
a beneficial, portable piece of equipment that could be purchased by physicians and also
patients. Therefore patients could carry out movement tasks daily; the results would be
recorded by the Kinect and a computer. The data could then be assessed by the physician at
a later date. A continual data supply would aid treatment planning and could also indicate
differences in movement throughout the day, providing a fuller picture of movement
functionality than is currently assessed within the time restrictions of patient
appointments. Also, since the Kinect is an external sensor and is only a means of
providing raw data to a software system, a computer-vision library such as OpenCV can
potentially be implemented to handle the standard image recognition tasks. This maximises
the effectiveness of the Kinect since it would be combining the libraries available with the
Kinect SDK and also the open source libraries which are contributed to by a large community
of developers.
3. Requirements Analysis
Detailed in this section is an analysis of the problems surrounding the development of
software for a camera-based solution to current rheumatoid arthritis patient assessment.
This section outlines the functional and non-functional requirements of the proposed
solution and details several development methodologies which will be considered. The
selected methodology will be chosen based on its merits in meeting the requirements of the
solution for design and implementation.
3.1 Problem Statement
Several key problem areas relating to an RA assessment solution have already been
identified in this document. Existing methods in practice by doctors
and physicians involve physical measurements and personal judgement to assess the
patient’s disease progression. This results in some measurements being inaccurate due to
human aspects like perspective and personal opinion which can have adverse effects on
patient treatment. These methods can prove inconsistent and fail to provide an accurate
representation of the patient’s current disease level. Furthermore, many attempts to
automate this process of assessment via glove-based and camera based systems have
proven ineffective, not taking into account aspects of patient comfort and mobility as
outlined in section 2.2.
The aim of this project is to implement a software based solution which will
incorporate the use of the Microsoft Kinect movement sensor to monitor hand movements
in patients with RA. Of the current solutions available, most utilise proprietary hardware
which tends to be expensive. With the use of advanced features of the Kinect – a
commercially available product – this project aims to make the solution affordable and
effective. The solution will provide digital feedback on the measurements of a patient’s
hand (size, joint angles) over time in order to assess disease progression. Further to this, it
will allow physicians to have the patient perform exercises and the system will determine
maximum flexion and extension for the manipulated joints among other necessary
calculations. These calculations and readings will be collected and stored in a database so
that the historical data can be viewed by the physician, expediting treatment selection and
disease analysis.
3.2 Functional Requirements
Wiegers (2003) describes the functional requirements of a system as the expected or
intended behaviour, documented “as fully as necessary”. While this is a difficult part of the
development process, it gives the developer a proper definition of exactly what the
proposed system is intended to do. The following is a succinct list of the functional
requirements of the system which integrates Kinect functionality with a software based
solution. These have been established based on research in the area of RA and from
communications with RA patients.
The proposed system will be able to:
- determine base hand measurements of the patient
- determine initial joint angles at resting position
- monitor maximum flexion of a specified joint during a predefined exercise
- monitor maximum extension of a specified joint during a predefined exercise
- assess the time taken to perform a predefined exercise
- establish a connection with a database in order to record measurements and assessments
- give real-time feedback on measurements
- run on Windows-based computers
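The flexion and extension requirements above reduce to a simple geometric calculation: the angle at a joint, given the positions of the joint and its two neighbouring landmarks from the hand-tracking data. The sketch below is illustrative only (the system itself would be written in C#, and the landmark coordinates are hypothetical):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by landmarks a-b-c.

    Points are (x, y) tuples, e.g. fingertip, joint and knuckle
    positions taken from the sensor's hand-tracking data.
    """
    # Vectors from the joint to each neighbouring landmark.
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / mag))

# A fully extended finger gives an angle near 180 degrees.
print(round(joint_angle((0, 0), (1, 0), (2, 0))))   # 180
# A right-angled flexion gives 90 degrees.
print(round(joint_angle((0, 0), (1, 0), (1, 1))))   # 90
```

Tracking the minimum and maximum of this angle over an exercise yields the maximum flexion and extension figures the requirements call for.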
3.3 Non-Functional Requirements
Chung, Cesar & Sampaio (2009) state that the requirements analysis of a project is
essential as it establishes what the “real-world problem is to which a software system might
be proposed as a solution.” In addition, Wiegers (2003) also defines non-functional
requirements as the “performance goals” of the system; including aspects of design such as
“usability, portability, integrity, efficiency, and robustness”. Below is a list of the non-functional
requirements of the proposed system. It is categorised based on the
recommendations of Roman (1985): Interface, Performance, Operating, Software and
Hardware Requirements.
Interface Requirements
This section details how the system will interface with its environment, users and
other systems.
The system will:
- conform with HCI best practices in sections which exhibit user interfaces
- utilise a display which presents an easy-to-understand depiction of the measurements
Performance Requirements
This section details how the system will behave in order to meet its functional
requirements in an optimal manner. This includes addressing unexpected situations and also
methods which will be employed in order to allow the continued operation of the system.
The system will be able to:
- cope with or present notification of adverse lighting conditions for image recognition
- handle erroneous measurements taken from the system, disregarding readings which are outside of logical bounds
- connect to the database for historical data with little or no wait before the information is retrieved
- automatically determine if the patient's hand exhibits deformity in order to construct the activities or exercises which will be performed by the patient during examination
- determine if the subject is in the correct operating space for optimal reception of information in the sensors, and adjust or notify accordingly
- deal with unexpected closure of the software application or disconnection of the Kinect sensor
- run on laptops or desktop computers running Windows, allowing for connectivity of the Kinect sensor via standard USB 2.0 connections
- display sensitive patient information only to the appropriate users (i.e. if the system is used by multiple physicians, a physician will only see their own patients)
- encrypt database information so that sensitive data cannot be accessed on the host machine
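The requirement to disregard readings outside logical bounds can be sketched as a simple range filter. The bounds below are illustrative assumptions, not values specified in this document:

```python
# Plausible bounds (mm) for adult finger widths; these limits are
# illustrative assumptions, not values taken from the report.
FINGER_WIDTH_MM = (5.0, 40.0)

def filter_readings(readings, bounds=FINGER_WIDTH_MM):
    """Discard measurements outside logical bounds, as the
    performance requirements describe."""
    lo, hi = bounds
    return [r for r in readings if lo <= r <= hi]

print(filter_readings([18.5, 0.0, 21.0, 250.0]))  # [18.5, 21.0]
```

In practice the bounds could be tightened per patient once baseline hand measurements have been taken.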
Operating Requirements
This section details aspects of the design which account for possible user limitations
and system accessibility in the case of maintenance. This also includes elements such as
portability and repair.
The system will be:
- accessible to physicians who have appropriate login information and patient data
- user-friendly, allowing users with little or no training to perform an assessment of a patient by following on-screen prompts
- easily maintained or replaced, as it consists of a commercially available (and relatively inexpensive) device
- robust enough to withstand the lifecycle of a typically busy physician's office
- portable in cases where it is required on home visits (the system would consist of a laptop and Kinect sensor)
Hardware Requirements
This section details the hardware which is required to develop the system and which
is required in its implementation. The hardware required for the development and
implementation does not differ.
The required hardware is:
- a Microsoft Kinect sensor (no modification necessary)
- a sufficiently powerful computer or laptop, capable of running the software outlined in the requirements below
Software Requirements
This section details the software required to design and implement the application.
The required software is:
- Microsoft Visual Studio 2010
- the Kinect SDK (contains drivers required to access the sensor)
3.4 Detailed Functional Requirements
Through the use of the Unified Modelling Language (UML), this section will detail the
requirements specified in section 3.2.
3.4.1 Use-Case Diagrams
What follows is a specification of the potential use-case scenarios of the proposed
system. These use-cases define the actors and their interactions with the system and one
another. The primary actors in this system are the physician (or potentially, a nurse) and the
patient being assessed. Interaction between the system itself, the physician and the patient
is intrinsic to its design and usage. Figure 1 shows the possible high-level use-case of the
system.
Figure 1: Use Case scenario
The use-cases reflect only the high-level actions performed within the system. These
actions are described below.
Configure Environment – The physician configures the Kinect sensor by placing it in position
and connecting it with the computer. This also includes the application portion of the
system initialising. Once it has been started, the physician will configure the application for
use with the individual patient, setting up intended exercises, personal information and
other data which may be recorded.
Begin assessment – The physician begins the assessment in the application. The system will
notify the physician if there are any issues with the current set-up configuration (including
positioning of the sensor). The system will first make note of the patient's hand dimensions.
Next, it prompts the patient to perform the prescribed exercises, monitoring joint angulation
and the extremes of the movements performed.
Make assessment – The physician analyses the historical data for the current patient; this
includes previous hand measurements and evaluations performed in the system. From this
data, the physician can determine a course of action for the treatment of the patient. This is
then recorded along with the digital readings taken by the system and saved to a database
for future reference.
3.5 Software Development Methodologies
Software development methodologies consist of a framework designed to structure
the design and implementation process. The main elements of the software development
lifecycle are outlined within a methodology in order to establish the plan which will result in
the best possible software being developed. Structurally, most methodologies refer to these
elements as Analysis, Design, Testing, Implementation and Maintenance (Conger, 2011). The
main aim of this section is to establish the most prevalent software development
methodologies and determine the most appropriate one for this project.
The Waterfall Model
Due to its straightforward nature, the waterfall model has survived since the early
days of software design. In this structure, elements of the development cycle flow down
through the model from one section to the next. Conger (2011) states that the waterfall
model is easily described, where “output of each phase is input to the next phase”. Similarly,
Conger (2011) asserts that the traditional outcome of the waterfall model is an entire
application. This means that at each stage of the cycle the overall product is assessed and
the model is examined in order to best design the entire system and consider it as part of
the development before implementation. However, one of the main tenets of this
methodology is a strong reliance on documentation at each stage of the development.
Boehm (1988) states that this can become the “primary source of difficulty with the
waterfall model”. In projects producing interaction-intensive systems, the end
result may not be fully realised until the system requirements are established, which can
result in documentation being forced at a stage when it is premature.
Rapid Application Development
The RAD model allows for faster software development and implementation. In this
structure, requirements and designs are changed and updated as the product itself is being
produced. This results in the system and the documentation being produced at the same
time, allowing for late changes and updates to be made. This model is very adaptable and
will allow for unforeseen software issues or new requirements being introduced to a
project. These are introduced to the specification and are implemented as part of the
overall design. Often, the main stakeholders in the system have an active role to play
throughout the process. However, this methodology suffers some criticisms. While it offers
the stakeholders a strong input into the project at all levels, this can become detrimental to
the project design and implementation, as it commonly leads to scope creep. In this
scenario, the system being developed has a specification which is constantly
shifting to match what the stakeholders see of the in-progress design.
Incremental and Iterative Development
Incremental development implements a structure whereby the development of an
application is broken down into key segments which are reassessed (through each of the
different sections of a traditional methodology: analysis, design and implementation). At the
initial stages of this process, a basic implementation of the designed system is produced.
This allows the stakeholders to get an idea of overall functionality and then through added
increments, additional functionality is included in the specification. This methodology allows
for issues in design and implementation to be established and addressed early in the
software development lifecycle. Each iteration of the software can be considered a
functional implementation of the design.
Deciding the Most Appropriate Methodology
The most appropriate methodology differs from project to project. With unique
constraints and requirements, the structure of the development process must also be
tailored. Deciding which of the methodologies listed above is the most appropriate is
determined by understanding these key requirements. It is important to realise that the key
functionality of this system will be recognising and measuring a patient’s hand. Therefore,
this functionality is a priority for the system and is essential for it to prove effective in use.
However, further functionality is required for a better user experience (visual feedback in UI,
historical data). Furthermore, the development process itself may introduce issues of
software limitations that are unforeseen until the implementation of the system is
performed. It makes most sense, therefore, to implement an incremental and iterative
strategy to the development process. This methodology requires that a functional version of
the system be created from the outset, with extra functionality being layered on top of this
initial design. In practice, this would mean that the proposed system would have the most
important features designed and implemented first to ensure functionality. Later, extra
layers of functionality can be added which improve user experience but the system as a
whole is never rendered useless by an incomplete layer. This provides an overall modular
design, making testing and bug-tracking easier since individual layers can be removed.
4. Project Planning
This project aims to develop a camera based solution for assessment of RA patients.
The following chapter will address the proposed timeframe and structure of the
development of a Kinect camera-based solution. Further, it will outline how possible issues
faced in the creation of the system will be mitigated, and detail the proposed IDEs and
other development tools.
4.1 Plan of Work
Adhering to the incremental and iterative approach to software development, this project
has been separated into desired functionality areas. These areas are:
- Recognising the patient's hand
- Establishing hand dimensions
- Monitoring pre-defined exercises
- Integrating the system with a database
These general areas allow the development to be separated into individual iterations. These
iterations represent products with key functionality.
Iteration 1
Following this iteration the system shall:
- Feature a basic user interface which allows the application to be started and the information being received from the Kinect to be shown
- Display the raw sensor data from the Kinect, allowing the information being received to be seen and compared with interpreted information
- Determine whether a hand is present in front of the sensor when required (only under ideal conditions at this stage)
This stage does not require information essential to assessment be shown. The primary goal
is to get the system functional.
Iteration 2
As well as performing the functions implemented in the first iteration, at the end of this
iteration the system shall:
- Recognise the presence of a hand in less-than-ideal conditions, allowing for a more versatile usage environment
- Be capable of displaying basic hand dimensions such as width and height in profile
- Implement a more functional UI in order to display the measurements of the hand being monitored
Iteration 3
As well as performing the functions implemented in the second iteration, at the end of this
iteration the system shall:
- Implement a more functional UI which displays the designated exercises for the patient to perform, allowing the system to make more accurate readings
- Take readings of the extremes of motion (flexion and extension) in the joints from the patient's hand movements
- Differentiate between resting and in-motion states
Iteration 4
As well as performing the functions implemented in the third iteration, at the end of this
iteration the system shall:
- Be able to connect with a database in order to record the measurements of the patient's hand and exercises
- Perform some encryption on the information stored
- Allow the physician to analyse historical measurements of the patient's hand via an adequate UI
4.2 Time Allocation and Milestones
Below is the intended time allocation for each section of the development process, based on
the iterative steps outlined in section 4.1.
Development Stage | Time (weeks) | Start date
System design | 2 | 02/01/2012
Iteration 1 in development | 3 | 16/01/2012
Iteration 2 in development | 2 | 06/02/2012
Iteration 3 in development | 2 | 20/02/2012
Iteration 4 in development | 4 | 05/03/2012
Testing and Evaluation | 1 | 02/04/2012
Finalising system and project | 1 | 09/04/2012
The milestones below will allow progress in the development process of the system to be
judged.
# | Milestone | Date of completion
1 | Complete system design | 16/01/2012
2 | Complete iteration 1 in development | 06/02/2012
3 | Complete iteration 2 in development | 20/02/2012
4 | Complete iteration 3 in development | 05/03/2012
5 | Complete iteration 4 in development | 02/04/2012
6 | Complete testing | 09/04/2012
7 | Complete project | 16/04/2012
4.3 Gantt Chart
4.4 Conclusion
Ultimately, this report is intended to produce the fundamental starting blocks which
will form the foundation of a solid project. This document achieves this by detailing the core
issues surrounding the contemporary and state of the art solutions, presenting these
findings in a manner which will aid in the development of the proposed system. By outlining
key problems such as patient comfort during assessment, this document has demonstrated
that current approaches such as glove-based methods have fundamental
weaknesses in design. It is here that the solution this document proposes will prove
effective.
Further, the development plan outlined in section 4 of this document will allow for
frequent assessment of project progression, ensuring adherence to schedule and efficient
resolution to potential issues. Similarly, by conforming to the development methodology
outlined in section 3.5, the system will be sure to have a fundamental basic iteration which
allows for increased functionality to be layered on top of an already working system. These
aspects all point to the elements outlined within this document being the optimal method of
analysing a solution when proposing a software system. This document will ensure that the
system created will have the most potential possible to become a fully realised and
functional utility to aid physicians treating patients with RA.
5. Design
This section documents the planning and design of the project. Included in this section are the
technical details and descriptions of the hardware used (Kinect Sensor) and of the software which is
the basis of the system. Furthermore, this section describes the process whereby the system will
determine finger dimensions from Kinect image data.
5.1 Kinect Sensor
A Kinect Sensor is the medium chosen to receive the images of the subject’s hand. The Kinect is
currently available in two models; the “Xbox 360 Kinect” and the “Kinect for Windows”. Both
models are functional with a Windows-based PC and can utilise the Kinect SDK released by
Microsoft. The Kinect for Windows has been modified to allow for readings to be taken much closer
to the device than allowed by the Xbox 360 version. This ensures a greater accuracy of data taken
from the subject. For the purpose of this design the software will be designed to work with both the
Kinect for Windows and the Xbox 360 Kinect; allowing users to utilise whichever is more accessible
with the knowledge that Kinect for Windows readings will be more accurate at closer ranges.
Figure 2: Microsoft Kinect for Windows
For the design of this project, the Xbox Kinect will be used due to affordability and accessibility of
the device. However, all code produced can run on both platforms as much of the business logic
which handles transferring information from the device to the development computer is achieved
through drivers and image-processing libraries like the Kinect SDK. This abstraction allows for
maximum versatility in the system.
5.2 Software
To achieve the level of abstraction from hardware necessary to facilitate both versions of Kinect,
several libraries are used to pass the raw information to the program. Also, the drivers which are
initially installed when using the Xbox 360 Kinect work extremely well when utilising the Kinect SDK.
However, due to the main focus of that SDK being “skeletal tracking” (meaning that the Kinect SDK is
designed to pick up and monitor full body movements) it falls quite short when attempting to use it
for the purpose of hand recognition.
PrimeSense hardware drivers/OpenNI middleware
The PrimeSense hardware drivers work with the OpenNI framework to provide an alternative to the
Microsoft-issued drivers and SDK. This open-source combination provides a level of detail
unachievable in the standard SDK when using the Xbox 360 Kinect. By default, the standard
Microsoft SDK declares all data within a range of less than ~80cm of the device unusable. This makes
it extremely difficult to register information on a hand since the level of detail needed is much
higher than that which is afforded by the SDK. However, the PrimeSense/OpenNI framework
allows for usable readings from as close as ~50cm.
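The practical effect of these range limits can be illustrated by checking what fraction of a depth frame falls inside the near window needed for hand-level detail. The sample depth values below are hypothetical; the ~50cm and ~80cm limits come from the discussion above:

```python
# Usable depth window (mm): the Microsoft SDK of the time rejected
# data closer than ~800 mm, while PrimeSense/OpenNI allowed ~500 mm.
NEAR_MM, FAR_MM = 500, 800

def usable_fraction(depth_mm):
    """Fraction of depth samples that fall inside the near window
    needed for hand-level detail (values in millimetres)."""
    inside = [d for d in depth_mm if NEAR_MM <= d <= FAR_MM]
    return len(inside) / len(depth_mm)

# Hypothetical row of depth samples from a single frame.
print(usable_fraction([450, 520, 610, 790, 900, 1200]))  # 0.5
```

A check like this could also back the performance requirement that the system notifies the user when the subject is outside the optimal operating space.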
CandescentNUI
The CandescentNUI project is an open-source Kinect library which works with both versions of the
Kinect to track hand information with high accuracy². This library works with either C# or C++
and can be used along with the Kinect SDK or OpenNI. For the purposes of this project OpenNI is the
framework of choice since it would work to a higher level of accuracy in the Xbox 360 Kinect than is
achievable through the SDK. Furthermore, C# is chosen as it would allow the use of the Windows
Presentation Foundation for interface design.
Security
Primarily, security of the user’s data is ensured by having all readings and information stored in a
secure web-based server. This data can be encrypted to safeguard against breaches and can be
accessed remotely using secure login credentials.
The login information the user must provide on each start-up of the system will be a unique
username (pre-set before use) and password combination. The user’s password will also be hashed
2 Candescent NUI project page at CodePlex http://candescentnui.codeplex.com/
using an MD5 function, a one-way hashing algorithm, so that only the digest, and never the
plain-text password, is stored on the server.
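As a sketch of the hashing step described above (Python's standard hashlib is used here purely for illustration; the actual system would use the equivalent .NET routine), note that only the fixed-length digest is stored and a login check compares digests rather than plain text. Modern practice, it should be said, favours salted, deliberately slow hash functions over plain MD5:

```python
import hashlib

def hash_password(password):
    """One-way MD5 hash as described in the design; only the digest
    is stored on the server, never the plain-text password.
    (Illustrative only: production systems should prefer salted,
    slow hashes such as bcrypt over MD5.)"""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

stored = hash_password("s3cret")
print(len(stored))                         # 32 hex characters
print(stored == hash_password("s3cret"))   # True: login compares digests
```

The 32-character digest fits comfortably in the 40-character password column of the tblUsers design shown later.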
5.3 System design
The user will log in to the system via a login window and will proceed to be presented with a window
showing their readings for the past month. From here they can choose to take new readings or
perform some hand exercises.
Figure 3: System navigation
To ensure that the measurement and exercise sections work properly the system needs to
determine that the Kinect sensor has been connected and is receiving data. If this is not the case the
system will prompt the user with an error until it has been connected. However, the history section
will work without the need for the Kinect to be connected to the system; allowing access to the data
even without the hardware needed for full usage of the system.
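The connect-and-prompt behaviour described above can be sketched as a polling loop. The sensor and prompt callables here are stand-ins, since the real system would query the Kinect drivers and raise a dialog in the UI:

```python
import time

def wait_for_sensor(is_connected, prompt, poll_seconds=1.0, attempts=5):
    """Block until the sensor reports connected, prompting the user on
    each failed poll, as the design describes for the measurement and
    exercise sections. `is_connected` and `prompt` are injected
    callables, since the real system talks to the Kinect drivers."""
    for _ in range(attempts):
        if is_connected():
            return True
        prompt("Kinect sensor not detected - please connect it.")
        time.sleep(poll_seconds)
    return False

# Simulated sensor that comes online on the third poll.
state = {"polls": 0}
def fake_connected():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_sensor(fake_connected, print, poll_seconds=0))  # True
```

The history section would simply bypass this check, matching the design's decision to allow data access without the hardware attached.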
5.4 Database
Using Connector/NET it is possible to integrate a C# system with a web-based MySQL database
implementation. This is the preferred option of database technology for a number of reasons:
- Existing familiarity with MySQL formats and development
- Web-based data access allowing remote tests to feed back to a centralised database
- Security of information: hand readings are never stored locally but instead on a web server
For design purposes a localhost server set-up will be implemented in order to test functionality. This
is directly scalable to a production server environment at a later date.
The system utilises the user’s ID property as a key to link records across the tables. This allows the
records to be quickly gathered and sorted for the user upon request. The relationship between the
data taken from the Kinect and the web-based server is shown in Figure 4.
Figure 4: Kinect system server relationship
Database design
tblUsers

Field Name | Type | Properties
(KEY) user_id | Int(10) | Key – simple int to create user relations
username | String(16) | String to hold the user's preferred username
password | String(40) | String to hold the hashed (MD5) password
Figure 5: Users table design
tblReadings
Field Name | Type | Properties
(KEY) reading_id | Int(10) | Key – to arrange readings
timestamp | timestamp | Time at which the reading was taken
thumb_width | double | Contains user finger width
index_width | double | Contains user finger width
middle_width | double | Contains user finger width
ring_width | double | Contains user finger width
pinky_width | double | Contains user finger width
thumb_height | double | Contains user finger height
index_height | double | Contains user finger height
middle_height | double | Contains user finger height
ring_height | double | Contains user finger height
pinky_height | double | Contains user finger height
(FK) user_id | Int(10) | Foreign key from user table to gather results
Figure 6: Readings table design
5.5 Form design
XHand Recognition Interface - Login
Please enter login information:
Username
Password
Login
XHand Recognition Interface - Measurements
RESULTS
Thumb 20mm
Index 18mm
Middle 19mm
Ring 18mm
Pinky 17mm
Measure right hand Measure left hand
XHand Recognition Interface - Exercise
Begin exercise Please stretch hand to display all fingers.
When all fingers are identified,close hand into a fist. The time from start to finish will be recorded
XHand Recognition Interface - History
Show data 07/09/2012 – 14/09/2012
Date | Thumb | Index | Middle | Ring | Pinky | Hand
08/09/2012 - 09:30am | 20 | 21 | 20 | 20 | 20 | Right
08/09/2012 - 10:30am | 21 | 21 | 20 | 19.5 | 20 | Left
08/09/2012 - 11:30am | 20 | 18 | 20 | 20 | 20 | Right
08/09/2012 - 12:30pm | 19 | 21 | 20 | 19 | 19 | Left
08/09/2012 - 01:30pm | 20 | 22 | 20 | 21 | 20 | Right
5.6 Measurements
The data gathered from the CandescentNUI provides many valuable pieces of information which can
be analysed and used to produce valid results.
Firstly, the system will only proceed to measure hand data when all 5 fingers are present and
readable by the program. This will ensure accuracy since the hand has to be properly oriented in
order for the fingers to be recognised. To establish a single finger width the process in Figure 9 is
employed:
Figure 7: Finger base value location
Figure 8: Finger height analysis
Figure 9: Finger width measurement process
This process ensures that each finger examined by the system is assigned to a relevant local
variable and thus can be analysed and stored based on which part of the hand object it belongs to.
Employing this method will also mean the readings from the fingers are as accurate as possible no
matter the orientation of the hand.
6. Implementation
This section details the methods through which the raw image and depth information
received from the Kinect sensor are used to create useful hand and finger data. Also detailed is the
implementation of the form designs from chapter 5 into a fully realised C# system. Finally, the
methods used to access and manipulate data from the web-based database server are described
along with explanations of code design.
6.1 Technical structural overview of system
As previously mentioned in chapter 5, the system uses several supplementary software
frameworks in order to receive and analyse information from the Kinect sensor. The Kinect sensor
reads the scene and this image and depth information is passed from the Kinect to the OpenNI
framework via a set of 64-bit PrimeSense drivers. The CandescentNUI implemented in the main C#
program then accesses the OpenNI data stream and constructs it into usable objects for use by the
hand recognition system.
The graphical user interface (GUI) is constructed using Visual Studio 2010’s designer and
XAML code. The type of interface created is a Windows Presentation Foundation (WPF) project
which allows for efficient form navigation in the form of XAML “pages” which can be linked to and
navigated away from. This layout keeps the system light-weight and efficient, since pages are
loaded only as and when they are needed and do not cause unnecessary background processing.
Furthermore, the GUI is developed in such a way as to be approachable and
easy to navigate for any user since a main objective of this project is to test viability of patient home-
use.
The Connector/NET addition to the C# project, which allows integration with MySQL, also
facilitates writing MySQL statements directly within the code.
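The original report showed this code as a screenshot. A minimal sketch of how a statement might be issued through Connector/NET follows; the connection string and variable names are assumptions, while the table and column names follow the designs in section 5.

```csharp
// Hedged sketch: issuing a parameterised MySQL statement via Connector/NET.
using MySql.Data.MySqlClient;

var connection = new MySqlConnection(connectionString);
connection.Open();
var command = new MySqlCommand(
    "SELECT * FROM tblReadings WHERE user_id = @id;", connection);
command.Parameters.AddWithValue("@id", userId); // parameterised user input
MySqlDataReader reader = command.ExecuteReader();
```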
6.2 Determining finger information
To begin, we create an instance of the data source in order to handle the information
coming from the Kinect into the OpenNI framework:
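The screenshot of this code is missing from the transcript; a sketch of what it may have looked like, with class and property names assumed from the CandescentNUI sample code, is:

```csharp
// Hedged sketch: create the OpenNI-backed data sources and cap the depth
// range at 900mm, beyond which hand contours become unreliable.
var factory = new OpenNIDataSourceFactory("config.xml");
var clusterSettings = new ClusterDataSourceSettings { MaximumDepthThreshold = 900 };
var clusterSource = factory.CreateClusterDataSource(clusterSettings);
```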
This code also sets the maximum depth range for the Kinect sensor to receive data as
900mm; this is chosen because at ranges close to and above 900mm, with an image resolution of
640x480 pixels, hand and contour information becomes very difficult to establish and the integrity of
the data is questionable.
The CandescentNUI code which is used for the majority of the hand recognition returns
detailed information back to the recognition interface. In the system, a listener is set up to handle
new frames of information being returned from the Kinect sensor.
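The listener setup described here may have looked like the following sketch (event and factory names are assumptions based on the CandescentNUI samples):

```csharp
// Hedged sketch: create the HandDataSource, attach the per-frame listener,
// and start recognition. All computation begins at Start().
var handSource = new HandDataSource(
    DataSourceFactory.CreateShapeDataSource(clusterSource, videoSource));
handSource.NewDataAvailable += OnNewHandData; // fires for each new frame
handSource.Start();
```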
This code creates a new “HandDataSource” object which we can then use to establish the
listener for the event when a new frame becomes available for analysis. Furthermore, this also
allows us to start and stop the hand recognition functions; all computation begins when the
hand.Start() method is initiated.
In order to give feedback to the user of the computation that is being performed on the
image data being received, we add a raw data window which contains much of the pertinent
information involved with the hand recognition. This window may be presented in either RGB (full
color) image format or a black and white depth interpretation of the data. For the purposes of this
system, it is more effective to show the depth data since it gives the user a better idea of where
their hand needs to be in order to achieve optimal readings - Figure 10.
Now, when the depth data is being analysed by the system we can automatically pass it
forward to the interface of the system in order for the user to observe the changing depth and finger
recognition information as seen in Figure 10.
Figure 10: Sensor depth data and RGB with finger recognition
The location of the cluster information and finger information overlays aligns much better
with the depth data than with the RGB. This is because the two cameras which receive this
information are physically offset from one another.
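The code referred to below was a screenshot in the original; a minimal sketch of the per-frame update, assuming a callback that receives the current image source, is:

```csharp
// Hedged sketch: on each updated frame, the new image is passed to the
// raw-data video control by manipulating its source property.
void OnNewDepthFrame(IImageSource source)
{
    videoControl.Source = source.CurrentValue; // push the new frame to the UI
}
```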
In the context of the code above, videoControl is the name assigned to the raw data video
window placed on the interface. Upon each updated frame, the new information is passed to this
control in the form of manipulating its source property.
Lastly, we add a few layers of information over the top of the raw depth data in order to
make it more presentable to the user.
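The two overlay layers may have been added as in this sketch; the layer class names are assumptions drawn from the CandescentNUI WPF sample:

```csharp
// Hedged sketch: layer the hand outline and cluster overlays over the depth image.
videoControl.AddLayer(new HandLayer(handSource));       // contour + finger pin-points
videoControl.AddLayer(new ClusterLayer(clusterSource)); // dot-matrix cluster display
```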
These two layers contain the outline of the hand when it has been recognised along with
finger point and base pin-points as well as the cluster information which is used to determine the
whole hand. This looks like a matrix of dots which appear over the user’s hand in the image to show
that it has been recognised as seen in Figure 11.
Figure 11: Hand cluster data
The data generated by Candescent allows for the hands on screen to be enumerated and for
each hand to have a number of “finger” elements. Within these objects numerous pieces of position
and depth data are accessible.
In order to establish which finger the data is associated with, we must first determine which
finger is which in the hand object. To achieve this, a method utilising the logic of finger position on
the X axis is used.
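A sketch of this X-axis logic, for the left-hand case described below, might look as follows (finger property names are assumptions):

```csharp
// Hedged sketch: for a left hand, the digit furthest along the X axis is the
// thumb. Once assigned, it is removed so the next pass only considers
// unassigned digits.
var fingers = hand.Fingers.ToList();
var thumb = fingers.OrderByDescending(f => f.FingerPoint.X).First();
fingers.Remove(thumb); // re-enter the hand object for the remaining digits
```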
This code is run for each digit. It is written for the case of the left hand being presented, as the
thumb in that context is the digit located furthest along the X axis. As each digit is assigned it is
removed from the hand object, allowing the system to re-enter the hand object and assign any
remaining fingers.
Each finger has a “BaseLeft” and “BaseRight” property as well as a “FingerPoint”. Each of
these objects has an X, Y and Z (depth) value. It is from these that we are able to determine
dimension data and whether the user's hand is positioned correctly for optimal readings to be taken.
Optimal position for readings is determined by taking the Z value for the left-most digit and Z
value for the right-most digit and comparing them. Using this method we can establish horizontal tilt
of the hand and can formulate an algorithm which will alert the user if their hand is positioned too
far skewed on the X axis.
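The tilt comparison described below might be sketched as follows (the prompt helper is hypothetical):

```csharp
// Hedged sketch: compare the Z (depth) values of the left-most and right-most
// digits; they must be within ±4% of each other for the hand to be accepted.
double zLeft = leftMost.FingerPoint.Z;
double zRight = rightMost.FingerPoint.Z;
if (Math.Abs(zLeft - zRight) > 0.04 * Math.Min(zLeft, zRight))
{
    ShowOrientationNotice(); // hand skewed too far on the X axis
}
```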
This code compares the left-most and right-most digits and allows a threshold of ±4% between
them to permit some tilt of the hand. This was assessed to be the most effective range since it
allows enough freedom of movement that the hand does not need to be held rigidly to be read,
while maximising the integrity of the hand data. This is especially important in this project since it is quite
possible that a sufferer of rheumatoid arthritis will have difficulty orienting their hand to an exact
location in order for the system to assess it. Using this method, the user is afforded quite a lot of
freedom of movement.
The base left and base right values of an individual finger may have different Y axis values
since the hand can be tilted to many different orientations and the system will still detect it and
analyse the data. To overcome this, the finger width must be determined by using Pythagoras’
theorem to determine the distance between the two points.
Figure 12: Finger width Pythagoras method
As seen in Figure 12, the distance on the X axis between the two points can be determined as
“B”. When the distance on the Y axis is also determined we can use it as “A” and find “C” through
Pythagoras' theorem³: c² = a² + b².
A similar method is implemented in order to determine the height of a given finger. Since
the finger can be oriented in a number of different fashions (similar to Figure 12) the Y value of the
finger point is not an adequate reference to the height of the finger from its base. For this reason,
the Pythagorean theorem is again utilised in the following fashion detailed in Figure 13.
Figure 13: Finger height measurements
Since we have already established the finger width, we can halve it and use it as the base
value for a triangle as shown in Figure 13.
3 Pythagorean theorem - http://en.wikipedia.org/wiki/Pythagorean_theorem
To realise the intent of determining the finger widths as demonstrated in Figure 12, the code
below is implemented after establishing each finger.
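The width calculation was shown as a screenshot; based on the description of the "B" line and the `Math.Abs()` usage below, it may have looked like this sketch:

```csharp
// Hedged sketch: the X and Y separations of BaseLeft and BaseRight form
// sides "A" and "B"; the finger width is the hypotenuse "C".
double a = Math.Abs(finger.BaseLeft.Y - finger.BaseRight.Y);
double b = Math.Abs(finger.BaseLeft.X - finger.BaseRight.X);
double width = Math.Sqrt(a * a + b * b); // Pythagoras: c = sqrt(a² + b²)
```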
This will return the width of the current finger so that it can be recorded in the database. By
taking the difference between the “BaseRight” and “BaseLeft” values we can construct a base
line for the triangle to be used for the Pythagorean theorem. The “B” line referred to in this code
snippet is the straight line distance between the lower part of the triangle and the top; effectively
giving the second side of the triangle. The “Math.Abs()” function here is the method for returning an
absolute value which is part of the Math library of C#. The purpose of this is to always return the
difference between the two values, instead of a negative value which may happen if the hand is
presented in an orientation other than that which the system was intended to handle.
In order to find the height of the finger as noted in Figure 13 we implement the following
code:
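Based on the midpoint construction described below, the height function may have been sketched as:

```csharp
// Hedged sketch: build the base midpoint from BaseLeft and BaseRight, then
// take the hypotenuse from the midpoint to the fingertip as the height.
double midX = (finger.BaseLeft.X + finger.BaseRight.X) / 2;
double midY = (finger.BaseLeft.Y + finger.BaseRight.Y) / 2;
double dx = finger.FingerPoint.X - midX;
double dy = finger.FingerPoint.Y - midY;
double height = Math.Sqrt(dx * dx + dy * dy);
```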
As part of a function to return the height of the finger, the purpose of this code is to do so
no matter the orientation of the hand. Were the hand in an ideal orientation, the distance from base
to tip on the Y axis would be suitable for determining the finger’s height. However, since the fingers
can be oriented at many angles diverging from the palm of the hand, this function utilises the
midpoint of the finger to construct a triangle to use the proven method noted above. The midpoint
is constructed by adding the two co-ordinate values of the “BaseLeft” and “BaseRight” and dividing it
by 2. From here the triangle is created in a similar fashion to that utilised above.
6.3 Monitoring exercises
The user can utilise the system to perform and monitor exercises in their hands. The data
recorded from these exercises is potentially useful to a physician and is therefore recorded to be
viewed at a later date. The exercises can be performed on either hand (which the user must specify
before proceeding) and include determining maximum flexion of the finger muscles. This is
assessed by the user stretching their hand to full extension, the system beginning a timer, and then
prompting the user to flex their fingers inward again.
Figure 14: Hand at full flexion
Figure 14 shows the hand recognition working even when no fingers have been found. This
allows the exercise code to check for the presence of the fingers at extension; when they are not
found but the hand is still visible, it establishes that the full hand has been clenched into a fist.
Figure 15: Exercise time taken message box
The time taken to perform this exercise as well as the maximum range of motion is
potentially useful in establishing a treatment plan for the patient and is stored with the rest of the
user’s information in the database.
To implement the functionality required for this process, we use the “newDataAvailable”
listener which is called each time a frame is passed from the Kinect sensor. In order to ensure the
functions such as starting a timer and stopping it are only performed when the hand is visible
(irrespective of how many digits are visible), the following code is executed first.
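The per-frame guard described here may have looked like the following sketch (collection and timer names are assumptions):

```csharp
// Hedged sketch: only run exercise logic while a hand is visible; while all
// five digits are still showing, the user has not begun movement, so restart.
void OnNewHandData(HandCollection data)
{
    if (data.Count == 0) return;      // no hand visible: do nothing
    if (data.Hands[0].FingerCount == 5)
        exerciseTimer.Restart();      // movement not yet begun
}
```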
For the code to work properly, it must assess how many fingers are still being picked up by
the sensor. Since the user can take any length of time to begin movement, it makes little sense to
begin a timer on key press. Instead, the total finger count is checked upon each received frame. If
the count is still 5, the user has yet to begin movement and thus, the timer is restarted.
However, should the hand be visible and the number of digits showing be zero, we can
interpret that the user has fully clenched their hand.
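A sketch of the clench detection follows; `SaveExerciseResult` is a hypothetical helper standing in for the database write:

```csharp
// Hedged sketch: a visible hand with zero digits is interpreted as a fist,
// so the timer terminates and the duration is recorded.
if (hand.FingerCount == 0 && exerciseTimer.IsRunning)
{
    exerciseTimer.Stop();
    SaveExerciseResult(exerciseTimer.Elapsed); // store duration in the database
}
```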
This code shows the method whereby the timer is allowed to terminate, thus recording
the duration of the flexion and extension in the user's hand muscles. The time taken is then
displayed to the user and subsequently recorded in the database.
Displaying data on screen
One additional aspect of displaying the information constructed by the system involves
accessing UI elements while the finger recognition code is running. The code which accepts the new
frames from the sensor runs in a separate system thread, parallel to the main interface code. As a
result, accessing UI elements from within the finger recognition functions is not as simple as
referring to the objects and changing their properties. In order to access the UI thread we must
invoke the thread’s dispatcher which allows the code to move to that thread to perform an
operation before moving back to continue the finger functions. This is demonstrated in the code
below.
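The dispatcher hop was shown as a screenshot; a minimal sketch, using the `resultsGrid` control named below and a hypothetical label update, is:

```csharp
// Hedged sketch: move from the recognition thread onto the UI thread via
// the dispatcher of a UI element; all UI updates are batched in one call.
resultsGrid.Dispatcher.Invoke((Action)(() =>
{
    thumbLabel.Content = thumbWidth + "mm"; // example UI update
}));
```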
This code allows the UI updating operations to be performed within the finger recognition
thread. Since accessing any UI element by invoking the dispatcher leaves the current thread, it is
unnecessary to leave the thread separately for each piece of UI to be updated. Once the dispatcher
has accessed the UI, all operations to be performed on it can be done using the “resultsGrid”
control’s dispatcher.
6.4 Sampling information
The information received from the Kinect can be variable at different distances over a period
of time; i.e. at 50cm over a period of 30 frames there may be a variance of around 10% in the
information received due to a combination of the hand detection algorithm and the method wherein
the Kinect senses depth data. To overcome this, a sampling method for the data has been
implemented.
By sampling data retrieved from the fingers over 10 frames it was found that the variance
decreased in readings. The level of accuracy this provided was determined to be adequate on the
testing device (Xbox Kinect sensor) and would be more than enough on the more powerful Kinect for
Windows device.
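The sampling buffer described below might be sketched as follows (names are assumptions):

```csharp
// Hedged sketch: one slot of the 10-element sample array is filled per
// frame until the sample set is complete.
const int SampleSize = 10;
double[] widthSamples = new double[SampleSize];
int sampleIndex = 0;

void AddSample(double width)
{
    if (sampleIndex < SampleSize)
        widthSamples[sampleIndex++] = width; // advance only on a new frame
}
```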
This code iterates through the empty array which holds the values for sampling. Each slot is
filled only when a new frame has been passed for analysis. Once all the samples have been filled,
we total them and find the mean value for each, as seen in the code below.
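The averaging step may have looked like this one-line sketch (requires `using System.Linq;` for `Sum()`):

```csharp
// Hedged sketch: total the sample array with Sum(), divide by the sample
// size and round to 4 decimal places.
double mean = Math.Round(widthSamples.Sum() / SampleSize, 4);
```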
As seen above, the array “Sum()” function is used to total all the values in the sampling
array. With this found, the total is divided by 10 (the size of the sample) and then rounded to 4
decimal places in order to maintain readability without sacrificing accuracy.
6.5 Graphical user interface (GUI)
The interface created to guide the user through the measurement and exercising process
was constructed using the form designs shown in section 5. The main intent for the design of the
interface was to create a functional yet simple design which would be as easy to utilise as possible
for the user.
Navigation
The system is built on the Windows Presentation Foundation, which provides a set of
specialised form tools for creating an interface. One such tool is the “Navigation Window”. Through
the use of this form control we can implement one singular “Main Window” and from there direct
the user to any of the numerous pages. The navigation window is constructed using XAML and is
shown in the following code segment.
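The XAML segment was a screenshot in the original; a sketch of what the NavigationWindow declaration may have looked like follows (class name, title and starting page are assumptions):

```xml
<!-- Hedged sketch of the NavigationWindow control -->
<NavigationWindow x:Class="HandRecognition.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Hand Recognition Interface"
    Source="LoginPage.xaml"
    ShowsNavigationUI="True" />
```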
This window control contains the background image information utilised in all of the other
pages of the system. It also allows typical form aspects such as title, backwards and forwards
navigation and starting location to be set throughout the entire system. As seen on this control, the
“Source” property is the one which is manipulated in order to point the system to different pages.
With the NavigationWindow established, we can switch out the “Source” property and use
any of the implemented “Page” objects to allow the user to navigate through the system. An
example of a single “Page” is the login page, described next.
Login Page
Using a simple username and password textbox combination the login screen is
implemented as shown in Figure 16.
Figure 16: Login screen
Using the input of a username, a function requests a record from the database where this
value appears. This value is case sensitive and will only return a valid record if the input exactly
matches a record in the database. When this has been found, the password which is also part of this
record is assessed to determine if it matches the user’s input. This is done using an MD5 hashing
class4. This is a one-way hashing algorithm which creates an encoded version of the user’s password
which can be compared with the encoded one held in the database.
4 MD5 algorithm class http://msdn.microsoft.com/en-us/library/system.security.cryptography.md5.aspx
This “GetMd5()” function is used to create a string from the MD5 object and the user
inputted password string. The code for this function is located below and allows the string located in
the database to be directly compared to the MD5 output from the user’s entry.
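Based on the MSDN MD5 class referenced above, the `GetMd5()` function may have been implemented along these lines (a sketch; the exact encoding used is an assumption):

```csharp
// Hedged sketch of GetMd5(): hash the input string and return a lowercase
// hex string directly comparable with the stored database value.
using System.Security.Cryptography;
using System.Text;

static string GetMd5(string input)
{
    using (MD5 md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
        StringBuilder sb = new StringBuilder();
        foreach (byte b in hash)
            sb.Append(b.ToString("x2")); // two hex digits per byte
        return sb.ToString();
    }
}
```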
Should the user enter an incorrect combination, they will be prompted with an error which
will allow them to re-enter their information - Figure 17.
Figure 17: Login information error
However, if this information is correct, the navigation object will be used to direct the user
to the next stage of the system; the “History” page. First, an instance of the “History” class is created
and then passed to the “NavigationService” class.
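A sketch of this navigation step follows; `UserId` is a hypothetical property standing in for however the ID was passed:

```csharp
// Hedged sketch: create the History page, set the user's ID, then navigate.
History historyPage = new History();
historyPage.UserId = currentUserId; // must be set before navigating away
NavigationService.GetNavigationService(this).Navigate(historyPage);
```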
Since this creates a new instance of the page, all user properties on this page must be set
before navigating away from the login page. Here, the user's ID value is passed to the history page as
this will be used to find and store data in the subsequent pages.
History Page
For the history page, two “DatePicker” controls have been implemented. This allows the
user to select a date range and view the appropriate data. An example of this is seen in Figure 18.
Figure 18: History page with date pickers
Since both selections in the DatePicker controls can be any date, the system must check
that the implied date range is valid. This means checking that the date given by the “to”
control is later than that given by the “from” DatePicker.
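The validation described here might be sketched as (control names are assumptions):

```csharp
// Hedged sketch: the "to" date must be later than the "from" date.
DateTime from = fromPicker.SelectedDate.Value;
DateTime to = toPicker.SelectedDate.Value;
if (to <= from)
    MessageBox.Show("Invalid date selection: the 'to' date must be later.");
```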
This code will determine that the selected date range is valid and will allow the actions to
progress. If this is not a valid range, the user will be prompted with an error as shown in Figure 19.
Figure 19: Invalid date selection
With a valid range selected, the user will now see the “gridView” control. This control is
made available to the WPF project by using a “Windows Form Host” wrapper control. This piece of
XAML code enables this grid to be shown and interacted with as part of the native system.
By giving these namespace references the additional wrapper is now accessible as shown in
the following code.
We can now implement this grid within the history page to display sortable information to
the user. This information is all of the values that are saved to the database elsewhere in the system
and are relevant to the current user - Figure 20.
The table in Figure 20 is populated with the results from a function which retrieves data
from the MySQL database. In order to match the values coming out of the date selections with the
format in which MySQL accepts dates, an additional function was used. The code below converts the
string-based date property of the DatePicker into the MySQL “YYYY/MM/DD HH:MM:SS” format5.
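The conversion function may have reduced to a format string along these lines (a sketch; MySQL also accepts "/" delimiters, so the exact pattern used is an assumption):

```csharp
// Hedged sketch: convert a DatePicker value to the MySQL DATETIME format.
string mysqlDate = fromPicker.SelectedDate.Value
    .ToString("yyyy-MM-dd HH:mm:ss");
```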
Finally, using this information we are able to request the desired information from the
database. With this data returning in a “MySQL Reader” object; we iterate through the returned
rows to populate the DataView.
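A sketch of the iteration over the reader follows; the column names match the tblReadings design in section 5, while the grid name is an assumption:

```csharp
// Hedged sketch: build one DataGridViewRow per record in the reader.
while (reader.Read())
{
    var row = new DataGridViewRow();
    row.CreateCells(gridView,
        reader["timestamp"], reader["thumb_width"], reader["index_width"],
        reader["middle_width"], reader["ring_width"], reader["pinky_width"]);
    gridView.Rows.Add(row);
}
```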
Each record in the “reader” object is here used to create a new “DataGridViewRow” object
which is then used in the implementation of the DataView.
5 Date/Time format as shown in the MySQL documentation -http://dev.mysql.com/doc/refman/5.1/en/datetime.html
Measurements Page
The page which contains the interface for measuring finger dimension data is shown in
Figure 21. While containing selections for hand measurements it also contains navigation back to
other sections of the system.
Figure 21: Finger measurements page
The “Begin Test” button shown in this page will begin the finger measurement process.
Before it starts the functions necessary to measure the hand, several checks are performed to make
sure that the system is accepting all the necessary data. Firstly, it will establish whether or not the
user has selected a “Hand to test” so that the fingers on the hand can be correctly identified later. If
this selection has not been made, the user is prompted with an error box as shown in Figure 22.
Figure 22: Hand selection error
With this check completed, the system must also ensure that the Kinect sensor is attached
to the testing computer. However, the OpenNI framework has no way of polling the Kinect to ensure
its presence. As a result, the first indication that it is not present is when the OpenNI framework
attempts to access the data coming back from the device. In order to combat this eventuality, a
“try..catch” block was implemented around the code which would cause the error. In this
eventuality the error is gracefully caught and an error message is raised, while allowing the
system to continue running.
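The guard may have looked like this sketch; the exact exception type thrown by OpenNI is an assumption, so a general catch is shown:

```csharp
// Hedged sketch: the first access to the sensor data stream throws when no
// Kinect is attached, so it is wrapped in try..catch.
try
{
    handSource.Start(); // first touch of the OpenNI data stream
}
catch (Exception)
{
    MessageBox.Show("Kinect sensor not connected. Please connect the " +
                    "sensor and try again.");
}
```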
In this fashion, the system will wait for the Kinect to be attached and will still run if it is not
present at run-time but added later, allowing for plug-and-play functionality. In order to present this
error to the user, the error box in Figure 23 is employed.
Figure 23: Kinect not connected error
With the necessary error checking completed, the system can begin to show information to
the user in the form of the raw depth image previously shown in Figure 10. From here, the finger
recognition and dimension determination begins.
Section 6.2 notes the process whereby the system will establish whether the hand is tilted
and if the data being read from it is valid. In order to convey this process to the user, in the event
that their hand is tilted more than the system can compensate for, they will be prompted with the
notice shown in Figure 24.
Figure 24: Hand orientation notice
As shown previously, the location of the finger on the X axis is used along with the user
selection of “hand to be tested” to determine which finger is which in the hand object. Furthermore,
the sampling method documented in section 6.4 is utilised here in order to ensure the accuracy of
the information is as high as it can be with the sensor data. When the data is constructed by the
system it is displayed on screen for the user to see - Figure 25.
Figure 25: Finger measurements results
Here, the user is given the option to save the data if they feel it is representative of the
dimensions of their hand. If the data is determined to be flawed (i.e. the user has placed their hand
at an incorrect range and the results have been skewed because of it), the user may clear the data
and attempt to run the measurements again.
Exercise Page
The final page object in the system is the exercise page. It is here that the user will be able to
perform flexion and extension exercises with the guidance of the system and have the data from
these exercises be recorded for reference at a later date.
Figure 26: Exercise page
Figure 26 shows the exercise page. This page has all of the same error checking documented
for the measurements page. It utilises the functionality documented in section 6.3 to determine
whether the user has completed the exercises they are being prompted to follow. When the actions
suggested by the system have been completed, their information is once again saved to the
database and can be viewed at a later date.
These database interactions are all dependent on the database implemented for the system.
The database is based on the designs proposed in section 5 and consists of a MySQL database
connecting with the C# system via the Connector/NET plugin. The MySQL database has two
connected tables, “readings” which stores the records of the finger dimensions and “users” which
contains the users allowed access to the system. These two tables are shown in their relational form
in Figure 27.
Figure 27: MySQL table design view with relations
Connecting to this database is handled as part of the “DatabaseConnection” class. Here,
information that is needed to access the server and the database are stored. This is also where
values would be changed should the system be scaled up and moved to a production server and
database environment.
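The stored connection values may have been laid out as in this sketch; the database name and credentials are assumptions:

```csharp
// Hedged sketch of the DatabaseConnection values; in production the SERVER
// value would become the IP address of the database host.
const string SERVER = "localhost";
const string DATABASE = "hand_readings";
string connectionString = string.Format(
    "SERVER={0};DATABASE={1};UID={2};PASSWORD={3};",
    SERVER, DATABASE, "appUser", "secret");
```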
With the “SERVER” property in the string set to “localhost” we are able to implement a local
testing environment. However, in a production environment, this would be changed to the IP
address of the server where the database is located; i.e. an IP address such as “203.0.113.0”.
7. Testing
Thorough testing will ensure the system is implemented and functions the way it was
designed to. The purpose of this section is to document that each area of functionality in the system
performs its task effectively and in a manner expected of a well-finished product. While ensuring the
system meets requirements is important, it is also vital that any incorrect data, crashes, undesired
functions or general erroneous behaviour is eliminated from the end product.
7.1 Functional tests
Functional testing refers to the determination that the specified functionality of the system
works as documented. This implies that each of the processes that were designed in section 5 will
work effectively in an environment with a user who is unfamiliar with the system. The end result will
be that any elements of the system which are designed for user interaction should work in the way
they are supposed to and without causing faults or crashes (Basili & Selby, 1987).
Initially, test-cases are used to establish typical usage patterns and assess the system based
on these interactions. These test-cases can be found in Appendix 1. These should indicate any errors
or issues the system has and allow them to be fixed.
Through these test-cases several areas of functionality which produced unexpected results
were fixed. One such area was the lack of proper handling of absent internet connectivity. In this
case, if the system had no internet connectivity it would crash without prompting the user, giving
no indication of what was causing the problem. Now, the user is presented with an error box
detailing the problem so that they may provide connectivity and use the system properly.
Next, tests were performed to document the data readings taken by the sensor for each
finger. These tests assessed the real-world dimensions of each digit independently and then
compared them to the readings output by the system. The tests were performed at optimal range,
too close to the sensor, and too far away, to determine how the results scale with distance to the
sensor. The results of these tests can be found in Appendix 2.
7.2 Non-functional tests
Non-functional tests are ones which are dependent on factors such as system scaling or
usage constraints which are not directly related to user input. Testing system output compared to
real-world, physical values is an important factor in these tests. The system is used to determine
the height and width of digits on a specified hand.
To compare this set of results, physical measurements were taken of each digit using a ruler.
These results were tabulated as “expected” results and measured against results output by the
system. The results for these tests are shown in Appendix 2.
The results of these tests are put together as a series of graphs in order to visually display
the effect of range on the data taken from the sensor. The first set of results, measuring the width of
digits on the right hand, is shown in Graph 1.
[Graph: expected vs actual width readings for each digit of the right hand (thumb to pinky) at ranges 50 > 60cm, 60 > 70cm and 70 > 90cm]
Graph 1: Width of digits on right hand
As can be seen from the results of this test, system readings vary only slightly at the optimum
range of 50 > 60cm from the sensor. With some outliers, the readings remain consistent until the
outer limit of range at ~80cm. Furthermore, it can be determined that most variance occurs within
the recognition of the pinky finger on the right hand. The results of this test remained more
constant than initially expected. Consequently, this adds to the integrity of the data at all readable
ranges from the Kinect sensor.
Next tested was the width measurements taken of the left hand. Once again, the physical
measurements of the left hand were taken and compared against the width measurements
established through the system. These were documented in a table which can be found in Appendix
2. The variation of these results is shown on Graph 2.
[Graph: expected vs actual width readings for each digit of the left hand (thumb to pinky) at ranges 50 > 60cm, 60 > 70cm and 70 > 90cm]
Graph 2: Width of digits on left hand
Variation in these results is similar to that shown for the right hand. As can be seen, at
optimal range the data interpreted by the system is very reliable. As the distance between the
sensor and hand increases, so does the level at which the results fluctuate. Once more, the readings
are most varied in the mid range (60 > 70cm). This was found to be true of the readings for the
widths on the right hand also.
When the variations in the digit widths had been determined, the next step was to compare
the results of the finger height measurements. This was done by measuring the fingers on each hand
in a similar fashion to how the system is intended to work. By taking the centre point of the digit
and using a ruler to measure from base to fingertip, a physical, real-world value was determined.
This value was then laid out in a table and used for comparison with system-generated values. The
table holding these values is contained in Appendix 2.
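The base-to-fingertip measurement described above amounts to a straight-line distance between two points. A minimal sketch (Python for illustration; the point values are hypothetical):

```python
import math

def digit_height(base, fingertip):
    """Straight-line distance from the digit's base point to its fingertip,
    mirroring the ruler measurement taken from base to fingertip."""
    return math.dist(base, fingertip)

# Hypothetical 2-D points (cm): a middle finger rising 7.4cm above its base.
print(digit_height((3.0, 0.0), (3.0, 7.4)))
```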
First tested was again the right hand. As with the width tests, the hand was placed at
different distances from the sensor to test the effect of distance on reading accuracy. The values
from the right hand are shown in Graph 3.
[Chart: expected vs. actual height (cm) for each digit (Thumb, Index, Middle, Ring, Pinky) at ranges 50 > 60cm, 60 > 70cm and 70 > 90cm]
Graph 3: Height of digits on right hand
While the variance in width measurements remained relatively consistent throughout the
distance tests, the finger height measurements showed much more varied results. As seen in these
results, digit height is established very accurately at optimal range; however, the tests show that
for digit height, distance is a very important factor.
These tests must be compared with the results of the left hand testing in order to be
substantiated. Graph 4 shows the results of these tests.
[Chart: expected vs. actual height (cm) for each digit (Thumb, Index, Middle, Ring, Pinky) at ranges 50 > 60cm, 60 > 70cm and 70 > 90cm]
Graph 4: Height of digits on left hand
This second set of height results shows that the behaviour of the first test is replicated:
as the distance from the sensor increases, so does the variance in the data. This variation becomes
very high at the extreme ranges, such as 70 > 90cm, and renders the information invalid.
From these tests we can determine that the optimal range for the Kinect when finding digit
height falls between 50 > 60cm from the device.
Figure 28 shows this range in a visual diagram. Within a range of 0 to 50cm, the information
from the Kinect sensor is not reliable enough to provide accurate data. At 50cm the data becomes
usable and reliable for accurate readings. However, as the distance from the Kinect increases, the
data becomes less and less accurate, finally becoming unusable again.
Figure 28: Kinect optimal range
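The usable-range behaviour shown in Figure 28 can be expressed as a simple classification. A sketch (Python for illustration), with band edges taken from the tests above:

```python
def range_quality(distance_cm):
    """Classify sensor-to-hand distance using the bands observed in testing:
    below 50cm the data is unreliable, 50-60cm is optimal, accuracy then
    degrades up to ~90cm, beyond which readings are unusable."""
    if distance_cm < 50:
        return "too close"
    if distance_cm <= 60:
        return "optimal"
    if distance_cm <= 90:
        return "degrading"
    return "too far"

print(range_quality(55))  # optimal
print(range_quality(75))  # degrading
```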
Lastly, the ability of the system to determine individual fingers was tested. The aim of this
test is to ensure that:
a) The system can pick up a hand without needing to identify finger elements (i.e. a fist)
b) The system can pick up a hand showing only a sub-set of fingers (i.e. a raised index finger)
To accomplish these tests, the measurement process was run with the right and left hand
testing scenarios. Through this raw data window we can determine when the hand has been
recognised by the system as it overlays a cluster image on the raw data. Furthermore, when a finger
has been recognised, a “fingertip point” is added to the overlay. By monitoring this when the sensor
is presented with a hand clenched into a fist and a hand which presents only the index and middle
fingers, we can assess whether the system will work with only partial readings. Figure 29 shows this
functionality being tested.
Figure 29: Hand clenched into a fist
It was found that the system worked very efficiently in determining that a hand is present
even without all of the fingers showing. This functionality is utilised in the hand exercise section to
establish the time a user takes to complete tasks. This testing validates the readings returned by the
exercise section of the system.
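The overlay check described above reduces to a simple rule: a hand is recognised when a cluster is present, independently of how many fingertip points are attached. A minimal sketch (Python for illustration; the names and point values are hypothetical):

```python
def hand_state(cluster_detected, fingertip_points):
    """Summarise the raw-data overlay: a cluster alone is enough to
    recognise a hand (a fist); fingertip points refine the reading."""
    if not cluster_detected:
        return "no hand"
    if not fingertip_points:
        return "fist"
    return f"hand, {len(fingertip_points)} fingertip(s)"

# A fist: cluster present, no fingertip points overlaid.
print(hand_state(True, []))                      # fist
# Index and middle fingers raised: two fingertip points.
print(hand_state(True, [(120, 80), (150, 75)]))  # hand, 2 fingertip(s)
```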
8. Evaluation
With the system completed, an evaluation can be conducted. This chapter assesses the level to
which the current system meets the requirements specified in chapter 3, as well as how the design
was realised in implementation.
8.1 Evaluation of functional requirements
The functional requirements documented in chapter 3 are listed below in Figure 30. What
follows is an assessment of how each requirement was met.
ID Functional requirement Success level
1 Determine hand measurements of the patient Succeeded
2 Determine initial joint angles at rest Failed
3 Monitor maximum flexion of joint Failed
4 Monitor maximum extension of joint Failed
5 Assess time taken to perform exercise Succeeded
6 Establish connection to database to save data Succeeded
7 Give real-time feedback of finger dimensions Succeeded
8 Run on Windows-based computers Succeeded
Figure 30: Functional requirement successes
One key aspect of the system at the time it was designed was the ability to carry out
measurements of joint angles to determine the range of movement the patient is capable of. Due
to limitations in the capabilities of the Kinect sensor, it was later determined that this
functionality lay outside the scope of the system. Instead, attention was focused on hand
recognition and digit dimension analysis in order to provide a more functional system.
While the system was in progress, it was determined that the tools initially intended for
development – the Kinect SDK – would prove ineffective at accomplishing the desired goal.
For this reason, the ability to recognise and acknowledge a user’s hand became more of a challenge
than initially thought. With the process of hand recognition completed, the remaining requirements
were met to a high standard. The finger dimensions are recognised at a variety of ranges from the
Kinect sensor, which allows most users to utilise the device without much direction. Furthermore,
the dimension functionality was constructed in such a manner as to negate the
effects of a majority of positioning problems on the part of the user. By a process of sampling,
assessing hand and finger orientation and distance, the system is able to factor these aspects and
provide an accurate reading of the user’s hand.
The time taken to complete exercises specified by the system was also implemented to
completion, and to a very high level of functionality. In terms of usability, the process does not
require the user to “start” a timer; instead, a timer is begun when their hand starts to move.
This makes the system more approachable for the user and more reliable in terms of data for the
physician.
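This “no start button” behaviour can be sketched as a timer that arms itself on the first significant hand movement (Python for illustration; the threshold value is a hypothetical choice, and `now` is passed explicitly to keep the sketch testable):

```python
import math

class ExerciseTimer:
    """Starts timing automatically when the tracked hand first moves,
    so the user never has to press 'start'."""

    def __init__(self, movement_threshold_cm=0.5):
        self.threshold = movement_threshold_cm  # hypothetical trigger distance
        self.start_time = None
        self.last_pos = None

    def update(self, hand_pos, now):
        # Arm the timer once the hand travels further than the threshold.
        if self.last_pos is not None and self.start_time is None:
            if math.dist(hand_pos, self.last_pos) >= self.threshold:
                self.start_time = now
        self.last_pos = hand_pos

    def elapsed(self, now):
        # Zero until movement has been detected.
        return 0.0 if self.start_time is None else now - self.start_time
```

A real implementation would call `update` from each Kinect depth frame with the current hand position and timestamp.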
All values determined by the system are provided to the user in real time as well as being
stored in a database, completing the last of the functional requirements set out in chapter 3.2.
8.2 Evaluation of non-functional requirements
The non-functional requirements as specified in chapter 3.3 included Interface,
Performance and Operating requirements. Among these, an important factor was the performance
requirements. It was specified that the system be able to handle a number of adverse conditions
such as poor lighting and range diversity in user location (too close, too far from sensor). In terms of
these requirements the system performs well. Since the Kinect itself works with infra-red light rather
than standard image recognition, it is functional in very dimly lit scenarios. The only adverse
condition in terms of lighting is when the Kinect sensor is introduced to a room with direct sunlight
which can interfere with the infra-red recognition. Similarly, the system has been developed in such
a manner that the range of user motion is accounted for programmatically; handling poor hand
orientation on-the-fly where possible and displaying notices to the user otherwise.
In terms of the operating and performance requirements, the system has been implemented
to a high standard in both of these areas. Many of the key components of these requirements were
involving ease of use. The system has been implemented using HCI best-practices (as noted in
chapter 3.3) and utilises colour schemes and layout which adhere to these practices where possible.
Further, it was determined that the interface design would use custom background and icon sets in
order to enhance the professional and approachable feel of the system. These were created in Adobe
Photoshop and integrated into the system, achieving a high level of success against these requirements.
8.3 Summary of evaluation
Development of this project presented many obstacles which needed to be overcome in
order to complete the system as it was specified. One aspect of development which presented issues
at the beginning was determining that the Kinect SDK for Windows would be ineffective at
establishing hand and finger recognition in the system. This necessitated the creation of a separate
method of finger and hand recognition to be built into the system, independently of the Kinect SDK.
As a result of implementing this functionality, the validity of the measurements and exercises is
reinforced. The increased level of understanding required by this process led to other sections of
the system being completed to a standard that would otherwise have been unlikely to be achieved.
The system was specified and designed to provide patients and physicians with a means of
measuring joint dimensions and movements of the hand in patients with rheumatoid arthritis. By
combining the Kinect sensor with a software interface, the system was built around the principles
of ease of use and a high level of functionality.
In using the Kinect sensor, an affordable and commercially available device was used to
perform a specialised and highly sensitive task. In doing so, the project succeeded in producing a
solution which would be usable by either health professionals or patients using it in their homes.
8.4 Future work and enhancements
Through completion of this project, significant investigation into the uses and possible
functions of the Kinect sensor hardware has been conducted. As a result, possibilities for future
development of the system and additional elements have been formulated.
Firstly, it is recommended that any future work on this project be conducted on, and utilise,
the Kinect for Windows hardware. This hardware was released during the lifecycle of this project
and has been integrated so that, in the presence of the device, the system will perform to a much
higher standard. By utilising the additional processing power and higher accuracy of the device
across a range of distances, the system’s capabilities can be increased each time new hardware
iterations are released.
One main feature of the system is that it connects to the database for storing information
via the internet. This opens up many avenues of development which may be pursued in the future.
Since the user interacting with the system and the Kinect sensor saves their data to the web-based
database server, all of the information stored there is automatically available for any other web-
connected device. Following this, an additional system could be constructed as a web-based portal
giving access to the information stored in the database. The patient or physician would then be able
to access their personal data from any internet-connected device, such as a smartphone, laptop or
PC. This would also eliminate the need for both the patient and physician to have their own Kinect
sensor – they would no longer need the C# system if their objective is only to read data.
An internet-based implementation would also pave the way for a much deeper level of
interaction between the patient and physician. With this set up, a physician would be able to add
exercises or instructions to a database which the system would then pick up and display to the user.
Thus, a physician could view a patient’s information day-to-day and prompt them with specific
actions or tasks in order to highlight key areas of their condition, allowing the physician to build
a much more personalised treatment plan.
Furthermore, this type of implementation revolves around a unified data source in the form
of a web-based database. With this in mind it is quite easily conceived that further developments
could come in the form of any type of internet-connected technology. For example, an app may be
developed which allows the physician instant access to patient records and exercise building
functionality. Similarly, information may be shared between physicians in the form of URLs – though
this would mean dramatically increasing the level of security in the system.
For developments such as these, both the database and the Kinect system would need to be
safeguarded in order to protect user information. The database may be encrypted and the
connections with the system secured, providing the patient peace of mind that their information is
safe.
This system computes the dimensions of the digits on the hand of a patient suffering from
rheumatoid arthritis. In expanding upon the work already completed, there are some
areas in which it could be made more effective in the treatment of these patients. One such area is
monitoring the resting angles of joints in the hand and comparing to the fully flexed angles of joint
movement. While the system handles patient exercises and provides useful data, this could be
extended to accommodate the treatment plans provided by a physician. The user could perform
joint flexion and extension exercises day to day and over a period of time the physician could build a
model representing their range of movement. This would be greatly beneficial to the patient as it
would result in a full documentation of their condition, rather than the physician taking
measurements weeks apart at different appointments.
Appendix 1: Test-Cases
ID Description Test Actions Expected Result Actual Result Date Pass/Fail Comments
1 Is the user made aware that they may have entered incorrect login information?
Attempt log in with invalid details
Error box displayed with notification that combination is invalid
Error box is displayed until details have been updated
28/04/2012 PASS
2 Does the system handle the Kinect not being present?
Attempt to measure hand without Kinect
Error box displayed with notification that Kinect is absent
Error box shown; however, will not appear second time if user clicks and Kinect is still absent
28/04/2012 FAIL Fixed: ensure absence of Kinect will not cause system to crash
3 Will the system handle a lack of internet connection?
Attempt to access login without internet connectivity
Error box displayed to notify user that internet connection is not present
Error is not displayed, system crashes when login is attempted
28/04/2012 FAIL Fixed: system will wait on internet connection before attempting database actions
4 Can user retrieve historical information?
Select date range and click “Show”
GridView control will be populated with results
GridView is shown and contains all results.
28/04/2012 PASS
5 Can user sort retrieved historical data
Click on header in GridView to sort by that property
List will arrange in either alphabetical or numerical order depending on field
Each column in GridView sorts properly to arrange by date, size or hand measured
28/04/2012 PASS
6 Are invalid date ranges excluded from searches?
Select a range where the start is later than the end date
Error box displayed to user to notify them that their selection is invalid
Error box shown until user selects a different range of dates
28/04/2012 PASS
7 Are valid date ranges which have yet to occur excluded from results?
Select a date range in the future
Error box displayed to user to notify them that their date range has not occurred yet
Error box is not shown; instead, no results are returned. This an acceptable result
29/04/2012 PASS Result was unexpected; it still followed a logical process and provided necessary information to the user
8 Can user begin an exercise without specifying the hand tested?
Click “Begin Exercise” before selecting a hand to be exercised
Error box displayed to user to notify them that they have not specified which hand is being exercised
Error box is displayed. User cannot begin exercise until selection is made
29/04/2012 PASS
9 Can user begin finger dimension measurements without specifying the hand tested?
Click “Begin Measurements” before selecting a hand to be measured
Error box displayed to user to notify them that they have not specified which hand is being measured
Error box is displayed. User cannot have digits measured until selection is made
29/04/2012 PASS
10 Can user clear results from hand tests and rerun?
Click “Clear” when values have been stored from the sampling process
Values clear from interface and are reset in the sampling functions so that hand is retested
Values are cleared and hand is retested
29/04/2012 PASS
11 Can user save their measurements for viewing later?
Click “Save” when values have been stored from the sampling process
Values clear from interface and are saved to the database
Values are left displayed on interface until page is navigated away from. Values are stored in database
29/04/2012 PASS Fixed: interface clears after saving results
12 Is user data cleared upon exiting the system?
Exit the program
User must re-enter log-in details to ensure security of information
User information is only cleared upon system exit. No logout functionality exists
30/04/2012 PASS Fixed: logout function added to ensure security
Appendix 2: Sensor data comparisons
Width - right hand
ID  Range (cm)  Digit   Expected  Actual
1   50 > 60     Thumb   2.6       2.5
2   50 > 60     Index   2.3       2.55
3   50 > 60     Middle  2.1       2.4
4   50 > 60     Ring    1.9       1.79
5   50 > 60     Pinky   1.7       1.9
6   60 > 70     Thumb   2.6       2.19
7   60 > 70     Index   2.3       4.89
8   60 > 70     Middle  2.1       2.06
9   60 > 70     Ring    1.9       1.77
10  60 > 70     Pinky   1.7       3.27
11  70 > 90     Thumb   2.6       1.66
12  70 > 90     Index   2.3       3.22
13  70 > 90     Middle  2.1       1.65
14  70 > 90     Ring    1.9       1.6
15  70 > 90     Pinky   1.7       1.84

Width - left hand
ID  Range (cm)  Digit   Expected  Actual
1   50 > 60     Thumb   2.5       2.9
2   50 > 60     Index   2.1       2.1
3   50 > 60     Middle  2.0       2.2
4   50 > 60     Ring    1.9       2.0
5   50 > 60     Pinky   1.8       2.6
6   60 > 70     Thumb   2.5       2.8
7   60 > 70     Index   2.1       2.24
8   60 > 70     Middle  2.0       1.9
9   60 > 70     Ring    1.9       2.16
10  60 > 70     Pinky   1.8       3.12
11  70 > 90     Thumb   2.5       2.85
12  70 > 90     Index   2.1       1.71
13  70 > 90     Middle  2.0       1.64
14  70 > 90     Ring    1.9       1.64
15  70 > 90     Pinky   1.8       2.13

Height – right hand
ID  Range (cm)  Digit   Expected  Actual
1   50 > 60     Thumb   6.8       6.2
2   50 > 60     Index   7.0       6.48
3   50 > 60     Middle  7.4       6.8
4   50 > 60     Ring    7.0       7.0
5   50 > 60     Pinky   6.1       6.44
6   60 > 70     Thumb   6.8       5.3
7   60 > 70     Index   7.0       3.95
8   60 > 70     Middle  7.4       5.35
9   60 > 70     Ring    7.0       6.1
10  60 > 70     Pinky   6.1       5.63
11  70 > 90     Thumb   6.8       3.76
12  70 > 90     Index   7.0       2.85
13  70 > 90     Middle  7.4       3.71
14  70 > 90     Ring    7.0       3.69
15  70 > 90     Pinky   6.1       3.69

Height – left hand
ID  Range (cm)  Digit   Expected  Actual
1   50 > 60     Thumb   6.7       6.2
2   50 > 60     Index   7.1       6.7
3   50 > 60     Middle  7.4       6.8
4   50 > 60     Ring    7.2       7.0
5   50 > 60     Pinky   6.0       6.1
6   60 > 70     Thumb   6.7       5.8
7   60 > 70     Index   7.1       5.95
8   60 > 70     Middle  7.4       5.77
9   60 > 70     Ring    7.2       6.02
10  60 > 70     Pinky   6.0       5.55
11  70 > 90     Thumb   6.7       3.81
12  70 > 90     Index   7.1       3.6
13  70 > 90     Middle  7.4       3.8
14  70 > 90     Ring    7.2       4.0
15  70 > 90     Pinky   6.0       3.35
Appendix 3: Source code
XAML interface code
MainWindow.XAML
Login.XAML
<NavigationWindow x:Class="Hand_Recognition_Interface.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:uc="clr-namespace:CCT.NUI.Visual;assembly=CCT.NUI.Visual"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Hand Recognition" Height="725" Width="905" Source="Login.xaml"
    ResizeMode="NoResize" WindowStartupLocation="CenterScreen" ShowsNavigationUI="False">
    <NavigationWindow.Background>
        <ImageBrush ImageSource="/Hand_Recognition_Interface;component/bin/background.png" />
    </NavigationWindow.Background>
</NavigationWindow>
<Page x:Class="Hand_Recognition_Interface.Login"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    Height="700" Width="900">
    <Page.Background>
        <ImageBrush ImageSource="/Hand_Recognition_Interface;component/bin/splash.png" />
    </Page.Background>
    <Grid Loaded="Grid_Loaded">
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="298,201,0,0" Name="textBlock1" Text="Please enter your username and password: " VerticalAlignment="Top" Foreground="White" />
        <Button Content="Login" Height="23" HorizontalAlignment="Left" Margin="534,296,0,0" Name="btnLogin" VerticalAlignment="Top" Width="75" Click="btnLogin_Click" />
        <TextBox Height="23" HorizontalAlignment="Left" Margin="456,230,0,0" Name="txtUser" VerticalAlignment="Top" Width="153" />
        <PasswordBox Height="23" HorizontalAlignment="Left" Margin="456,259,0,0" Name="txtPass" VerticalAlignment="Top" Width="153" PasswordChar="*" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="289,12,0,0" Name="txtNotice" Text="Please note: the Kinect sensor works best in well lit room away from direct sunlight." VerticalAlignment="Top" FontSize="16" Foreground="#FF003B72" />
HistoryPage.XAML
<Page x:Class="Hand_Recognition_Interface.HistoryPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:uc="clr-namespace:CCT.NUI.Visual;assembly=CCT.NUI.Visual"
    xmlns:wfi="clr-namespace:System.Windows.Forms.Integration;assembly=WindowsFormsIntegration"
    xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Height="700" Width="900" Title="HistoryPage">
    <Grid>
        <DatePicker Height="25" HorizontalAlignment="Left" Margin="53,100,0,0" Name="dateFrom" VerticalAlignment="Top" Width="115" />
        <DatePicker Height="25" HorizontalAlignment="Left" Margin="244,100,0,0" Name="dateTo" VerticalAlignment="Top" Width="115" />
        <Label Content="to" Height="28" HorizontalAlignment="Left" Margin="193,100,0,0" Name="label1" VerticalAlignment="Top" />
        <Button Content="Exercise hand" Height="43" HorizontalAlignment="Left" Margin="242,33,0,0" Name="btnMeasure" VerticalAlignment="Top" Width="161" Click="btnExercise_Click" />
        <Button Content="Take new measurements" Height="43" HorizontalAlignment="Left" Margin="51,33,0,0" Name="btnExercise" VerticalAlignment="Top" Width="161" Click="btnMeasure_Click" />
        <Button Content="Show readings" Height="28" Margin="0,100,0,0" Name="btnShow" VerticalAlignment="Top" Click="btnShow_Click" Width="90" />
        <wfi:WindowsFormsHost Height="400" Margin="12,145,12,155" Visibility="Hidden" Name="wfhContainer">
            <wf:DataGridView x:Name="gridResults" ColumnHeadersHeightSizeMode="AutoSize" AllowUserToAddRows="False" AllowUserToDeleteRows="False" AllowUserToResizeRows="False" AllowUserToResizeColumns="False" AutoSizeColumnsMode="Fill" Visible="True" />
        </wfi:WindowsFormsHost>
    </Grid>
</Page>
ExercisePage.XAML
<Page x:Class="Hand_Recognition_Interface.ExercisePage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:uc="clr-namespace:CCT.NUI.Visual;assembly=CCT.NUI.Visual"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    Height="700" Width="900" Title="Exercise">
    <Grid>
        <uc:WpfVideoControl Name="videoControl" Width="640" Height="480" HorizontalAlignment="Left" Margin="23,160,0,0" VerticalAlignment="Top" BorderBrush="#FF2D2D2D" BorderThickness="1" Background="White" />
        <Button Content="Begin Exercise" Height="38" HorizontalAlignment="Left" Margin="462,116,0,0" Name="btnDepth" VerticalAlignment="Top" Width="201" Click="btnDepth_Click" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="609,16,0,0" Name="txtNotice" Text="" VerticalAlignment="Top" />
        <GroupBox Header="Please select a hand to exercise" Height="55" HorizontalAlignment="Left" Margin="33,99,0,0" Name="groupBox1" VerticalAlignment="Top" Width="284" Foreground="White" FontWeight="Bold">
            <Grid>
                <RadioButton Content="Left hand" Height="16" HorizontalAlignment="Left" Margin="6,11,0,0" Name="radioLeft" VerticalAlignment="Top" Foreground="White" />
                <RadioButton Content="Right hand" Height="16" HorizontalAlignment="Left" Margin="167,11,0,0" Name="radioRight" VerticalAlignment="Top" Foreground="White" />
            </Grid>
        </GroupBox>
        <Button Content="History" Height="43" HorizontalAlignment="Left" Margin="51,33,0,0" Name="btnMeasure" VerticalAlignment="Top" Width="161" Click="btnMeasure_Click" />
        <Button Content="Take new measurements" Height="43" HorizontalAlignment="Left" Margin="242,33,0,0" Name="btnExercise" VerticalAlignment="Top" Width="161" Click="btnExercise_Click" />
    </Grid>
</Page>
DataWindow.XAML
<Page x:Class="Hand_Recognition_Interface.DataWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:uc="clr-namespace:CCT.NUI.Visual;assembly=CCT.NUI.Visual" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="DataWindow" Height="700" Width="900" Foreground="Black"> <Grid> <uc:WpfVideoControl Name="videoControl" Width="640" Height="480" HorizontalAlignment="Left" Margin="23,160,0,0" VerticalAlignment="Top" BorderBrush="#FF2D2D2D" BorderThickness="1" Background="White" /> <Button Content="Begin Test" Height="38" HorizontalAlignment="Left" Margin="462,116,0,0" Name="btnDepth" VerticalAlignment="Top" Width="201" Click="Begin" /> <GroupBox Header="Please select a hand to test" Height="55" HorizontalAlignment="Left" Margin="33,99,0,0" Name="groupBox1" VerticalAlignment="Top" Width="284" Foreground="White" FontWeight="Bold"> <Grid> <RadioButton Content="Left hand" Height="16" HorizontalAlignment="Left" Margin="6,11,0,0" Name="radioLeft" VerticalAlignment="Top" Foreground="White" /> <RadioButton Content="Right hand" Height="16" HorizontalAlignment="Left" Margin="167,11,0,0" Name="radioRight" VerticalAlignment="Top" Foreground="White" /> </Grid> </GroupBox> <Grid Visibility="Visible" Name="resultsGrid"> <Rectangle Height="285" HorizontalAlignment="Left" Margin="681,160,0,0" Name="rectangle1" Stroke="Black" VerticalAlignment="Top" Width="200" Fill="#CDFFFFFF" /> <TextBlock Text="Width (cm)" TextWrapping="Wrap" Height="46" TextAlignment="Right" HorizontalAlignment="Left" Margin="765,188,0,0" Name="labelWidth" VerticalAlignment="Top" FontSize="16" Width="53" /> <TextBlock Text="Height (cm)" TextWrapping="Wrap" Height="46" TextAlignment="Right" HorizontalAlignment="Left" Margin="820,188,0,0" Name="labelHeight" VerticalAlignment="Top" FontSize="16" Width="53" />
<Label Content="Results" Height="36" HorizontalAlignment="Left" Margin="741,160,0,0" Name="labelResults" VerticalAlignment="Top" FontSize="20" FontWeight="Bold" Width="81" /> <Label Content="Thumb: " Height="30" HorizontalAlignment="Left" Margin="689,233,0,0" Name="label1" VerticalAlignment="Top" FontSize="16" /> <Label Content="Index: " Height="30" HorizontalAlignment="Left" Margin="689,273,0,0" Name="label2" VerticalAlignment="Top" FontSize="16" /> <Label Content="Middle: " Height="30" HorizontalAlignment="Left" Margin="689,313,0,0" Name="label3" VerticalAlignment="Top" FontSize="16" /> <Label Content="Ring: " Height="32" HorizontalAlignment="Left" Margin="689,353,0,0" Name="label4" VerticalAlignment="Top" FontSize="16" /> <Label Content="Pinky: " Height="32" HorizontalAlignment="Left" Margin="689,393,0,0" Name="label5" VerticalAlignment="Top" FontSize="16" />
<Label Content="" Height="30" HorizontalAlignment="Right" Name="lblThumb" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,233,90,0" /> <Label Content="" Height="30" HorizontalAlignment="Right" Name="lblIndex" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,273,90,0"/> <Label Content="" Height="30" HorizontalAlignment="Right" Name="lblMiddle" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,313,90,0"/> <Label Content="" Height="32" HorizontalAlignment="Right" Name="lblRing" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,353,90,0"/> <Label Content="" Height="32" HorizontalAlignment="Right" Name="lblPinky" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,393,90,0"/>
<Label Content="" Height="30" HorizontalAlignment="Right" Name="lblThumb_height" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,233,30,0" /> <Label Content="" Height="30" HorizontalAlignment="Right" Name="lblIndex_height" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,273,30,0"/> <Label Content="" Height="30" HorizontalAlignment="Right" Name="lblMiddle_height" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,313,30,0"/> <Label Content="" Height="32" HorizontalAlignment="Right" Name="lblRing_height" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,353,30,0"/> <Label Content="" Height="32" HorizontalAlignment="Right" Name="lblPinky_height" VerticalAlignment="Top" FontSize="16" FlowDirection="RightToLeft" Margin="0,393,30,0"/> <TextBlock Height="23" HorizontalAlignment="Left" Margin="474,87,0,0" Name="txtNotice" Text="" VerticalAlignment="Top" /> </Grid> <Button Content="Clear" Height="49" HorizontalAlignment="Left" Margin="677,459,0,0" Name="btnClear" VerticalAlignment="Top" Width="92" Click="btnClear_Click" Visibility="Hidden" /> <Button Content="Save" Height="49" HorizontalAlignment="Left" Margin="785,459,0,0" Name="btnSave" VerticalAlignment="Top" Width="92" Click="btnSave_Click" Visibility="Hidden" /> <Button Content="Exercise hand" Height="43" HorizontalAlignment="Left" Margin="242,33,0,0" Name="btnExercise" VerticalAlignment="Top" Width="161" Click="btnExercise_Click" /> <Button Content="History" Height="43" HorizontalAlignment="Left" Margin="51,33,0,0" Name="btnHistory" VerticalAlignment="Top" Width="161" Click="btnHistory_Click" />
C# code
MainWindow.xaml.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
namespace Hand_Recognition_Interface
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : NavigationWindow
    {
        public MainWindow()
        {
            InitializeComponent();
        }
    }
}
Login.xaml.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Security.Cryptography;
namespace Hand_Recognition_Interface
{
    /// <summary>
    /// Interaction logic for Login.xaml
    /// </summary>
    public partial class Login : Page
    {
        public Login()
        {
            InitializeComponent();
        }

        private void Grid_Loaded(object sender, RoutedEventArgs e)
        {
            DatabaseConnection.ConnectToDB();
        }

        private void btnLogin_Click(object sender, RoutedEventArgs e)
        {
            Array user = DatabaseConnection.FindUser(txtUser.Text);

            MD5 md5Hash = MD5.Create();
            string hash = GetMd5Hash(md5Hash, txtPass.Password);

            if (hash == (String)user.GetValue(2))
            {
                // password is confirmed
                HistoryPage history = new HistoryPage();
                history.user_id = (String)user.GetValue(0);
                this.NavigationService.Navigate(history);
            }
            else
            {
                // password is invalid
                MessageBox.Show("Invalid username and password combination", "Notice", MessageBoxButton.OK, MessageBoxImage.Error);
            }
        }

        static string GetMd5Hash(MD5 md5Hash, string input)
        {
// Convert the input string to a byte array and compute the hash.
88
byte[] data = md5Hash.ComputeHash(Encoding.UTF8.GetBytes(input));
// Create a new Stringbuilder to collect the bytes // and create a string. StringBuilder sBuilder = new StringBuilder();
// Loop through each byte of the hashed data // and format each one as a hexadecimal string. for (int i = 0; i < data.Length; i++) { sBuilder.Append(data[i].ToString("x2")); }
// Return the hexadecimal string. return sBuilder.ToString(); }
}}
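GetMd5Hash above hashes the entered password and compares it against the stored `password` column, so both sides must use the same format: UTF-8 encode, MD5 digest, lowercase hexadecimal ("x2" per byte). The same transformation can be sketched in Python for reference (the literal input is illustrative only):

```python
import hashlib

def md5_hex(password: str) -> str:
    # Mirror GetMd5Hash: UTF-8 encode, MD5 digest, lowercase hex digits
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(md5_hex("abc"))  # 900150983cd24fb0d6963f7d28e17f72
```

Any password seeded into the `users` table by hand must be stored in exactly this 32-character lowercase form, or the comparison in btnLogin_Click will fail.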
HistoryPage.xaml.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Windows.Forms;
using System.Data;
using MySql.Data.MySqlClient;

namespace Hand_Recognition_Interface
{
    /// <summary>
    /// Interaction logic for HistoryPage.xaml
    /// </summary>
    public partial class HistoryPage : Page
    {
        public string user_id;

        public HistoryPage()
        {
            InitializeComponent();
        }

        private void btnMeasure_Click(object sender, RoutedEventArgs e)
        {
            DataWindow dw = new DataWindow();
            dw.user_id = user_id;
            this.NavigationService.Navigate(dw);
        }

        public String formatDateForMySql(String date)
        {
            return date.Substring(6, 4) + "/" + date.Substring(3, 2) + "/" + date.Substring(0, 2) + " " + date.Substring(11, 8);
        }

        private void btnShow_Click(object sender, RoutedEventArgs e)
        {
            if (dateFrom.SelectedDate <= dateTo.SelectedDate)
            {
                // logic to find entries from the database
                btnShow.Content = "Add results to table";
                btnShow.Width = 115;

                // convert dates to a format readable by MySQL
                String from = formatDateForMySql(dateFrom.SelectedDate.ToString());
                String to = formatDateForMySql(dateTo.SelectedDate.ToString());

                MySqlDataReader reader = DatabaseConnection.FindReadings(from, to, user_id);

                gridResults.ColumnCount = 7;
                gridResults.Columns[0].HeaderText = "Hand";
                gridResults.Columns[1].HeaderText = "Thumb";
                gridResults.Columns[2].HeaderText = "Index";
                gridResults.Columns[3].HeaderText = "Middle";
                gridResults.Columns[4].HeaderText = "Ring";
                gridResults.Columns[5].HeaderText = "Pinky";
                gridResults.Columns[6].HeaderText = "Time";

                while (reader.Read())
                {
                    DataGridViewRow newRow = new DataGridViewRow();
                    newRow.CreateCells(gridResults);

                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        newRow.Cells[i].Value = reader.GetValue(i).ToString();
                    }

                    gridResults.Rows.Add(newRow);
                }

                wfhContainer.Visibility = Visibility.Visible;

                DatabaseConnection.CloseConnection();
            }
            else
            {
                System.Windows.MessageBox.Show("Please select a valid date range", "Notice: invalid date selection", MessageBoxButton.OK, MessageBoxImage.Asterisk);
            }
        }

        private void btnExercise_Click(object sender, RoutedEventArgs e)
        {
            ExercisePage ex = new ExercisePage();
            ex.user_id = user_id;
            this.NavigationService.Navigate(ex);
        }
    }
}
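formatDateForMySql above rearranges a date string purely by substring positions, so it silently assumes the locale renders SelectedDate as "dd/MM/yyyy HH:mm:ss". A short Python sketch of the same index arithmetic makes that assumption visible (the sample date is illustrative):

```python
def format_date_for_mysql(date: str) -> str:
    # Same index arithmetic as HistoryPage.formatDateForMySql:
    # "dd/MM/yyyy HH:mm:ss" -> "yyyy/MM/dd HH:mm:ss"
    return date[6:10] + "/" + date[3:5] + "/" + date[0:2] + " " + date[11:19]

print(format_date_for_mysql("02/05/2012 22:31:45"))  # 2012/05/02 22:31:45
```

On a machine whose locale formats dates differently the offsets no longer line up, which is worth bearing in mind when running the system elsewhere.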
ExercisePage.xaml.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using CCT.NUI.Core;
using CCT.NUI.Core.OpenNI;
using CCT.NUI.Core.Video;
using CCT.NUI.Visual;
using CCT.NUI.HandTracking;
using CCT.NUI.KinectSDK;
using CCT.NUI.Core.Clustering;
using System.Timers;
using System.Diagnostics;

namespace Hand_Recognition_Interface
{
    /// <summary>
    /// Interaction logic for ExercisePage.xaml
    /// </summary>
    public partial class ExercisePage : Page
    {
        private IDataSourceFactory dsFactory;
        private IClusterDataSource cluster;
        private IHandDataSource hand;
        private IImageDataSource rgbImage;
        private IImageDataSource depthImage;
        private int handToTest;
        private Stopwatch timer = new Stopwatch();

        public string user_id;

        public ExercisePage()
        {
            InitializeComponent();
        }

        private void btnDepth_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                if ((radioLeft.IsChecked == true) || (radioRight.IsChecked == true))
                {
                    if (radioLeft.IsChecked == true)
                        handToTest = 1;
                    else if (radioRight.IsChecked == true)
                        handToTest = 0;

                    dsFactory = new OpenNIDataSourceFactory("config.xml");

                    cluster = dsFactory.CreateClusterDataSource(new CCT.NUI.Core.Clustering.ClusterDataSourceSettings { MaximumDepthThreshold = 900 });
                    hand = new HandDataSource(dsFactory.CreateShapeDataSource(cluster, new CCT.NUI.Core.Shape.ShapeDataSourceSettings()));
                    hand.NewDataAvailable += new NewDataHandler<HandCollection>(hand_NewDataAvailable);

                    ProcessDepth();

                    hand.Start(); // begin analysing hand outline data

                    OverlayImageData(); // overlay image processing data

                    btnDepth.Visibility = Visibility.Hidden;
                }
                else
                {
                    MessageBox.Show("Please select a hand to be tested", "Notice: invalid hand selection", MessageBoxButton.OK, MessageBoxImage.Asterisk);
                }
            }
            catch (OpenNI.GeneralException)
            {
                // this catch statement ensures the Kinect is connected before attempting to use it
                MessageBox.Show("Kinect not connected");
            }
        }

        void hand_NewDataAvailable(HandCollection data)
        {
            for (int index = 0; index < data.Count; index++) // for each hand do the following
            {
                var hand = data.Hands[index];
                if (hand.FingerCount == 0) // user has clenched the whole hand
                {
                    if (timer.IsRunning == true)
                    {
                        timer.Stop();
                        // use message box for now to display the time, to check it works correctly
                        MessageBox.Show("Time to flex in seconds: " + ((float)timer.ElapsedMilliseconds / 1000) + "sec");
                    }
                }
                else if (hand.FingerCount == 5)
                {
                    timer.Reset();
                    timer.Start(); // start the timer
                }
            }
        }

        void Exercise_NewDataAvailable(ImageSource img)
        {
            videoControl.Dispatcher.Invoke(new Action(() => { videoControl.ShowImageSource(img); }));
        }

        void ProcessDepth()
        {
            depthImage = dsFactory.CreateDepthImageDataSource();
            depthImage.NewDataAvailable += new NewDataHandler<ImageSource>(Exercise_NewDataAvailable); // run when new data arrives from the camera input
            depthImage.Start(); // begin analysing depth data
        }

        void OverlayImageData()
        {
            var layers = new List<IWpfLayer>();
            layers.Add(new WpfHandLayer(hand));       // data showing the outline of the hand
            layers.Add(new WpfClusterLayer(cluster)); // data showing image point clustering
            videoControl.Layers = layers;             // add this information to the video control
        }

        private void btnMeasure_Click(object sender, RoutedEventArgs e)
        {
            HistoryPage hist = new HistoryPage();
            hist.user_id = user_id;
            this.NavigationService.Navigate(hist);
        }

        private void btnExercise_Click(object sender, RoutedEventArgs e)
        {
            DataWindow dw = new DataWindow();
            dw.user_id = user_id;
            this.NavigationService.Navigate(dw);
        }
    }
}
DataWindow.xaml.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Shapes;
using CCT.NUI.Core;
using CCT.NUI.Core.OpenNI;
using CCT.NUI.Core.Video;
using CCT.NUI.Visual;
using CCT.NUI.HandTracking;
using CCT.NUI.KinectSDK;
using CCT.NUI.Core.Clustering;
using MySql.Data;
using MySql.Data.MySqlClient;

namespace Hand_Recognition_Interface
{
    /// <summary>
    /// Interaction logic for DataWindow.xaml
    /// </summary>
    public partial class DataWindow : Page
    {
        private IDataSourceFactory dsFactory;
        private IClusterDataSource cluster;
        private IHandDataSource hand;
        private IImageDataSource rgbImage;
        private IImageDataSource depthImage;
        public string user_id;

        double[] thumb_average = new double[10];
        double[] index_average = new double[10];
        double[] middle_average = new double[10];
        double[] ring_average = new double[10];
        double[] pinky_average = new double[10];

        double thumb_width;
        double index_width;
        double middle_width;
        double ring_width;
        double pinky_width;

        double thumb_height;
        double index_height;
        double middle_height;
        double ring_height;
        double pinky_height;

        double[] thumb_average_height = new double[10];
        double[] index_average_height = new double[10];
        double[] middle_average_height = new double[10];
        double[] ring_average_height = new double[10];
        double[] pinky_average_height = new double[10];

        private int handToTest; // whether the hand being tested is "left" (1) or "right" (0)

        public DataWindow()
        {
            InitializeComponent();

            for (int i = 0; i < 10; i++)
            {
                thumb_average[i] = 0;
                index_average[i] = 0;
                middle_average[i] = 0;
                ring_average[i] = 0;
                pinky_average[i] = 0;

                thumb_average_height[i] = 0;
                index_average_height[i] = 0;
                middle_average_height[i] = 0;
                ring_average_height[i] = 0;
                pinky_average_height[i] = 0;
            }
        }
        private void Begin(object sender, RoutedEventArgs e)
        {
            try
            {
                if ((radioLeft.IsChecked == true) || (radioRight.IsChecked == true))
                {
                    if (radioLeft.IsChecked == true)
                        handToTest = 1;
                    else if (radioRight.IsChecked == true)
                        handToTest = 0;

                    btnDepth.Visibility = Visibility.Hidden;

                    dsFactory = new OpenNIDataSourceFactory("config.xml");

                    cluster = dsFactory.CreateClusterDataSource(new CCT.NUI.Core.Clustering.ClusterDataSourceSettings { MaximumDepthThreshold = 900 });
                    hand = new HandDataSource(dsFactory.CreateShapeDataSource(cluster, new CCT.NUI.Core.Shape.ShapeDataSourceSettings()));
                    hand.NewDataAvailable += new NewDataHandler<HandCollection>(hand_NewDataAvailable);

                    ProcessDepth();

                    hand.Start(); // begin analysing hand outline data

                    OverlayImageData(); // overlay image processing data
                }
                else
                {
                    MessageBox.Show("Please select a hand to be tested", "Notice: invalid hand selection", MessageBoxButton.OK, MessageBoxImage.Asterisk);
                }
            }
            catch (OpenNI.GeneralException)
            {
                // this catch statement ensures the Kinect is connected before attempting to use it
                MessageBox.Show("Kinect not connected");
            }
        }
        void hand_NewDataAvailable(HandCollection data)
        {
            // hand frame recognition
            for (int index = 0; index < data.Count; index++) // for each hand do the following
            {
                var hand = data.Hands[index];
                IList<FingerPoint> fingers = hand.FingerPoints;

                FingerPoint thumb = null;
                FingerPoint indexFinger = null;
                FingerPoint middle = null;
                FingerPoint ring = null;
                FingerPoint pinky = null;

                if (hand.FingerCount == 5)
                {
                    // full hand has been recognised; begin assigning fingers
                    float most = 0;

                    if (handToTest == 1)
                    {
                        // left hand: the thumb has the largest fingertip X value,
                        // then each remaining finger is the next largest in turn
                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X > most)
                            {
                                thumb = finger;
                                most = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(thumb);
                        most = 0;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X > most)
                            {
                                indexFinger = finger;
                                most = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(indexFinger);
                        most = 0;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X > most)
                            {
                                middle = finger;
                                most = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(middle);
                        most = 0;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X > most)
                            {
                                ring = finger;
                                most = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(ring);
                        most = 0;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X > most)
                            {
                                pinky = finger;
                                most = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(pinky);
                        most = 0;
                    }
                    else if (handToTest == 0)
                    {
                        // right hand: the thumb has the smallest fingertip X value
                        float rightmost = 10000;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X < rightmost)
                            {
                                thumb = finger;
                                rightmost = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(thumb);
                        rightmost = 10000;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X < rightmost)
                            {
                                indexFinger = finger;
                                rightmost = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(indexFinger);
                        rightmost = 10000;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X < rightmost)
                            {
                                middle = finger;
                                rightmost = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(middle);
                        rightmost = 10000;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X < rightmost)
                            {
                                ring = finger;
                                rightmost = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(ring);
                        rightmost = 10000;

                        foreach (var finger in fingers)
                        {
                            if (finger.Fingertip.X < rightmost)
                            {
                                pinky = finger;
                                rightmost = finger.Fingertip.X;
                            }
                        }
                        fingers.Remove(pinky);
                        rightmost = 10000;
                    }
                    // ensure that hand orientation is within +/- 4% horizontally
                    if ((pinky.BaseLeft.Z <= (thumb.BaseRight.Z + (thumb.BaseRight.Z * 0.04))) && (pinky.BaseLeft.Z >= (thumb.BaseRight.Z - (thumb.BaseRight.Z * 0.04))))
                    {
                        ChangeTextBox(txtNotice, "");

                        thumb_width = FindFingerWidth(thumb);
                        index_width = FindFingerWidth(indexFinger);
                        middle_width = FindFingerWidth(middle);
                        ring_width = FindFingerWidth(ring);
                        pinky_width = FindFingerWidth(pinky);

                        thumb_height = FindFingerHeight(thumb);
                        index_height = FindFingerHeight(indexFinger);
                        middle_height = FindFingerHeight(middle);
                        ring_height = FindFingerHeight(ring);
                        pinky_height = FindFingerHeight(pinky);

                        // store this sample in the first empty slot of the averaging arrays
                        bool filled = false;
                        for (int i = 0; i < 10; i++)
                        {
                            if ((thumb_average[i] == 0) && (filled == false))
                            {
                                thumb_average[i] = thumb_width;
                                index_average[i] = index_width;
                                middle_average[i] = middle_width;
                                ring_average[i] = ring_width;
                                pinky_average[i] = pinky_width;

                                thumb_average_height[i] = thumb_height;
                                index_average_height[i] = index_height;
                                middle_average_height[i] = middle_height;
                                ring_average_height[i] = ring_height;
                                pinky_average_height[i] = pinky_height;

                                filled = true;
                            }
                        }

                        if (thumb_average[9] != 0) // if the sample data has been filled
                        {
                            thumb_width = Math.Round((thumb_average.Sum() / 10), 4);
                            index_width = Math.Round((index_average.Sum() / 10), 4);
                            middle_width = Math.Round((middle_average.Sum() / 10), 4);
                            ring_width = Math.Round((ring_average.Sum() / 10), 4);
                            pinky_width = Math.Round((pinky_average.Sum() / 10), 4);

                            thumb_height = Math.Round((thumb_average_height.Sum() / 10), 4);
                            index_height = Math.Round((index_average_height.Sum() / 10), 4);
                            middle_height = Math.Round((middle_average_height.Sum() / 10), 4);
                            ring_height = Math.Round((ring_average_height.Sum() / 10), 4);
                            pinky_height = Math.Round((pinky_average_height.Sum() / 10), 4);

                            // jump out of this thread to access UI elements
                            resultsGrid.Dispatcher.Invoke(
                                System.Windows.Threading.DispatcherPriority.Normal,
                                new Action(delegate()
                                {
                                    resultsGrid.Visibility = Visibility.Visible;
                                    btnClear.Visibility = Visibility.Visible;
                                    btnSave.Visibility = Visibility.Visible;

                                    lblThumb.Content = Math.Round((thumb_width / 10), 2);
                                    lblIndex.Content = Math.Round((index_width / 10), 2);
                                    lblMiddle.Content = Math.Round((middle_width / 10), 2);
                                    lblRing.Content = Math.Round((ring_width / 10), 2);
                                    lblPinky.Content = Math.Round((pinky_width / 10), 2);

                                    lblThumb_height.Content = Math.Round((thumb_height / 10), 2);
                                    lblIndex_height.Content = Math.Round((index_height / 10), 2);
                                    lblMiddle_height.Content = Math.Round((middle_height / 10), 2);
                                    lblRing_height.Content = Math.Round((ring_height / 10), 2);
                                    lblPinky_height.Content = Math.Round((pinky_height / 10), 2);
                                }));
                        }
                    }
                    else
                    {
                        ChangeTextBox(txtNotice, "Please adjust hand orientation");
                    }
                    // end finger assigning
                } // end if count is 5
            }
        }

        void ChangeTextBox(TextBlock txt, String text)
        {
            if (!txt.Dispatcher.CheckAccess())
            {
                txtNotice.Dispatcher.Invoke(
                    System.Windows.Threading.DispatcherPriority.Normal,
                    new Action(delegate()
                    {
                        txt.Text = text;
                    }));
            }
            else
            {
                txt.Text = text;
            }
        }

        double FindFingerWidth(FingerPoint finger)
        {
            // Euclidean distance between the two base points of the finger
            float A = finger.BaseRight.X - finger.BaseLeft.X;
            float B = Math.Abs(finger.BaseLeft.Y - finger.BaseRight.Y);
            return Math.Sqrt((A * A) + (B * B));
        }

        double FindFingerHeight(FingerPoint finger)
        {
            // distance from the fingertip to the midpoint of the finger base
            double midpointX = (finger.BaseLeft.X + finger.BaseRight.X) / 2;
            double midpointY = (finger.BaseLeft.Y + finger.BaseRight.Y) / 2;
            double A = Math.Abs(midpointX - finger.Fingertip.X);
            double B = Math.Abs(finger.Fingertip.Y - midpointY);
            double C = Math.Sqrt((A * A) + (B * B));
            return C;
        }

        void Login_NewDataAvailable(ImageSource img)
        {
            videoControl.Dispatcher.Invoke(new Action(() => { videoControl.ShowImageSource(img); }));
        }

        void ProcessRGB()
        {
            rgbImage = dsFactory.CreateRGBImageDataSource();
            rgbImage.NewDataAvailable += new NewDataHandler<ImageSource>(Login_NewDataAvailable); // show RGB image
            rgbImage.Start();
        }

        void ProcessDepth()
        {
            depthImage = dsFactory.CreateDepthImageDataSource();
            depthImage.NewDataAvailable += new NewDataHandler<ImageSource>(Login_NewDataAvailable); // run when new data arrives from the camera input
            depthImage.Start(); // begin analysing depth data
        }

        void OverlayImageData()
        {
            var layers = new List<IWpfLayer>();
            layers.Add(new WpfHandLayer(hand));       // data showing the outline of the hand
            layers.Add(new WpfClusterLayer(cluster)); // data showing image point clustering
            videoControl.Layers = layers;             // add this information to the video control
        }
        private void btnClear_Click(object sender, RoutedEventArgs e)
        {
            lblThumb.Content = "";
            lblIndex.Content = "";
            lblMiddle.Content = "";
            lblRing.Content = "";
            lblPinky.Content = "";

            lblThumb_height.Content = "";
            lblIndex_height.Content = "";
            lblMiddle_height.Content = "";
            lblRing_height.Content = "";
            lblPinky_height.Content = "";

            Array.Clear(thumb_average, 0, thumb_average.Length);
            Array.Clear(index_average, 0, index_average.Length);
            Array.Clear(middle_average, 0, middle_average.Length);
            Array.Clear(ring_average, 0, ring_average.Length);
            Array.Clear(pinky_average, 0, pinky_average.Length);

            Array.Clear(thumb_average_height, 0, thumb_average_height.Length);
            Array.Clear(index_average_height, 0, index_average_height.Length);
            Array.Clear(middle_average_height, 0, middle_average_height.Length);
            Array.Clear(ring_average_height, 0, ring_average_height.Length);
            Array.Clear(pinky_average_height, 0, pinky_average_height.Length);
        }

        private void btnSave_Click(object sender, RoutedEventArgs e)
        {
            String handToTest_string = "";

            if (handToTest == 1)
            {
                handToTest_string = "left";
            }
            else if (handToTest == 0)
            {
                handToTest_string = "right";
            }

            DatabaseConnection.SaveReadings(
                Convert.ToInt32(user_id), handToTest_string,
                thumb_width, index_width, middle_width, ring_width, pinky_width,
                thumb_height, index_height, middle_height, ring_height, pinky_height);
        }

        private void btnHistory_Click(object sender, RoutedEventArgs e)
        {
            HistoryPage hist = new HistoryPage();
            this.NavigationService.Navigate(hist);
        }

        private void btnExercise_Click(object sender, RoutedEventArgs e)
        {
            ExercisePage ex = new ExercisePage();
            this.NavigationService.Navigate(ex);
        }
    }
}
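FindFingerWidth and FindFingerHeight in the listing above are both Euclidean distances: the width is measured between the two base points of a finger, and the height from the fingertip to the midpoint of that base. The geometry can be sketched independently of the Kinect types; the coordinates below are made-up illustrative points, not sensor data:

```python
import math

def finger_width(base_left, base_right):
    # Distance between the two base points of the finger
    return math.hypot(base_right[0] - base_left[0], base_right[1] - base_left[1])

def finger_height(base_left, base_right, fingertip):
    # Distance from the fingertip to the midpoint of the finger base
    mid_x = (base_left[0] + base_right[0]) / 2
    mid_y = (base_left[1] + base_right[1]) / 2
    return math.hypot(fingertip[0] - mid_x, fingertip[1] - mid_y)

print(finger_width((0, 0), (3, 4)))           # 5.0
print(finger_height((0, 0), (2, 0), (1, 4)))  # 4.0
```

Because both measures are distances in the camera's image plane, the averaging over ten samples in DataWindow is what smooths out frame-to-frame jitter before a reading is displayed or saved.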
DatabaseConnection.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using MySql.Data;
using MySql.Data.MySqlClient;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;

namespace Hand_Recognition_Interface
{
    class DatabaseConnection
    {
        private static MySqlConnection connection;

        public static void ConnectToDB()
        {
            string MyConString = "SERVER=localhost;" +
                                 "DATABASE=hand_tracker;" +
                                 "UID=hand_system;" +
                                 "PASSWORD=hand_user;";

            connection = new MySqlConnection(MyConString);
        }

        public static void ReadData()
        {
            MySqlCommand command = connection.CreateCommand();
            MySqlDataReader Reader;

            command.CommandText = "select * from readings";
            connection.Open();

            Reader = command.ExecuteReader();
            while (Reader.Read())
            {
                string thisrow = "";
                for (int i = 0; i < Reader.FieldCount; i++)
                    thisrow += Reader.GetValue(i).ToString() + ",";
                MessageBox.Show(thisrow);
            }
        }

        public static Array FindUser(String user)
        {
            String[] user_credentials = new String[3];

            connection.Open();
            MySqlCommand command = connection.CreateCommand();
            MySqlDataReader Reader;

            command.CommandText = "SELECT `user_id`,`username`,`password` FROM `users` WHERE `username` = '" + user + "'";

            Reader = command.ExecuteReader();

            while (Reader.Read())
            {
                // 0: user_id, 1: username, 2: password
                for (int i = 0; i < Reader.FieldCount; i++)
                    user_credentials[i] = Reader.GetValue(i).ToString();
            }

            connection.Close();
            return user_credentials;
        }

        public static MySqlDataReader FindReadings(String from, String to, String user_id)
        {
            connection.Open();

            MySqlCommand command = connection.CreateCommand();
            MySqlDataReader Reader;

            command.CommandText = "SELECT `left_or_right`,`thumb`,`index`,`middle`,`ring`,`pinky`,`timestamp` FROM `readings` WHERE `user_id` = '" + user_id + "' AND `timestamp` >= '" + from + "' AND `timestamp` <= '" + to + "'";

            Reader = command.ExecuteReader();

            return Reader;
        }

        public static void SaveReadings(int user_id, String left_or_right, double thumb, double index, double middle, double ring, double pinky, double thumb_height, double index_height, double middle_height, double ring_height, double pinky_height)
        {
            connection.Open();

            MySqlCommand command = connection.CreateCommand();

            command.CommandText = "INSERT INTO `readings` (`thumb`,`index`,`middle`,`ring`,`pinky`,`thumb_height`,`index_height`,`middle_height`,`ring_height`,`pinky_height`,`left_or_right`,`user_id`) VALUES ('" + thumb + "','" + index + "','" + middle + "','" + ring + "','" + pinky + "','" + thumb_height + "','" + index_height + "','" + middle_height + "','" + ring_height + "','" + pinky_height + "','" + left_or_right + "','" + user_id + "')";
            command.ExecuteNonQuery();
            connection.Close();
        }

        public static void CloseConnection()
        {
            connection.Close();
        }
    }
}
MySQL dump of database

-- phpMyAdmin SQL Dump
-- version 3.4.5
-- http://www.phpmyadmin.net
--
-- Host: localhost
-- Generation Time: May 02, 2012 at 10:31 PM
-- Server version: 5.5.16
-- PHP Version: 5.3.8

SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";

--
-- Database: `hand_tracker`
--

-- --------------------------------------------------------

--
-- Table structure for table `readings`
--

CREATE TABLE IF NOT EXISTS `readings` (
  `id` int(21) NOT NULL AUTO_INCREMENT,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `thumb` double NOT NULL,
  `index` double NOT NULL,
  `middle` double NOT NULL,
  `ring` double NOT NULL,
  `pinky` double NOT NULL,
  `thumb_height` double NOT NULL,
  `index_height` double NOT NULL,
  `middle_height` double NOT NULL,
  `ring_height` double NOT NULL,
  `pinky_height` double NOT NULL,
  `left_or_right` enum('left','right') NOT NULL,
  `user_id` int(11) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10 ;

-- --------------------------------------------------------

--
-- Table structure for table `users`
--

CREATE TABLE IF NOT EXISTS `users` (
  `user_id` int(11) NOT NULL AUTO_INCREMENT,
  `username` text NOT NULL,
  `password` varchar(60) NOT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
References

Arthritis: Rheumatoid Arthritis. (2008). Retrieved December 08, 2011, from American Society for Surgery of the Hand: http://www.assh.org/Public/HandConditions/Pages/ArthritisRheumatoidArthritis.aspx
Handout on Health: Rheumatoid Arthritis. (2009). Retrieved December 05, 2011, from National Institute of Arthritis and Musculoskeletal and Skin Diseases: http://www.niams.nih.gov/Health_Info/Rheumatic_Disease/default.asp
DAS Booklet - Quick reference guide for Healthcare Professionals. (2010, February). Retrieved December 08, 2011, from National Rheumatoid Arthritis Society: http://www.nras.org.uk/includes/documents/cm_docs/2010/d/das_quick_reference.pdf
GigE Vision for 3D Medical Research. (2010, May 13). Retrieved from Allied Vision Technologies: http://www.alliedvisiontec.com/us/products/applications/application-case-study/article/gige-vision-for-3d-medical-research.html
Limitations of the Kinect. (2010, December 17). Retrieved from I Heart Robotics: http://www.iheartrobotics.com/2010/12/limitations-of-kinect.html
Rheumatoid: Hand Exam. (2011). Retrieved December 08, 2011, from Clinical Exam: http://clinicalexam.com/pda/r_hand.htm
Ackerman, E. (2011, June 17). Microsoft Releases Kinect SDK, Roboticists Cackle With Glee. Retrieved from IEEE Spectrum - Automaton: http://spectrum.ieee.org/automaton/robotics/diy/microsoft-releases-kinect-sdk-roboticists-cackle-with-glee
Arnett, F., Edworthy, S., Bloch, D., Mcshane, D., Fries, F., Cooper, N., . . . Hunder, G. (1988). The American Rheumatism Association 1987 Revised Criteria for the Classification of Rheumatoid Arthritis. Arthritis & Rheumatism, 315-324.
Ashton, L., & Myers, S. (2004). Serial Grip Testing - Its Role in Assessment of Wrist and Hand Disability. The Internet Journal of Surgery, Vol. 5.
Basili, V., & Selby, R. (1987). Comparing the Effectiveness of Software Testing Strategies. IEEE Transactions on Software Engineering, 1278-1296.
Black, S., Kushner, I., & Samols, D. (2004). C-reactive Protein. The Journal of Biological Chemistry, 48487-48490.
Boehm, B. (1988). A Spiral Model for Software Development and Enhancement. IEEE Computer, 61-72.
Bradski, G., & Kaehler, A. (2008). Learning OpenCV: computer vision with the OpenCV library. Sebastapol: O'Reilly Media, Inc.
Carmody, T. (2010, 11 5). How Facial Recognition Works in Xbox Kinect. Retrieved from Wired: http://www.wired.com/gadgetlab/2010/11/how-facial-recognition-works-in-xbox-kinect/
Chaczko, Z., & Yeoh, L. (2007). A Preliminary Investigation on Computer Vision for Telemedicine Systems using OpenCV. Swinburne: 2010 Second International Conference on Machine Learning and Computing.
Chen, Y., Cheng, T., & Hsu, S. (2009). Ultrasound in rheumatoid arthritis. Formosan Journal of Rheumatology, 1-7.
Chung, L., Cesar, J., & Sampaio, J. (2009). On Non-Functional Requirements in Software Engineering. Lecture Notes in Computer Science, 363-379.
Condell, J., Curran, K., Quigley, T., Gardiner, P., McNeill, M., Winder, J., . . . Connolly, J. (2010). Finger Movement Measurements in Arthritic Patients Using Wearable Sensor Enabled Gloves. Londonderry: University of Ulster.
Conger, S. (2011). Software Development Life Cycles and Methodologies: Fixing the Old and Adopting the New. International Journal of Information Technologies and Systems Approach, 1-22.
de Kraker, M., Selles, R., Molenaar, T., Schreuders, A., Hovius, S., & Stam, H. (2009). Palmar Abduction Measurements: Reliability and Introduction of Normative Data in Healthy Children. The Journal of Hand Surgery, 1704-1708.
Dipietro, L., Sabatini, A., & Dario, P. (2008). A Survey of Glove-Based Systems and Their Applications. IEEE Transactions On Systems, Man, And Cybernetics—Part C: Applications And Reviews , 461-482.
Eberhardt, K., Malcus-Johnson, P., & Rydgren, L. (1991). The Occurrence and Significance of Hand Deformities in Early Rheumatoid Arthritis. British Journal of Rheumatology, 211-213.
Fess, E. (1987). A method for checking Jamar dynamometer calibration. Journal of Hand Therapy, 28-32.
Fess, E. (1995). Documentation: Essential elements of an upper extremity assessment battery. In J. Hunter, E. Mackin, & A. Calahan, Rehabilitation of the hand: Surgery and therapy (4th edn) (pp. 185-214). St. Louis: Mosby.
Fries, J., Spitz, P., Kraines, R., & Holman, H. (1980). Measurement of patient outcome in arthritis. Arthritis & Rheumatism, 137-145.
Hamilton, A., Balnave, R., & Adams, R. (1994). Grip strength testing reliability. Journal of Hand Therapy, 163-170.
Majithia, V., & Geraci, A. (2007). Rheumatoid Arthritis: Diagnosis and Management. The American Journal of Medicine, 936-939.
Panayi, G. (2003). What is RA? National Rheumatoid Arthritis Society.
Richards, L., & Palmiter-Thomas, P. (1996). Grip strength measurement: A critical review of tools, methods and clinical utility. Critical reviews in Physical and Rehabilitation Medicine, 87-109.
Roman, G. (1985). A Taxonomy of Current Issues in Requirements Engineering. IEEE Computer, 14-21.
Schofield, P., Aveyard, B., & Black, C. (2007). Management of Pain in Older People. Keswick: M&K Publishing.
Tölgyessy, M., & Hubinský, P. (2011). The Kinect Sensor in Robotics Education. Bratislava: Slovak University of Technology.
Wiegers, K. (2003). Software Requirements. Redmond: Microsoft.
Worden, J. (2011). Rheumatoid Arthritis. Retrieved from BBC Health: http://www.bbc.co.uk/health/physical_health/conditions/in_depth/arthritis/aboutarthritis_rheumatoid.shtml