devworx> Guide to Building Ultrabook Apps
Target Intel Ultrabooks with responsive and interactive applications.
Introduction | Build Touch apps | Sensor and Location apps | Under the hood | Getting your hands dirty
A 9.9 Media Publication
*Contents
01 Introduction >> The latest additions to the bouquet of Ultrabooks come complete with gyroscopes, proximity sensors and several other sensors, enabling a fulfilling experience for the end user.
02 Build Touch Applications >> As a developer, why limit your target audience to smartphone and tablet users? With Windows 8, you can now target Intel Ultrabook users.
03 Sensor and location apps >> With Windows 8 closely supporting various types of sensors, there's more to explore than what meets the eye.
04 Under the hood >> In addition to touch, Intel Ultrabooks support several sensor types.
05 Get your hands dirty >> Here's the real deal. Four sample apps to learn from.
Editorial
Assistant Editor
Nash David
Writers
Kshitij Sobti
with inputs from Intel
Design
Sr. Creative Director
Jayan K Narayanan
Sr. Art Director
Anil VK
Associate Art Directors
Atul Deshmukh & Anil T
Sr. Visualisers
Manav Sachdev & Shokeen Saifi
Visualiser
Baiju NV
Contributing Designer
Vijay Padaya
Brand
Product Manager
Navneet Miglani
Manager Online
Shauvik Kumar
Manager Product Marketing
Chandan Sisodia
Cover Design
Manav Sachdev
© 9.9 Mediaworx Pvt. Ltd. No part of this book may be reproduced, stored, or transmitted in any form
or by any means without the prior written permission of the publisher.
October 2012. Free with Digit. If you have paid to buy this book from any
source other than 9.9 Mediaworx Pvt. Ltd., please write to
[email protected] with details
build Ultrabook apps> Your road to innovation, one line at a time >
Time for some creativity! We need a new cover photo for our Facebook page. Can you help us design it? Feel free to use the logos at
dvwx.in/VsQyC5 to highlight the devworx identity. Post your creations on the page!
*Introduction
01 The consumer electronics market is witnessing an unprecedented transition in product evolution – from smartphones and laptops to netbooks and tablets. The latest addition is the ultrabook – complete with gyroscopes, proximity sensors and several others – enabling interactive applications to be built around the device, giving a fulfilling experience to the end user. As a developer, this book will guide you with the tools needed to enable this.
October 2012 | www.devworx.in
There has been an astronomical rise in the use of mobile
computing and our expectations from mobile devices
are constantly increasing. What used to be simple com-
munication devices are now full-fledged computers with
their own software ecosystem. Simply being a phone is no longer
enough: it needs to have internet access, a good browser,
multimedia capability, and an app store with a large number of good appli-
cations. You should be able to touch it, shake it, talk to it and it should
understand what you do.
Yet there are still some limitations that apply to smartphones, and even
tablets. The form factor does not allow for many of the heavy computing
tasks that we routinely perform on our laptops and desktops.
On the other hand, even on the more powerful laptops, features such as
touch input, geolocation, ambient light sensors, accelerometers—features
that are staples of the mobile platform—have traditionally been absent.
Mobiles are becoming increasingly powerful, but they will never be
able to take on all the tasks that people need laptops for. What can happen,
however, is that as mobile phones become more popular and people begin
to expect more from their devices, a simple portable computer is no longer
enough. We want a computer that is sleek, light and stylish; can run for
hours without recharging; performs well; and most importantly is always
connected to the internet.
We have come to expect all of this from mobiles and tablets, so the
expectation from other portable devices are also similar. A modern device
needs to be touch and voice capable, and have our favourite applications
at our fingertips. A modern device needs to have access to an App store so
we can discover, purchase and install new software.
The Ultrabook platform is Intel's answer to this growing need for ultra-
Join the discussion on Facebook and stay updated on latest news and features. Scan the QR code using your smartphone, now!
Prefer 140 characters? Follow us: @devworx.
portable devices that can still perform most of the computing tasks that
one can accomplish on a laptop or desktop, while not compromising on
the kind of mobile computing experience that people can get from tablets
and smartphones.
All of these factors have led Intel to design Ultrabooks, which it believes
form a new category of devices. Intel has a few key
requirements/specifications for an Ultrabook device:
Thin/light designs: Less than 0.8 inches in thickness; some current
systems are much thinner.
Ultra-fast start-up: Systems must wake from a very deep sleep state
(hibernate) to full use (keyboard interaction) in less than 7 seconds and
wake from “sleep” mode even faster.
Extended battery life: Ultrabook devices offer at least 5 hours of battery
life with many providing 8 hours or more, even in the sleekest form factors.
Security enabled: Ultrabook systems come enabled with such security
features as Intel® Identity Protection technology to provide a more secure
online experience for activities like shopping, banking or gaming online;
and Intel anti-theft technology to deter theft.
Processor: Powered by second and third generation Intel Core Processor
Family for Ultrabook.
Ultrabooks from dozens of manufacturers, Acer, Asus, Dell, HP, Lenovo,
LG, Samsung and Toshiba among others, are already available, and yet
others are coming soon.
Ultrabooks using Intel’s 3rd generation (“Ivy Bridge”) Intel Core proces-
sors are coming soon with support for advanced technologies such as USB
3.0 and Thunderbolt. These devices will also be touch enabled so people
can use touch in addition to keyboard and mouse interaction.
Still further ahead in the future lies "Haswell", which will aim to curtail
microprocessor power consumption to 10-20 watts – half the current figure.
Mobile devices have also had the advantage of offering people a range of
inputs that go beyond just touch, and the standard mouse and keyboards.
Modern mobile devices (tablets and smartphones) usually offer features such
as accelerometers, to detect movement of the device; gyroscopes, to detect
the orientation of the device; digital compasses, to obtain information
about the direction of the device; GPS sensors, for locating the device on
Earth; and ambient light sensors, to measure ambient lighting conditions.
The presence of these sensors opens a whole new range of interactions
for software; the kind of interactivity that is missing on traditional laptops.
Ultrabooks will come equipped with
these sensors and developers need to
be aware of what these sensors are
capable of and take advantage of them
while writing software.
Additionally, Ultrabooks will
include a range of new Intel technolo-
gies that improve the experience for
mobile users. These are:
Intel® Rapid Start Technology
returns the Ultrabook™ to full oper-
ational power within seconds. This
ultra-responsive capability gives the
device the power to resume in a flash,
and ultra-low power consumption when on standby.
Intel® Smart Response Technology quickly recognises and stores the most
frequently used files and applications where they can be accessed right away.
Intel® Smart Connect Technology keeps email, favourite apps, and social
networks continually and automatically updated even when the system is
asleep. (Available on select systems).
Intel® Anti-Theft Technology (Intel® AT) is smart security hardware that
helps protect data by disabling a lost or stolen Ultrabook™ from anywhere
in the world. When the Ultrabook™ is returned, it can be easily reactivated
without harm to any data or digital content. (Available as an option on
designated Intel® Core™ processor-based Ultrabook™ devices).
Intel® Identity Protection Technology (Intel® IPT) helps protect your
identity and assets online by adding a trusted link to the system, your
accounts, and your favourite online places (available on select systems). A developer
creating applications for Ultrabooks can count on these technologies, and
should build applications accordingly.
*Build Touch Applications
>>As a developer, why limit your target audience to smartphone and tablet users? With Windows 8, you can now target Intel Ultrabook users.
e’re used to interacting with our smartphones and
tablets using touch. Yet, when we use a desktop or a
notebook, we typically use a keyboard or mouse. With
touch capability becoming available in an increasing
number of devices, it became important to understand
how exactly touch affects users and their productivity.
This is important in understanding if touch is something that is only useful
on mobiles and tablets, or whether it could be of use in more traditional
devices as well. To this end, Intel undertook a research program to better
understand if and how people might use touch capabilities in more tradi-
tional, notebook form-factor devices.
To spoil the ending, the results were positive—very positive, in fact.
Users who were presented with a way to interact with their computers via
touch, keyboard, and mouse found it an extremely natural and fluid way
of working.
Let’s look at the research Intel’s team undertook, and then at how we can
bring the lessons learnt from that information forward into our applications.
Scoping the Problem
It's easy enough to be dazzled by the appeal of developing applications for
smart phones or building a new, snazzy website. However, the reality is
that most of the software development that happens in the corporate arena
relates to internal, proprietary, line-of-business applications.
Mirroring the trends of the rest of the world, businesses are increasingly
taking advantage of highly portable notebooks and tablets for using internal
business-centric applications. Being able to optimise existing applications
for touch can realise real benefits, as can being able to engineer new applica-
tions with touch capability in mind.
Who and Where
The research was led by Daria Loi,
Ph.D. Daria holds the position of
UX Innovation Manager at Intel
and has a specialist interest in how
people use and adapt to new tech-
nologies in user interaction.
Daria’s research took place
with individuals in the United
States, Brazil, Italy, and China. In
each case, she and her team took
pains to reproduce the subject’s
environment as closely as possible.
For example, in China, it’s typical
for users to use their device in bed,
so the test environment was rearranged to simulate a bedroom. The sub-
ject was taken through the same suite of exercises—specifically, changing
settings, creating a Microsoft Office PowerPoint slide, writing an email
message, browsing and searching, online purchasing, video calling, and
cropping an image. Subjects were a mix of those who had experience with
touch on a day-to-day basis (e.g., with a smart phone or tablet) and those
who were new to working with touch.
The team found that users would naturally move among using touch,
mouse, or trackpad entry, or keyboard entry depending on the task at hand.
This finding was really the key takeaway of the research. Unconscious cues
instructed the subject as to the optimal approach on a step-by-step basis.
This behaviour occurs in normal, non-touch scenarios as well. Consider
situations in which you might press a keyboard shortcut to save a document
but use the mouse to click an icon on a toolbar. Introducing touch brings in
another vector that users can choose at each step in a complex task. Touch
is a basic, organic thing to do, making it sympathetic to users’ needs. Users
will conceive of a need to activate a given button; they can then simply
reach out and touch it, much as they might an object in the real world, if
that approach suits them at the time.
Remember that although people often talk about touch as “sophisticated,”
it’s not. It’s the most basic of our capabilities as humans. A child may need
to be taught how a mouse works, how moving the mouse moves a small icon
on screen, and what pressing its different buttons does. Touch, however, is not
[Figure: Device interaction by input type]
something that one has to learn; it is after all how we interact with the rest
of the world. It is perhaps the most natural way to interact with a computer.
Likely because touch is such a natural thing to use, users reported
that touch transforms the notebook from a work device into a play device.
Daria and her team felt that this was because touch enables people to feel
immersed, transforming a laptop into something delightful to use. (Despite
the implied association with work, they still wanted the keyboard.) The
figure above shows the breakdown among interaction types.
Touch also allows a new form of interaction: “flicking.” Using “momentum
scrolling,” a fast and urgent flick of the finger can scroll a list a great distance
compared to using the wheel on a mouse. Scrolling long distances using
a mouse wheel is often tedious. Using a scroll bar with a trackpad is often
both tedious and fiddly. However, flicking is neither. It’s simple and effective
at moving large distances quickly.
Portability
Importantly for this class of devices, users reported that although they would
be willing to get rid of their trackpads—and to an extent, mouse devices—
they were not willing to get rid of their keyboards. An on-screen keyboard
reduces the available screen area, which in turn reduces productivity. It
also lacks tactile feedback.
Intel took these lessons to heart while developing the Ultrabook devices,
and has been working to make the notebook proposition even more port-
able. Touch helps with this. Rather than having to transport the notebook
as well as a mouse, the user just carries the notebook. This is, of course,
analogous to the way that keyboards on laptops have always worked. Intel’s
researchers had found in the past that users are generally hostile toward
trackpads because of the fussy and unnatural way in which they operate.
Trackpads represent a compromise in notebook form factors, whereas
touch is not. Touch is a first-class input capability, just as the keyboard is.
As part of this element of the work, the team wanted to understand how
subjects would feel about convertibles and, by extension, tablet computers.
The subjects were generally not keen to do away with their keyboards,
even if it was just a temporary disconnection from the device. Convertible
devices, although not directly tested, were considered appropriate by the
test subjects. What was clear is that the subjects generally wanted to keep
the work productivity space, yet have it enriched.
Conventional wisdom
Conventional wisdom when approaching touch suggests that it's a failure
right out of the gate because people will find it uncomfortable to keep
reaching up to the screen. The team found that this was actually not the
case, and all of the subjects adapted their behaviour to support this new
input approach by shifting their body around to be comfortable, leaning
on the available work surface, and so on. In fact, because of the duality of
working with touch or the mouse or trackpad, subjects were able to switch
between one mode and the other depending on how they felt. Far from
introducing so-called gorilla arm, users reported dramatically increased
comfort levels working in this way.
Another piece of conventional wisdom that Daria—the conductor of this
research—was keen to explore was the idea that tapping the screen would
tip the whole device back. What the team actually found was that subjects
were careful, yet confident and would touch with appropriate pressure to
register their intent without tipping the device back or pushing the screen
back on its hinges.
Applications in the enterprise
Enterprises generally have a mix of applications that are developed in-house
or purchased from third-party vendors (perhaps with customisation) and
those applications that are web-based intranet / extranet applications or
desktop applications. The ability to optimise the applications for touch
varies by application class. In addition, applications have different audi-
ences within the business who in turn have access to different forms of
hardware. Employees who spend a lot of time on the road usually have
notebooks while those who work from an office typically have desktops.
An important secondary consideration is whether the audience for the
application consists entirely of people who use notebooks or whether there
is a mix of notebook and desktop. If the usage is exclusively notebook based,
you don’t have to consider being able to turn on or off touch optimisation.
Touch optimisation is not necessarily about making something touch-
able, however. For example, when using touch, the user interface (UI)
“targets” (buttons, fields, etc.) need to be larger to compensate for the lack
of accuracy. But when you make things bigger, you lose real estate and can
display less data and fewer UI elements on screen. In applications that need a
mixed approach to input, it’s desirable to turn on touch optimisation (for
example, make everything bigger) or turn it off.
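One way to honour this on/off requirement, sketched below in TypeScript, is to infer the user's current input mix and toggle the touch-optimised layout from it. The `chooseLayout` helper, its threshold, and the `touch-ui` class name are illustrative assumptions, not part of any standard API.

```typescript
// Sketch: decide whether to apply a touch-optimised layout based on
// the input types recently used. All names here are illustrative.
type PointerKind = "touch" | "mouse" | "pen";

// Returns "touch" when touch dominates recent interactions, else "compact".
function chooseLayout(recent: PointerKind[]): "touch" | "compact" {
  if (recent.length === 0) return "compact"; // default to the dense layout
  const touches = recent.filter((k) => k === "touch").length;
  // Switch to large targets once at least half the recent events are touch.
  return touches * 2 >= recent.length ? "touch" : "compact";
}

// In a browser, this might be fed from pointer events, e.g.:
//   window.addEventListener("pointerdown", (e) => {
//     history.push(e.pointerType as PointerKind);
//     document.body.classList.toggle("touch-ui",
//       chooseLayout(history) === "touch");
//   });
```

The CSS class then scales targets up or down, so the same form serves both the desktop-bound and the notebook audience.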
Whether the application is a legacy application or a new engineering
effort has no direct relevance on touch optimisation work. You’ll be able to
construct a quantified business case around touch optimisation regardless
of whether you’re modifying existing software or undertaking a new project.
The next two sections look at ways to introduce touch optimisation to
desktop applications and web applications.
Touch for desktop apps
First of all, let us clarify that "desktop applications" here doesn't just mean
applications developed for non-portable, traditional desktop machines. What
we mean to convey is that these are applications that are delivered via the
traditional desktop model. The user downloads or otherwise acquires the
software, and then installs it. The software runs directly on the machine
rather than in a web browser. The machine itself could be a desktop, notebook
or netbook. In fact what we are talking about is more relevant to portable
devices with touch screens, not actual desktops.
Looking back at the way the research subjects approached touch capa-
bility, they tended to move between touch and the mouse depending on
subtle factors below the level of conscious awareness. What this tells us is that
the most comfortable mode of operation is not one where we are insisting
that they use one method or another; rather, it is indeterminate. A user
may click a button on a form using the mouse 80 percent of the time and
use touch 20 percent of the time one day and entirely reverse this pattern
the following day.
Whatever you design has to suit either input mode. Do not decide for
the user, or force the user to use only touch for certain tasks and only the
mouse for others. Of course, if you fail in your design and end up making
the user's input decision for him or her, frustration will certainly ensue.
Incidentally, the choice of platform and framework used in the
application's construction is probably not relevant when talking about
touch optimisation. Java and Microsoft's .NET, for example, are on an
equal footing: neither platform considers touch a first-class way of
providing input into an application. Over time, though, this view is likely
to change as touch works its way backwards from post-PC devices to PC
hardware. Microsoft's Metro, for example, is entirely touch based and
focused. For now, any touch work you'll have to do yourself, although such
work tends not to be onerous.
Really, all you’re trying to do is be sympathetic to the user. After all if
they are frustrated by using your application on a touch device, they will
look for an alternative that works. And the chances of this—a person
using your application on a touch-enabled computer—are only going to increase.
Targets
For desktop applications, the major consideration is target size. Many
business applications are form based, and such applications tend to have
a reputation for being tightly packed. Often the interfaces are dense with
information and controls to improve the efficiency for repetitive tasks.
Touch simply does not give the kind of precision that one can get with
a mouse. While a mouse always registers an exact, single-pixel coordinate
under the hotspot of the cursor, the operating system assumes the input
from a touch screen is likely to be inaccurate and compensates accordingly.
The operating system thus has to be a little intuitive and "guess" where you
wanted to click, based on where the elements lie on the user interface,
and the “fuzzy” area covered by the touch of a finger.
For example, if the touch screen reports that you pressed just outside
of a field, the operating system may choose to interpret this as a click in the
field, as you are more likely to want to click in the field than you are to click
outside of the field. Therefore, one way in which you can help the operating
system is to space the fields so that presses outside of fields can be more
reliably mapped as intentions to click in fields. The same applies to buttons.
Another way to help the operating system is to make the fields larger,
for the simple reason that a larger field is easier to hit. Generally though,
the standard presentation of buttons and fields is of an appropriate size for
touch or mouse operation.
Oftentimes, forms get cramped because software engineers want to
get everything onto a single form, so it's easy to find applications that
have fussy or fiddly forms. When touch-optimising this sort of form, you
may find it appropriate to break out individual functionality into tabs. Of
course, you also have to make the tabs an appropriate size. By default, tabs
are typically too small, probably because designers generally considered
them a secondary UI element. For touch optimisation, you’re turning that
secondary element back into a primary element by making it larger.
Toolbars and menus are another consideration. Menus are usually
managed by the operating system, and as such, you can assume that the
operating system will interpret touch in an optimal way when working
with menus. That said, because it's likely the user will have a keyboard,
you may choose to make some of your touch optimisation work include
adding keyboard accelerators for driving the menu, thereby generally
reducing the need to use the menu at all.
In terms of toolbars, you may find that the buttons are too small. Tool-
bars are necessary, but developers tend to make them as small as possible,
because they are a secondary feature compared to the display of the data
you’re working on.
The bigger problem with toolbars is tooltips. Introduced with Microsoft
Windows 95, these tips allow users to use the mouse to obtain a deeper
understanding of what a button does. Toolbar icons are often cryptic, and
often their meaning is established by convention rather than by a meaningful
image. Consider the often-used floppy icon for saving files. To most people
of this generation that icon bears no resemblance to anything they use;
for them it is simply "the save icon" rather than the image of a floppy. A new
user, then, would have to look at a tooltip over this button to understand what
it means, rather than inferring it from the icon itself.
The Intel team’s research didn’t include a test as to whether users took
advantage of tooltips, but it’s easy to assume that users wanting to know
what a button did might well hold it down and expect more information to
appear. This functionality is certainly doable, but consider that the user’s
finger and hand will be obscuring the screen around the button, etc. There-
fore, you will need to consider where you display the tooltip. As tooltips
are a standard feature of most toolbar libraries, this change of presentation
may require additional work.
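As a rough illustration of that placement concern, the hypothetical `tooltipPosition` helper below places the tip above the control for touch input, where the hand would otherwise cover it, and below for mouse input. The shapes and the 8-pixel gap are assumptions for the sketch, not a toolbar-library API.

```typescript
// Sketch: position a tooltip so a finger does not cover it.
interface Rect { x: number; y: number; w: number; h: number; }

// For touch, place the tip above the button (the hand obscures the area
// below and around the press); for mouse, the conventional spot below works.
function tooltipPosition(
  button: Rect,
  tip: { w: number; h: number },
  input: "touch" | "mouse"
): { x: number; y: number } {
  const x = button.x + (button.w - tip.w) / 2; // centre horizontally
  const y = input === "touch"
    ? button.y - tip.h - 8     // above, with a small gap
    : button.y + button.h + 8; // below, as usual
  return { x, y };
}
```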
Traditional desktop applications often rely on pop-up forms and dialog
boxes. The only consideration here is that people often want to move and
resize these items. The operating system will, of course, do this for you.
Moreover, the operating system should do a good job by itself of handling
touch interaction on captions and borders. Therefore, verify that your
application works adequately in this regard out of the box.
Other applications
So much for basic forms applications. What about those applications with
more novel, specialised interfaces?
From time to time, developers build custom interfaces with the expecta-
tion that users will drag items around on their surfaces or scroll around.
Consider an image editing application; you are working on a canvas, where
the image can be moved around to put focus on different areas. You can add
elements to the image, and resize or rotate them. In such a case you need
to ensure that the handles for resizing and rotation are large enough to be
picked up via touch.
In such a case, you may run into a problem where the operating system
is limited in the help it can give you in interpreting touch placement. When
the operating system is asked to display a form, it has an understanding
of the construction of that form and therefore is able to make decisions
like, “The user clicked in a blank area; I suppose he or she actually wanted
to click this field.” If you custom draw the whole interface, the operating
system just sees a rendering surface without deeper meaning. As such, you
may need to replicate the fuzzy matching that the operating system does
on your own custom controls.
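A minimal sketch of such fuzzy matching for a custom-drawn surface might look like the following. The `resolveTouch` function and its "slop" radius are illustrative assumptions; a real implementation would tune the radius and perhaps weight targets by importance.

```typescript
// Sketch of the kind of "fuzzy" hit-testing the OS performs on its own
// forms, replicated for a custom-drawn surface. Names are illustrative.
interface Target { id: string; x: number; y: number; w: number; h: number; }

// Distance from a point to a rectangle (0 when the point is inside it).
function distanceToRect(px: number, py: number, t: Target): number {
  const dx = Math.max(t.x - px, 0, px - (t.x + t.w));
  const dy = Math.max(t.y - py, 0, py - (t.y + t.h));
  return Math.hypot(dx, dy);
}

// Map a touch point to the nearest target within `slop` pixels, or null.
function resolveTouch(
  px: number, py: number, targets: Target[], slop: number
): string | null {
  let best: string | null = null;
  let bestDist = Infinity;
  for (const t of targets) {
    const d = distanceToRect(px, py, t);
    if (d <= slop && d < bestDist) { bestDist = d; best = t.id; }
  }
  return best;
}
```

A press that lands just outside a resize handle thus still activates it, mirroring what the OS does for its own fields and buttons.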
Zooming and Scrolling
Operating systems typically interpret scroll intentions properly. A word of
caution, though. If you use third-party control libraries in your application,
you may run into problems. Although the operating system knows that one
of its list box controls is a list box and can infer a scroll intention, that may
not be a given for a third-party control. That said, it should generally be the
case that scrolling works alright, as even third-party controls tend to build
on and extend the regular operating system controls.
One area where you may find yourself working harder is zooming. “Pinch
to zoom” on touch devices is natural, but there’s no built-in support for it
on the standard controls.
Gestures such as pinch-to-zoom aren’t something that can be replicated
with a mouse though, so it is important to not have them as the exclusive
means to perform an action. Otherwise you will leave behind mouse users.
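The arithmetic behind a hand-rolled pinch-to-zoom is straightforward: the zoom factor is the ratio of the current distance between the two touch points to the distance when the gesture began. The sketch below shows just that calculation; `pinchScale` is a hypothetical name, and wiring it to actual touch events (and to a mouse-friendly fallback such as Ctrl+scroll or +/- buttons) is left to the application.

```typescript
// Sketch: the core arithmetic behind a hand-rolled pinch-to-zoom gesture.
type Point = { x: number; y: number };

function distance(a: Point, b: Point): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Given the two starting touch points and the two current points, return
// the zoom factor to apply (>1 means zoom in, <1 means zoom out).
function pinchScale(start: [Point, Point], now: [Point, Point]): number {
  const d0 = distance(start[0], start[1]);
  if (d0 === 0) return 1; // degenerate gesture; leave the zoom unchanged
  return distance(now[0], now[1]) / d0;
}
```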
Note: As a developer, if you’re working on touch optimisation, you
don’t have the freedom of moving between touch and mouse. It’s a given
that whatever you do will work with a mouse. You need to keep on top of
whether an action works with touch.
Touch Capability in Web Applications
When considering touch in web applications, generally speaking we're
talking about the same rules as for desktop applications with regard to
target size and placement. Fields need to be larger and more widely spaced.
From an engineering perspective, however, it’s likely to be easier to make
such modifications, because HTML layout is designed to be more dynamic
and judicious adjustments to the cascading style sheet (CSS) may require
little work to actually implement the change.
Web applications tend to be simpler than desktop applications—
at least for the sorts of applications you find in enterprises. More inventive
ways of building interfaces can be found in Web 2.0 applications. Enterprise-
type web applications are typically forms based without terribly sophis-
ticated, complex, custom UIs. As such, after you’ve negotiated optimising
against the target size and placement vector, you should be fairly well set.
There are some wrinkles in these applications, though. A major one is
that it’s unlikely the browser of choice will be touch optimised, particularly
if the organisation has standardised on an old browser for compatibility
reasons. Windows Internet Explorer 10—the default browser that will be
supplied with Windows 8 and Windows RT—will be touch optimised. Older
versions of Internet Explorer are not. Other browsers probably will not be.
This suggests that testing and tweaking any touch optimisation work is
likely to be more difficult with web applications.
You may wonder why a touch-optimised browser is important if the
application itself is what needs to be touch optimised. Remember how we
said that if you have an application interface you are rendering yourself,
the OS will not be able to understand your intentions? Well, the browser
is such an application. The OS does not know where a button or an image
lies within your web app. A browser that is designed for touch
has that intuitive edge, and will itself try to understand what the user
was trying to do.
Links are a good example of why touch optimisation is needed in the browser
as well. The biggest problem with touch-optimising a web application is
that links tend to be too small for users to touch with confidence. In situ-
ations where the user is using a touch-optimised browser, it’s likely the
browser will do a decent job of interpreting the user's touch intentions. Even
so, a user is likely to be aware that they are trying to reach a small target,
and that affects their confidence in whether the touch will have the
desired effect; this damages the experience and limits the value of the
work done. Plain links should generally be avoided in web applications
and replaced with buttons. (These buttons don’t necessarily need to be
that much larger. You’re looking to increase confidence primarily. Spacing
remains important, however.) Again, thanks to CSS, links can often be
automatically stylised as buttons.
Dialog boxes and pop-up windows do appear in web applications from time to time, though less frequently than in their desktop cousins. The same rules apply here too: make sure that people can move them around and resize them. Even with a touch-optimised browser you're unlikely to get much help supporting such functionality, but it is easier than in desktop applications to give artifacts such as captions different sizes.
Conclusion
It's becoming increasingly common to find people working in software development talking about "user experience." Far from being a fashion statement, or a sop that lets developers ignore what users need, this growing understanding of the impact software has on users comes with the maturation of the industry. Developers and project sponsors tend to be more involved in, and sympathetic to, how users feel about the software they are asked to use.
What's clear from the research that Daria and her team undertook at Intel is that the user experience can be improved dramatically by involving touch. It turns a work device into a play device, and in a commercial setting, that's not a bad thing. Instead of thinking about it being play, think
about it as the software getting out of the way and allowing the user's creativity to come to the fore.
While research is important, and in this case it shows that touch improves the experience of interacting with a computer, it is only the first step. It is equally important to understand how to actually develop applications that take advantage of such capabilities, and here we looked at how legacy and new applications can be optimised for touch.
Users will tend to switch between touch and mouse operations without conscious consideration. You can deliver value within the application by making targets larger and working with space. Keep your testing efforts focused on touch, trusting that more traditional manipulation with the mouse will work regardless, but remember that gestures such as pinch-to-zoom are exclusive to multi-touch devices. Either way, with a bit of effort, you can use touch to deliver better, more engaging, and more compelling applications to your user base.
Case Study: Enabling Touch in Windows 8 Metro Style Apps
With the next version of Windows, Microsoft is translating some of its experience with its phone products to its main desktop OS. Microsoft calls this user experience Metro. The Metro name might be going away soon due to a dispute, but since the new name isn't known, that is the name we will use for now.
Windows 8 will feature two different ways to create applications and two different user experiences: one is the classic desktop, and the other is Metro. Metro is a complete departure from the classic Windows UI, and is optimised heavily for touch screens. Touch screens need big targets, and that isn't just true of the OS, but of each application as well. Windows 7 came with support for multi-touch screens, but no one made much of it because neither the Windows UI nor the UI of most Windows apps was very conducive to a good multi-touch experience. Windows 8 rectifies this with a fresh new tile-based Metro UI.
The Metro style UI isn't just a departure from the "classic" Windows style UI, but from most other touch screen UIs as well. Metro tries to do away with skeuomorphism in favour of a more authentic digital experience. Skeuomorphism is the practice of retaining design features just for the sake of familiarity or tradition rather than functionality; a commonly found example is a metal door painted to resemble wood. In the digital world we find UIs that try to mimic physical objects, such as software buttons that look like physical buttons, or an eBook viewer with actual pages that the user can turn. Metro aims to be authentic in its approach to design: you aren't really reading a book, and that isn't really a button, so why fake it? The resulting UI is much cleaner and more digital.
More and more, touch-enabled applications are becoming commonplace
as users demand a “hands on” experience. Windows 8 Metro style apps and
the corresponding Visual Studio (VS) IDE align with these new demands
by providing developers with touch APIs that enable consistent touch
experiences for end users.
If an app is intended to be used on a touchscreen, Metro is the way to go, although it is certainly possible to create a touch-enabled "classic" app. We will discuss how to enable touch for Windows 8 Metro style apps using the C# programming language in the Visual Studio IDE. The APIs listed are current as of the Windows 8 Release Preview version of the operating system and the corresponding Microsoft Visual Studio 2012 RC IDE. In this case study we will cover how to bind user touch events from both the Extensible Application Markup Language (XAML) and "code-behind" (C#) perspectives to create a rich user experience. A children's math game is used as an example use case for touch. The combination of UI element XAML markup and C# code-behind bridges the user experience with the magic that occurs behind the scenes.
The source code in this document was written using the Windows 8 Release Preview (build 8400) released May 31st, 2012, Visual Studio Ultimate 2012 RC (11.0.50522.1 RCEL), and the .NET Framework version 4.5.50501.
Handling Tap Events in XAML
Let's say, for example, that the app at hand should do something whenever a user taps a UI element, a common enough proposition in the world of
touch. Further, let's say this UI element happens to be an Image element. Starting with the XAML declaration of the image, the respective tap event handler can be defined as follows:
<Image x:Name="c29" Height="100" Width="100" Tapped="Image_Tapped" Grid.Row="1" Grid.Column="8" />
In this example, we defined an image named "c29". Most of the attributes attached to the image are clear enough: it has a width and a height, and the Grid.Row and Grid.Column attributes specify its position on a grid. The important bit here is the Tapped attribute: Tapped="Image_Tapped"
This indicates that whenever this particular image is tapped, the method Image_Tapped in the supporting C# code-behind will be invoked, and that method performs whatever action we want as the result of tapping this image.
Tap Event Handler Code in C#
In the previous example we defined an image in XAML, and provided the name of a method (Image_Tapped) to be run when that image is tapped. We can actually use the same Image_Tapped method for multiple such images at the same time.
You might wonder what the utility of this is, or whether it just means that tapping any of those images performs the same action; after all, there is little use in having dozens of things on screen do the same thing when clicked. In reality, when an image is tapped, the Image_Tapped function is passed information about the image that was tapped. We can use this information to change what the function does based on the particular image.
In the following code sample we will find out which image was tapped by using the variables passed to the function:

// called when one of the game images is clicked
private void Image_Tapped(object sender, RoutedEventArgs e)
{
    // find out which game object was clicked
    FrameworkElement source_image = e.OriginalSource as FrameworkElement;
    for (int row = 0; row < ROWS; row++)
    {
        for (int col = 0; col < COLS; col++)
        {
            // we found a match!
            if (source_image.Name.Equals(images[row, col].Name))
            {
                ... // handler logic
Here, we are using an array of images: once we know which image was tapped, we look up its name in the array to find a match.
In addition to Tapped, the DoubleTapped keyword can be used in XAML as well. Other keywords, such as Clicked, are beyond our scope.
Pointer Events
The Tapped keyword was quite simple to use. Now it is time to peel the onion a bit. A Windows 8 user could be using any of a number of pointing devices: touch is one, but the user could also be using a mouse or a graphics tablet. You probably don't care what the user is using; you just want some way to know when a particular element of your application is clicked so you can respond to it.
Windows 8 includes APIs that abstract the exact pointer device so you
can focus on what you want to do when that particular element is activated,
rather than worry about whether it is tapped or clicked.
Just as you have the Tapped attribute in XAML, as we saw in our code sample, you have the PointerPressed, PointerReleased, and PointerExited attributes. Each of these has a distinct meaning and requires some special handling in the code, so it is best to consult the MSDN documentation to understand how this works. You can access the MSDN documentation for handling pointers here: http://dvwx.in/PLmPjz
When reading that documentation, take a look at how the pointer events are handled, especially how the information passed to the event handling function can be used to discern between touch and mouse input. It is also important to note a limitation of this API: it can only work with a single finger, not multi-touch gestures.
Manipulation events
Manipulation events are what make things really interesting. They are what enable multi-touch interactions, and they allow us to design custom gestures.
Unsurprisingly, manipulation events come with even more XAML keywords. You can see a list of these keywords in the "Using manipulation events" section of the page linked in the previous section.
To simplify things, consider a manipulation event as occurring in the following phases:
Phase I: ManipulationStarted. This is initiated at the start of the manipulation, for example when the user holds a finger down on a UI element.
Phase II: ManipulationDelta. The manipulation continues as the user keeps hold of the UI element and drags it. Even for a simple hold-drag-release sequence, this event can be invoked many times, which takes careful code consideration.
Phase III: ManipulationInertiaStarting. This is invoked when the user finally releases all fingers from the screen, sending the UI element(s) into inertia.
Other manipulation event types are described in the MSDN link. Also note that manipulations can be tested with a mouse as well, using the scroll wheel.
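To make the phase sequence concrete, here is a minimal sketch in Python (illustrative only; the class and method names are ours, not part of the WinRT API) that accumulates the total translation across delta events, much like e.Cumulative does:

```python
class Manipulation:
    """Toy model of the started/delta/inertia phases of a manipulation."""

    def __init__(self):
        self.active = False
        self.dx = 0.0
        self.dy = 0.0

    def started(self):            # Phase I: finger goes down on the element
        self.active = True
        self.dx = self.dy = 0.0

    def delta(self, dx, dy):      # Phase II: may fire many times per drag
        if self.active:
            self.dx += dx
            self.dy += dy

    def inertia_starting(self):   # Phase III: finger released
        self.active = False
        return (self.dx, self.dy)  # cumulative translation so far

m = Manipulation()
m.started()
m.delta(3, 1)
m.delta(2, -1)
print(m.inertia_starting())       # -> (5.0, 0.0)
```

The point to internalise is that the delta phase may run dozens of times for one drag, so its handler must stay cheap.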
Refer to the MSDN link for keyword usage in XAML. One interesting part of this API is that the developer can retrieve the velocity and change in position of the UI element throughout the manipulation. This allows the UI element to be manipulated via matrix transformations, also described in the link.
Testing Before Deployment: VS Emulator
After writing the touch detection glue, it is time to test it. However, manually transferring the app package to a device over a cable after every code build can get tiresome. There are two preferred methods from a productivity perspective: use the VS Remote Tools (which contain the Remote Debugger), or simply deploy the code to the VS Emulator. Here, the latter method is discussed.
Visual Studio lets the developer add breakpoints to the source code. One
Tip: In the Debug menu of VS, select the "<Project name> Properties" option in the drop-down bar, and then select the "Debug" option on the left pane. Here, a target device can be selected as either local machine (simulation in VS) or remote machine (wireless debugging on a real device via Remote Debugging).
debug scenario may be to ensure that the touch event routine satisfies two properties: the routine is never entered when no touch event occurs, and it is always entered whenever a touch event occurs. Simply add a breakpoint in VS at the start of the routine. After building the project successfully and selecting "Start Debugging" from the Debug menu in VS (or pressing the F5 key), the app starts running. Then, upon entry of the Image_Tapped routine, VS highlights the breakpoint when execution reaches that line.
Example Use Case: Children's Math Game
In this section, the concepts described above are applied to an example usage scenario: delivering a compelling children's math game.
The game idea is as follows. The user is given the option to choose a difficulty level. In every level, the user selects operands (digits) and operators (+, -, *, /). The goal is to construct an arithmetic sequence that uses both operators and operands to create a sound mathematical equation. If the equation is correct, the tiles disappear; otherwise, nothing happens. The aim is to make all operand tiles disappear. It is possible to get stuck in a level if equations aren't constructed properly, so the game requires thinking ahead.
When the app is launched, the user is shown the level selection screen. If, for instance, the user chooses level 3, the game board XAML is loaded and play begins.
On this game board, a correct equation could be constructed as follows:
(1+2)-2 = 1
Note that the game does not use the formal order of operations: equations are processed from left to right. So, here, the user could choose the following tiles in this order: 1, +, 2, -, 2, 1
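The left-to-right rule is easy to pin down in code. Here is a small sketch in Python (the game itself is written in C#; this function is ours, purely to illustrate the evaluation order):

```python
def evaluate_left_to_right(tokens):
    """Evaluate [1, '+', 2, '-', 2] strictly left to right, ignoring precedence."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    total = tokens[0]
    # pair each operator with the operand that follows it
    for op, operand in zip(tokens[1::2], tokens[2::2]):
        total = ops[op](total, operand)
    return total

print(evaluate_left_to_right([1, '+', 2, '-', 2]))  # -> 1
print(evaluate_left_to_right([1, '+', 2, '*', 3]))  # -> 9, not 7: no precedence
```

Evaluating the first five tiles above yields 1, which is why the final tile the user selects is 1.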
The game does not need an equal sign; the game logic tracks the cumulative total as the user continues selecting tiles. The following picture shows the partial selection of all but the final tile. The user simply taps on each tile to make the selection.
The selected tiles are shaded so they appear darker. Now, when the user selects "1", the equation is properly constructed, and the tiles disappear!
There is one problem: the game has no way to arithmetically eliminate all remaining operand tiles. Winning in the original level structure shown above is left as an exercise for the user!
Preview of the Math Logic
The logic for handling equation construction has two main constituents:
Queue: selected tiles are pushed onto the queue, and the queue is cleared when needed.
Finite State Machine (FSM): maintains state to determine when the equation is completed.
Finite state machine
The first element of an equation must obviously be a number; a sensible equation cannot start with a mathematical operator.
Next, the user must select a mathematical operator.
Then, an operand must be selected once again.
This number-operator-number sequence occurs at least once, but might occur multiple times for more intensive computations.
When the state machine detects that two consecutive numbers have been selected, it marks the end of the equation construction.
A portion of the related code is now presented:

// after selecting num in Q0, user must now specify an operator
case selection_state.Q1:
    if (v == value.DIV || v == value.MINUS || v == value.MULT || v == value.PLUS)
    {
        sel_state = selection_state.Q2;  // move to next state
        move_queue.Enqueue(v);           // add last user operation to the queue
        // set update flag for redraw
        updates_pending[row, col] = true;
    }
    // else, user selected a number, so we just stay in Q1
    else
    {
        clearSelection();
        move_queue.Clear();              // wipe the moves the user has made
        state[row, col] = SELECTED.YES;
        sel_state = selection_state.Q1;
        move_queue.Enqueue(v);           // add last user operation to the queue
        // set update flag for redraw
        updates_pending[row, col] = true;
    }
    break;
In the sample code, Q0 denotes the first state, where the very first operand of the equation is expected. Here we see that in Q1 the code expects the user to select an operator. If so, the selection is queued, and the tile is marked for updating. The updating logic is an array that sets a Boolean for each tile
so that on redraw, only marked tiles are updated, for performance reasons.
If the user instead selects an operand, the state does not change; the equation is simply updated so that the operand is now the most recent selection.
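The Q0/Q1/Q2 cycle described above can be traced in a few lines of Python (a sketch of the logic only; the state names follow the article, everything else is our own):

```python
OPERATORS = set('+-*/')

def run_fsm(tokens):
    """Trace the equation FSM: operand, operator, operand, ...
    A second consecutive operand marks the end of construction."""
    state, queue = 'Q0', []
    for tok in tokens:
        is_op = tok in OPERATORS
        queue.append(tok)
        if state == 'Q0' and is_op:
            queue.pop()           # an equation cannot start with an operator
        elif state == 'Q0':
            state = 'Q1'          # first operand accepted
        elif state == 'Q1' and is_op:
            state = 'Q2'          # operator accepted
        elif state == 'Q2' and not is_op:
            state = 'Q1'          # next operand accepted
        elif state == 'Q1' and not is_op:
            state = 'DONE'        # two numbers in a row: equation complete
            break
    return state, queue

print(run_fsm([1, '+', 2, '-', 2, 1])[0])   # -> DONE
```

Two consecutive operands move the machine to the terminal state, at which point the queue holds the finished equation.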
Using Manipulation to Fling UI Elements
This section is another case study example using the children's math game idea. As mentioned in a previous section, this code considers a manipulation to occur in three phases (start, delta, inertia).
The code snippets in this section show how a developer can implement a "fling" gesture: the user holds down a game tile and flings it across the screen with a finger (multi-touch is not discussed).
Recall from the previous section that we had an array of images, and when an image was tapped the event handler would look through that array to check which image it was. The following code demonstrates how to attach manipulation handlers to those same images.
// specify the manipulation events to be invoked for each game tile
for (x = 0; x < images.GetLength(0); x++)
{
    for (y = 0; y < images.GetLength(1); y++)
    {
        images[x, y].ManipulationStarted += manip_start;
        images[x, y].ManipulationDelta += manip_delta;
        images[x, y].ManipulationInertiaStarting += manip_complete;
    }
}
Here, the event handlers are specified to the right of the += operator. For example, when manipulation starts on an image, the following routine is invoked:

// invoked when user begins a gesture on game tile
void manip_start(object sender, ManipulationStartedRoutedEventArgs e)
{
    // store the elapsed time to mark the beginning of this event
    start_ms = game_clock.Elapsed.TotalMilliseconds + (1000 * game_clock.Elapsed.TotalSeconds);

    // grab the image (game tile) where manipulation began
    img = sender as Image;

    // retain image location in canvas at start of gesture
    initial_x = Canvas.GetLeft(img);
    initial_y = Canvas.GetTop(img);
    ...
The sender parameter is used to retrieve the image that was touched. The code also shows how one can capture the time at which the manipulation started (this helps with velocity calculations, which compute change in position over change in time in a given direction). Further, the Canvas class is used to capture the x and y coordinates of the touched image at the beginning of the manipulation. Note that, as in the previous sections, the developer can specify a specific event argument type in the routine; in this case, the sample code uses ManipulationStartedRoutedEventArgs.
The body of the manip_delta routine is not shown here. When the user releases their finger(s) from the screen, the following event is invoked; the snippet shows how to compute velocity. It is then left to the developer to decide how to model friction, for example by linearly damping the velocity over time to slow the flung object down.

// invoked when user ends a gesture on game tile... use final velocity for projection
void manip_complete(object sender, ManipulationInertiaStartingRoutedEventArgs e)
{
    // we will compute the average linear velocity as
    // change in position over time of the manipulation interval;
    // note that velocity is a vector, not a scalar, as it
    // has both magnitude and directional components

    finish_ms = game_clock.Elapsed.TotalMilliseconds + (1000 * game_clock.Elapsed.TotalSeconds);

    vel_x = e.Cumulative.Translation.X / (finish_ms - start_ms);
    vel_y = e.Cumulative.Translation.Y / (finish_ms - start_ms);
    ...
Here, velocity is computed as the change in total translation divided by the change in time. The translation is relative to the position of the image when the manipulation started. Alternatively, e.Velocities can be used to access the velocity passed into the event directly. Remember that unlike speed (a scalar), velocity is a vector with both magnitude and direction; the signs of vel_x and vel_y capture this.
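To make the friction idea concrete, here is a sketch of linear damping in Python (all names and parameter values are our own illustrations; the game itself would do this in C# on each frame tick):

```python
def fling(x, y, vel_x, vel_y, decel=0.01, dt=16.0):
    """Advance a flung tile until linear damping brings it to rest.

    vel_x/vel_y are in pixels per millisecond (as computed in
    manip_complete); dt is the frame time in ms; decel is the speed
    each component loses per frame. Returns the resting position.
    """
    while vel_x != 0.0 or vel_y != 0.0:
        x += vel_x * dt
        y += vel_y * dt
        # shrink each component toward zero by a fixed step: linear damping
        vel_x = max(abs(vel_x) - decel, 0.0) * (1 if vel_x > 0 else -1)
        vel_y = max(abs(vel_y) - decel, 0.0) * (1 if vel_y > 0 else -1)
    return x, y

x, y = fling(0.0, 0.0, 0.5, 0.0)   # tile flung to the right
print(round(x))                    # tile glides right, then stops
```

Each frame the tile advances by velocity times frame time while the speed loses a fixed amount; swapping the fixed decrement for a multiplicative factor would give exponential rather than linear decay.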
Summary
Over the course of this case study we took a look at how a developer can enable touch in Windows 8 Metro style apps. We started by defining a touch event for a UI element in XAML, and then provided an overview of the supporting code "glue" (the C# code-behind). The discussion started with the simple case of detecting a single tap.
The Metro style is well optimised and convenient for developing touch apps, thanks to its extensive APIs for the purpose. The VS IDE, too, is simple to use, both for its API library and for its runtime debugging capabilities. This new software bundle has plenty of headroom for the creative thinker looking to build the most appealing user experience from both a UI and a touch sensor perspective. We showed an exciting example: using the touch APIs to deliver a great learning game for children. Now, go have fun!
03 *Sensor and location apps
>>With Windows 8 closely supporting various types of sensors, there's more to explore than what meets the eye.
We as developers have multiple API choices to program sensors on Windows 8. The new touch-friendly app environment, called "Metro style apps", can only access the completely new API library called WinRT. The WinRT sensor API represents a portion of the overall WinRT library.
Traditional Win Forms or MFC-style apps are now called "Desktop apps" because they run in the Desktop Window Manager environment. Desktop apps can use either the native Win32/COM API or a .NET-style API. In both cases, these APIs go through a Windows middleware component called the Windows Sensor Framework. The Windows Sensor Framework defines the Sensor Object Model, and the different APIs "bind" to that object model in slightly different ways.
Differences between Desktop and Metro style application development will be discussed later. For brevity, we will consider only Desktop app development here. Metro style app development is much simpler and has a simpler API; if you are familiar with web development, you can even use JavaScript to build apps.
Sensors
There are many kinds of sensors, but we are interested in the ones required for Windows 8, namely accelerometers, gyroscopes, ambient light sensors, compasses and GPS. Windows 8 represents the physical sensors with object-oriented abstractions, and programmers use APIs to interact with those objects. Developers need only worry about these
Metro and Desktop Sensor frameworks in Windows 8
abstractions and APIs rather than deal with the actual hardware in question.
There are more sensor objects available to developers than pieces of actual hardware. This is because Windows defines some "logical sensor" objects which are pure software and often combine information from multiple physical sensors. This is called "Sensor Fusion."
Sensor Fusion
The physical sensor chips have some inherent natural limitations. For example:
Accelerometers measure linear acceleration, which combines relative motion with the force of Earth's gravity. If you want to know the computer's tilt, you'll have to do some mathematical calculations.
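For illustration, the standard tilt formulas look like this in Python (our own sketch; it assumes a static device whose accelerometer reports gravity components along x, y and z in g):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (degrees) from a static accelerometer reading.

    Standard formulas; valid only when linear acceleration is
    negligible, so that gravity dominates the reading.
    """
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_accel(0.0, 0.0, 1.0))   # flat on a table -> (0.0, 0.0)
```

The moment the device accelerates (a bump, a gesture), the reading mixes motion with gravity and these formulas go wrong, which is exactly why fusion with a gyro is needed.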
Magnetometers measure the strength of magnetic fields, which indicates the direction of the Earth's Magnetic North Pole.
These measurements are subject to an inherent drift problem, which can be corrected using raw data from the gyro, and both measurements depend (in scale) on the tilt of the computer relative to the Earth's surface.
If you really want the computer's heading with respect to the Earth's True North Pole (the Magnetic North Pole is in a different position and moves over time), you need to correct for that as well.
Sensor Fusion means obtaining raw data from multiple physical sensors, especially the accelerometer, gyro and magnetometer; performing mathematical calculations to correct for natural sensor limitations; computing more human-usable data; and representing the results as logical sensor abstractions.
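One common fusion recipe (an illustration of the idea, not necessarily what any given SensorHub firmware implements) is the complementary filter: trust the gyro over short intervals and let the accelerometer's absolute but noisy angle pull out the drift:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: integrate the gyro rate over dt, then nudge the
    result toward the accelerometer-derived angle to cancel gyro drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
for _ in range(100):    # gyro says "not rotating", accel says 10 degrees
    angle = complementary_filter(angle, 0.0, 10.0, 0.01)
print(round(angle, 2))  # converges toward 10
```

With alpha near 1, short-term motion comes almost entirely from the gyro (smooth, drifty), while the accelerometer slowly corrects the absolute reference.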
These transformations have to be implemented somewhere. If your system design has a SensorHub, the fusion operations take place inside the microcontroller firmware. If it does not, they must be done inside one or more device drivers that the IHVs and/or OEMs provide.
Why Build Sensor Apps
It's important to understand why building sensor-based apps even matters. So let's start with the basic question: what is a sensor?
A sensor is any device that can measure an external condition (temperature, pressure, location) and convert it to measurable data. We humans have numerous sensors in our bodies that give our brains information about external and internal conditions. Our eyes detect light and construct pictures out of it, our ears detect sound vibrations, our noses detect the presence of certain chemicals, and our skin feels pressure and temperature. There are other things we can't sense, however: magnetism, for example, while a large range of the light spectrum, such as infrared, microwave, ultraviolet and radio waves, is invisible to us.
Likewise, a computer needs some way to access information about external conditions and react accordingly. We use the keyboard and mouse to input data directly, but there are many other external conditions the computer could or should know about.
Computers have, for quite some time now, included temperature sensors on components such as CPUs, hard drives and graphics cards. These sensors help keep the temperature of those components in check. If the CPU or GPU gets too hot, the speed of its fan, or other cooling mechanism, is increased automatically. If it gets hotter still, the computer might shut down, or at the very least warn the user so that unnecessary damage can be avoided.
A number of monitors and televisions tout the ability to detect ambient
light conditions and change the brightness / contrast and other parameters
for the best quality and energy savings. Some headphones can detect external
noise and compensate accordingly to give a clean listening experience.
These were just some common examples; sensors collect a whole range of data about external conditions, such as the location, orientation and acceleration of the device, among others. All of the sensors mentioned here are specialised for a single device and a single purpose. That fancy monitor may measure ambient light, but it does not pass the information on to the computer for an application to take advantage of. The temperature sensors in a computer are attached to the CPUs, hard drives and graphics cards, so they are of no use for reporting the room temperature.
Currently, computers are deprived of many of the senses humans have, even though they are capable of much, much more. Newer devices such as Intel Ultrabooks feature a number of sensors, allowing a wide range of interactions with the world around the computer that were never possible before.
An IDE could change the theme of the code editor to adapt to ambient lighting conditions, or a cookbook application could detect the room temperature and recommend a nice cool cocktail instead of a soup. Sensors
such as accelerometers and gyroscopes can make your application respond in much more natural ways by making it sensitive to the movement and tilt of the device. When you take into account that there are multiple sensors you can access, the range of interactions increases even more. The possibilities are limited only by your imagination.
Windows 7 included native support for sensors via a Sensor API that allowed standardised access to the sensors attached to the computer. Standardisation is important so that each application need not be designed to support each brand of each type of sensor. With a standardised API, software can be written in a manner that allows it to use current and future sensors.
Of course, software still needs to be written to take advantage of a particular sensor type, but it no longer has to worry about the manufacturer of that sensor. The Sensor API in Windows 7 provides a standard way to access data from any sensor. It standardises the sensor categories, types and properties; the data formats for the different sensor types; the COM interfaces for working with sensors; and the events for asynchronously accessing sensor data.
All the advantages of sensor- and touch-based applications disappear, of course, if your application will never run on a computer that has a touch screen or any sensors installed. While Windows 7 was multi-touch-enabled and had support for sensors, few applications took advantage of these features, and there are several reasons for that. Firstly, while Windows 7 was multi-touch capable, its UI isn't optimised for touch, which makes it quite infeasible to use on a tablet, even though touch-based applications can be developed for it. Secondly, there were very few Windows 7 computers available with a touchscreen, mostly because of the first reason. So there was little incentive for developers to build touch-based apps, because the market for such applications was incredibly small. The story with sensor support is similar: few Windows 7 computers actually had any sensors, so again there was little incentive for developing Windows 7 apps that took advantage of them.
Now that Windows 8 is around the corner, things are quite different. Windows 8 has a much better touch-focused UI and an entirely new paradigm for developing touch-based applications. The most important factor, though, is that it will ship on many more tablets and touch-enabled computers, including Ultrabooks. Intel's requirements for Ultrabooks include support for standard sensors such as accelerometers, gyroscopes, GPS, ambient
light sensors, digital compass etc.
Now that there is a reasonable expectation that the computer your Windows application runs on will have a touch screen and sensors, especially if it is a Metro application, it makes sense to enhance the features of the application with sensor support.
If you already have a Windows 7 application that doesn't translate to Metro, you can still take advantage of touch and sensor support, since both are available to non-Metro applications as well.
About the Sensor API
Technically, anything that provides data about physical phenomena can be
called a sensor, and while we tend to think of sensors as hardware devices,
sensors can also be logical, emulated via software or firmware. Furthermore,
a single hardware device can contain multiple sensors. A video game con-
troller is a perfect example. The Wii remote includes sensors that can detect
the movement and the position of a user’s hand, and use that position and
movement data to control the game.
The Windows Sensor and Location platform categorises sensors into broad
classes of sensor devices, and types, which represent specific kinds of
sensors. The previously mentioned video game remote, in this case, would be
categorised as an "Orientation sensor", with a type of "3-D Accelerometer".
Each category and type in Windows can be represented in code by a globally
unique identifier (GUID). Many of these are predefined, and device
manufacturers can create new categories and types by defining and publishing
new GUIDs when required.
Another category would be Location devices. This would include not
only GPS sensors, but also software-based sensors such as mobile phone
triangulation systems which gain location information from the mobile
service provider; or WiFi-based positioning, which gets location information
from the wireless network, or even geolocation via IP addresses. These are
all location sensors.
The Windows Sensor and Location platform consists of the following
developer and user components:
The device driver interface (DDI) enables Windows to provide a standard
way for sensor devices to connect to the computer and to provide data to
other subsystems.
The Windows Sensor API provides a set of methods, properties, and
events to work with connected sensors and sensor data.
The Windows Location API, which is built on the Windows Sensor API,
provides a set of programming objects, including scripting objects,
for working with location information.
The Location and Other Sensors Control Panel enables computer
administrators to enable sensors, including location sensors, for each user.
Identifying sensors
To manipulate a sensor, you need a system to identify and refer to it. The
Windows Sensor Framework defines a number of categories that sensors
are grouped into. It also defines a large number of specific sensor types:
Accelerometer, Gyro, Compass, and Ambient Light are the required “real/
physical” sensors. Device Orientation and Inclinometer are the required “vir-
tual/fusion” sensors (note that the Compass also includes fusion-enhanced/
tilt-compensated data).
GPS is a required sensor if you have a WWAN radio, otherwise GPS is
optional. Human Proximity is an oft-mentioned possible addition to the
required list, but, for now, it’s not required.
These names of the categories and types are nice, human-readable
forms. However, for programming, you’ll need to know the programming
constants for each type of sensor. All of these constants are actually just
numbers called GUIDs (Globally Unique IDs).
Category: Motion
• Win32/COM Constant Name: SENSOR_CATEGORY_MOTION
• .NET Constant Name: SensorCategories.SensorCategoryMotion
• GUID: {CD09DAF1-3B2E-4C3D-B598-B5E5FF93FD46}
You can see a table of different sensor categories, their corresponding
Win32/COM/.NET constants and GUIDs in the appendices to this book.
At first you might think that the GUIDs are silly and tedious, but there
is one good reason for using them: extensibility. Since the APIs don’t care
about the actual sensor names (they just pass GUIDs around), it is possible
for vendors to invent new GUIDs for “value add” sensors.
Microsoft provides a tool in Visual Studio that allows anyone to generate
new GUIDs. All the vendor has to do is publish them, and new function-
ality can be exposed without the need to change the Microsoft APIs or any
operating system code at all.
Sensor Manager Object
The Sensor Manager Object can be used to get a list of sensors based
on our specified criteria. So if our app needs access to all location sen-
sors, this is what we would use to get a list of all the sensors that provide
location information.
Ask by Type
Let's say your app asks for a specific type of sensor, such as Gyrometer3D.
The Sensor Manager will consult the list of sensor hardware present on
the computer and return a collection of matching objects bound to that
hardware. It returns a Sensor Collection object, which can have 0, 1, or more
Sensor objects. However, usually it has only one. Below is a C++ code sample
illustrating the use of the Sensor Manager object’s GetSensorsByType
method to search for 3-axis Gyros and return them in a Sensor Collection.
Note that you have to ::CoCreateInstance() the Sensor Manager Object first.
// Additional includes for sensors
#include <InitGuid.h>
#include <SensorsApi.h>
#include <Sensors.h>

// Create a COM interface to the SensorManager object.
ISensorManager* pSensorManager = NULL;
HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL,
    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorManager));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}

// Get a collection of all 3-axis Gyros on the computer.
ISensorCollection* pSensorCollection = NULL;
hr = pSensorManager->GetSensorsByType(SENSOR_TYPE_GYROMETER_3D,
    &pSensorCollection);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to find any Gyros on the computer."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Ask by Category
Your app can ask for sensors by category, such as all motion sensors. The
Sensor Manager consults the list of sensor hardware on the computer and
returns a collection of motion objects bound to that hardware. The Sens-
orCollection may have 0, 1, or more objects in it. On most computers, the
collection will have two motion objects: Accelerometer3D and Gyrometer3D.
The C++ code sample below illustrates the use of the Sensor Manager
object’s GetSensorsByCategory method to search for motion sensors and
return them in a sensor collection.
// Additional includes for sensors
#include <initguid.h>
#include <sensorsapi.h>
#include <sensors.h>

// Create a COM interface to the SensorManager object.
ISensorManager* pSensorManager = NULL;
HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL,
    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorManager));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}

// Get a collection of all motion sensors on the computer.
ISensorCollection* pSensorCollection = NULL;
hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_MOTION,
    &pSensorCollection);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to find any motion sensors on the computer."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Ask by Category "All"
In practice, the most efficient way is for your app to ask for all of the sensors
on the computer. The Sensor Manager consults the list of sensor hardware
on the computer and returns a collection of all the objects bound to that
hardware. The Sensor Collection may have 0, 1, or more objects in it. On
most computers, the collection will have seven or more objects.
C++ does not have a GetAllSensors call, so you must use
GetSensorsByCategory(SENSOR_CATEGORY_ALL, …) instead as shown
in the sample code below.
// Additional includes for sensors
#include <initguid.h>
#include <sensorsapi.h>
#include <sensors.h>

// Create a COM interface to the SensorManager object.
ISensorManager* pSensorManager = NULL;
HRESULT hr = ::CoCreateInstance(CLSID_SensorManager, NULL,
    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorManager));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to CoCreateInstance() the SensorManager."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}

// Get a collection of all sensors on the computer.
ISensorCollection* pSensorCollection = NULL;
hr = pSensorManager->GetSensorsByCategory(SENSOR_CATEGORY_ALL,
    &pSensorCollection);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to find any sensors on the computer."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Sensor Life Cycle
On Windows, as with most hardware devices, sensors are treated as Plug
and Play devices. This means that you should look out for cases where a
sensor is disconnected or connected while your application is running.
At first you might say, “The sensors are hard-wired on the computer’s
motherboard, why do we have to worry about Plug and Play if they’ll never
be plugged in or unplugged?" There are a few different scenarios where this
occurs:
Not all sensors have to be internal. It is possible to have USB-based
sensors external to the system and plugged into a USB port. In this case it
is entirely possible that the user might start your app and then plug in the
needed sensor, or disconnect the sensor while the app is running.
Some sensors connect by other means, such as over Ethernet, or
wirelessly, such as via Bluetooth. These are unreliable connection
interfaces, where connects and disconnects happen.
If and when Windows Update upgrades the device driver for the sensors,
they appear to disconnect and then reconnect.
When Windows shuts down (to S4 or S5), the sensors appear to disconnect.
In the context of sensors, a Plug and Play connect is called an Enter
event, and disconnect is called a Leave event. A good resilient application
will need to be able to handle both.
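The resilience this demands can be sketched platform-neutrally. The `SensorTracker` class below is an illustration of the pattern, not part of the Windows API: the app caches at most one sensor reference, takes it on Enter, and drops it the moment the matching Leave arrives so it never dereferences a stale sensor.

```cpp
#include <cassert>
#include <string>

// Portable sketch (not the real ISensorManagerEvents interface): an app
// that caches a sensor reference must be ready for it to appear (Enter)
// or vanish (Leave) at any time while the app is running.
class SensorTracker {
public:
    // Called when a sensor is plugged in or its driver comes back.
    void OnEnter(const std::string& sensorId) { activeSensor_ = sensorId; }

    // Called when the sensor is unplugged, its driver is updated,
    // or the machine goes to S4/S5: drop the now-stale reference.
    void OnLeave(const std::string& sensorId) {
        if (sensorId == activeSensor_) activeSensor_.clear();
    }

    bool HasSensor() const { return !activeSensor_.empty(); }

private:
    std::string activeSensor_;  // empty means "no sensor available"
};
```

Guarding every use behind a check like `HasSensor()` is what keeps the app from crashing when a USB or Bluetooth sensor drops out mid-session.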
Enter Event
Your app may already be running at the time a sensor is plugged in. When
this happens, the Sensor Manager reports the sensor Enter event. Note:
if the sensors are already plugged in when your app starts running, you
will not get Enter events for those sensors. In C++/COM, you must use the
SetEventSink method to hook the callback. The callback cannot simply be a
function; it must be an entire class that inherits from ISensorManagerEvents,
and also implements IUnknown. The ISensorManagerEvents interface
must have a callback function implementation for:

STDMETHODIMP OnSensorEnter(ISensor *pSensor, SensorState state);

// Hook the SensorManager for any SensorEnter events.
SensorManagerEventSink* pSensorManagerEventClass =
    new SensorManagerEventSink();   // create C++ class instance
ISensorManagerEvents* pSensorManagerEvents = NULL;
// get the ISensorManagerEvents COM interface pointer
HRESULT hr = pSensorManagerEventClass->QueryInterface(
    IID_PPV_ARGS(&pSensorManagerEvents));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot query ISensorManagerEvents interface for our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// hook the COM interface of our class to the SensorManager eventer
hr = pSensorManager->SetEventSink(pSensorManagerEvents);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot SetEventSink on SensorManager to our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Code: Hook Callback for Enter event
Below is the C++/COM equivalent of the Enter callback. You would
normally perform all the initialisation steps from your main loop in this
function. In fact, it is more efficient to refactor your code so your main loop
merely calls OnSensorEnter to simulate an Enter event.
STDMETHODIMP SensorManagerEventSink::OnSensorEnter(
    ISensor *pSensor, SensorState state)
{
    // Examine the SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX.
    VARIANT_BOOL bSupported = VARIANT_FALSE;
    HRESULT hr = pSensor->SupportsDataField(
        SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Cannot check SupportsDataField for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX."),
            _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
        return hr;
    }
    if (bSupported == VARIANT_FALSE)
    {
        // This is not the sensor we want.
        return -1;
    }
    ISensor *pAls = pSensor; // It looks like an ALS, memorize it.
    ::MessageBox(NULL, _T("Ambient Light Sensor has entered."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
    ...
    return hr;
}
Code: Callback for Enter event
Leave Event
The individual sensor reports when the Leave event happens (not the
Sensor Manager). This code is actually the same as the previous hook
callback for an Enter event.
// Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
SensorEventSink* pSensorEventClass = new SensorEventSink(); // create C++ class instance
ISensorEvents* pSensorEvents = NULL;
// get the ISensorEvents COM interface pointer
HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// hook the COM interface of our class to the Sensor eventer
hr = pSensor->SetEventSink(pSensorEvents);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Code: Hook Callback for Leave event
The OnLeave event handler receives the ID of the leaving sensor as
an argument.
STDMETHODIMP SensorEventSink::OnLeave(REFSENSOR_ID sensorID)
{
    HRESULT hr = S_OK;
    ::MessageBox(NULL, _T("Ambient Light Sensor has left."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONINFORMATION);
    // Perform any house-keeping tasks for the sensor that is leaving.
    // For example, if you have maintained a reference to the sensor,
    // release it now and set the pointer to NULL.
    return hr;
}
Code: Callback for Leave event
Picking Sensors for Your App
We care about sensors because of what they tell us. Different types of sen-
sors tell us different things. Microsoft calls these pieces of information
Data Fields, and they are grouped together in a SensorDataReport. Your
computer may (potentially) have more than one type of sensor that can tell
your app the information you care about. Your app probably doesn’t care
which sensor it gets the information from, so long as it can get it.
To ease the task for developers, the Sensor API provides constant names
for commonly-used Data Fields. These are human-readable names for what
are really just big numbers underneath. This provides for extensibility of
Data Fields beyond the “well known” ones Microsoft has pre-defined. There
are many other “well known” IDs for you to explore.
You can look up a table of Data Field identifier constants in the appen-
dices to this book.
One thing that makes Data Field identifiers different from sensor IDs is
the use of a data type called PROPERTYKEY. A PROPERTYKEY consists of
a GUID (similar to what sensors have), plus an extra number called a “PID”
(property ID). You might notice that the GUID part of a PROPERTYKEY
is common for sensors that are in the same category. Data Fields have a
native data type for all of their values, such as Boolean, unsigned char, int,
float, double, and so on.
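The GUID-plus-PID shape can be sketched in portable C++. The `PropKey` struct below mirrors the shape of PROPERTYKEY but is not the real Windows definition, and the byte values are invented for illustration; the point is that two Data Fields in the same category share the GUID part and differ only in the PID:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Portable stand-in for the Windows PROPERTYKEY: a 16-byte GUID (fmtid)
// shared by all Data Fields in one category, plus a property ID (pid)
// that distinguishes the individual field.
struct PropKey {
    uint8_t  fmtid[16];
    uint32_t pid;
};

inline bool SameCategoryGuid(const PropKey& a, const PropKey& b) {
    return std::memcmp(a.fmtid, b.fmtid, 16) == 0;
}

// Two hypothetical motion Data Fields: same fmtid, different pid.
const PropKey kAccelX = {{0xC0,0x5B,1,2,3,4,5,6,7,8,9,10,11,12,13,14}, 2};
const PropKey kAccelY = {{0xC0,0x5B,1,2,3,4,5,6,7,8,9,10,11,12,13,14}, 3};
```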
In Win32/COM, the value of a Data Field is stored in a polymorphic data
type called PROPVARIANT. In .NET, there is a CLR (Common Language
Runtime) data type called "object" that does the same thing. You have to query
and/or typecast the polymorphic data type to the "expected"/"documented"
data type.
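The check-the-tag-before-reading discipline looks roughly like this in portable C++. The `Variant` type below is a deliberately tiny stand-in for PROPVARIANT, not the real Windows type; it mirrors the pv.vt check that appears in the DataUpdated sample later in this chapter:

```cpp
#include <cassert>
#include <stdexcept>

// Minimal sketch of a polymorphic value like Win32's PROPVARIANT:
// a type tag (vt) plus a union of possible payloads.
enum VarType { VT_EMPTY, VT_R4, VT_R8, VT_BOOL };

struct Variant {
    VarType vt = VT_EMPTY;
    union { float fltVal; double dblVal; bool boolVal; };
};

// Read a float Data Field, verifying the tag against the documented
// type first instead of blindly trusting the payload.
inline float ReadFloatField(const Variant& pv) {
    if (pv.vt != VT_R4)
        throw std::runtime_error("unexpected data type in variant");
    return pv.fltVal;
}
```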
Use the SupportsDataField() method of the sensor to check the sensors
for the Data Fields of interest. This is the most common programming
idiom that we use to select sensors. Depending on the usage model of your
app, you may only need a subset of the Data Fields, not all of them. Pick
the sensors you want based on whether they support the Data Fields you
need. Note that you also need to use type casting to assign the sub-classed
member variables from the base class sensor.
ISensor* pSensor = NULL;
ISensor* m_pAls = NULL;
ISensor* m_pAccel = NULL;
ISensor* m_pTilt = NULL;

// Cycle through the collection looking for sensors we care about.
ULONG ulCount = 0;
HRESULT hr = pSensorCollection->GetCount(&ulCount);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to get count of sensors on the computer."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
for (int i = 0; i < (int)ulCount; i++)
{
    hr = pSensorCollection->GetAt(i, &pSensor);
    if (SUCCEEDED(hr))
    {
        VARIANT_BOOL bSupported = VARIANT_FALSE;
        hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &bSupported);
        if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAls = pSensor;
        hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &bSupported);
        if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pAccel = pSensor;
        hr = pSensor->SupportsDataField(SENSOR_DATA_TYPE_TILT_Z_DEGREES, &bSupported);
        if (SUCCEEDED(hr) && (bSupported == VARIANT_TRUE)) m_pTilt = pSensor;
        ...
    }
}
Code: Use the SupportsDataField() method of the sensor to check for
supported data field
Sensor Properties
In addition to Data Fields, sensors have Properties that can be used for
identification and configuration. Just like Data Fields, Properties have
constant names used by Win32/COM and .NET, and those constants are
really PROPERTYKEY numbers underneath. Properties are extensible
by vendors and also have PROPVARIANT polymorphic data types.
Unlike Data Fields that are read-only, Properties have the ability to be
Read/Write. It is up to the individual sensor’s discretion as to whether or
not it rejects Write attempts. As an app developer, you need to perform
write-read-verify because no exception is thrown when a write attempt
fails. A table of commonly used sensor Properties and PIDs is available
in the appendix.
Setting Sensor Sensitivity
The sensitivity setting is probably the most useful Property of a sensor.
It can be used to assign a threshold that controls or filters the number of
SensorDataReports sent to the host computer. In this way, traffic can be
reduced: only send up those DataUpdated events that are truly worthy
of bothering the host CPU. The way Microsoft has defined the data type
of this Sensitivity property is a little unusual. It is a container type called
IPortableDeviceValues in Win32/COM and SensorPortableDeviceValues
in .NET. This container holds a collection of tuples, each of which is a Data
Field PROPERTYKEY followed by the sensitivity value for that Data Field.
The sensitivity always uses the same units of measure and data type as the
matching Data Field.

// Configure sensitivity
// create an IPortableDeviceValues container for holding the
// <Data Field, Sensitivity> tuples.
IPortableDeviceValues* pInSensitivityValues;
hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL,
    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInSensitivityValues));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// fill in the container contents: 0.1 G sensitivity in each of the X, Y, and Z axes.
PROPVARIANT pv;
PropVariantInit(&pv);
pv.vt = VT_R8;              // COM type for (double)
pv.dblVal = (double)0.1;
pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_X_G, &pv);
pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Y_G, &pv);
pInSensitivityValues->SetValue(SENSOR_DATA_TYPE_ACCELERATION_Z_G, &pv);
// create an IPortableDeviceValues container for holding the
// <SENSOR_PROPERTY_CHANGE_SENSITIVITY, pInSensitivityValues> tuple.
IPortableDeviceValues* pInValues;
hr = ::CoCreateInstance(CLSID_PortableDeviceValues, NULL,
    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pInValues));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to CoCreateInstance() a PortableDeviceValues collection."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// fill it in
pInValues->SetIPortableDeviceValuesValue(SENSOR_PROPERTY_CHANGE_SENSITIVITY,
    pInSensitivityValues);
// now actually set the sensitivity
IPortableDeviceValues* pOutValues;
hr = pAls->SetProperties(pInValues, &pOutValues);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to SetProperties() for Sensitivity."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// check to see if any of the setting requests failed
DWORD dwCount = 0;
hr = pOutValues->GetCount(&dwCount);
if (FAILED(hr) || (dwCount > 0))
{
    ::MessageBox(NULL, _T("Failed to set one-or-more Sensitivity values."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
PropVariantClear(&pv);
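The effect of a sensitivity threshold can be sketched in platform-neutral C++. The filtering below actually happens in the driver or firmware, and the class and field names are invented for illustration; the idea is that a report is delivered only when a field has moved by at least its configured threshold since the last delivered value:

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>

// Portable sketch of the <Data Field, sensitivity> idea: remember the
// last delivered value per field, and only "send" a new report when a
// field moves by at least its configured threshold.
class SensitivityFilter {
public:
    void SetSensitivity(const std::string& field, double threshold) {
        threshold_[field] = threshold;
    }

    // Returns true if this reading is significant enough to deliver.
    bool ShouldDeliver(const std::string& field, double value) {
        auto it = last_.find(field);
        bool deliver = (it == last_.end()) ||
            std::fabs(value - it->second) >= threshold_[field];
        if (deliver) last_[field] = value;   // new baseline for comparison
        return deliver;
    }

private:
    std::map<std::string, double> threshold_;
    std::map<std::string, double> last_;
};
```

This is why raising the sensitivity value cuts DataUpdated traffic: small jitters around the last delivered value never reach the host CPU.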
Requesting permissions for Sensors
The end user may consider the information provided by sensors to be
sensitive, i.e., Personally Identifiable Information (PII). Data Fields such
as the computer’s location (e.g., latitude and longitude), could be used to
track the user. Therefore, before use, Windows forces apps to get end-user
permission to access the sensor. Use the State property of the sensor and
the RequestPermissions() method of the SensorManager if needed.
The RequestPermissions() method takes an array of sensors as an argu-
ment, so you can ask for permission for more than one sensor at a time if
you want. The C++/COM code is shown below. Note that you must provide
an (ISensorCollection *) argument to RequestPermissions().

// Get the sensor's state
SensorState state = SENSOR_STATE_ERROR;
HRESULT hr = pSensor->GetState(&state);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Unable to get sensor state."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// Check for access permissions, request permission if necessary.
if (state == SENSOR_STATE_ACCESS_DENIED)
{
    // Make a SensorCollection with only the sensors we want
    // to get permission to access.
    ISensorCollection *pSensorCollection = NULL;
    hr = ::CoCreateInstance(CLSID_SensorCollection, NULL,
        CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pSensorCollection));
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("Unable to CoCreateInstance() a SensorCollection."),
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
    pSensorCollection->Clear();
    pSensorCollection->Add(pAls); // add 1 or more sensors to request permission for...
    // Have the SensorManager prompt the end-user for permission.
    hr = m_pSensorManager->RequestPermissions(NULL, pSensorCollection, TRUE);
    if (FAILED(hr))
    {
        ::MessageBox(NULL, _T("No permission to access sensors that we care about."),
            _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
        return -1;
    }
}
Sensor Data Update
Sensors report data by raising an event called the DataUpdated event.
The actual Data Fields are packaged inside a SensorDataReport, which is
passed to any attached DataUpdated event handlers. Your app can obtain
the SensorDataReport by hooking a callback handler to the sensor’s Data-
Updated event. The event occurs in a Windows Sensor Framework thread,
which is a different thread than the message-pump thread used to update
your app’s GUI. Therefore, you will need to do a “hand-off” of the Sensor-
DataReport from the event handler (Als_DataUpdate) to a separate handler
(Als_UpdateGUI) that can execute on the context of the GUI thread. In .NET,
such a handler is called a delegate function.
The example below shows preparation of the delegate function. In C++/
COM, you must use the SetEventSink method to hook the callback. The
callback cannot simply be a function; it must be an entire class that inherits
from ISensorEvents and also implements IUnknown. The ISensorEvents
interface must have callback function implementations for:

STDMETHODIMP OnEvent(ISensor *pSensor, REFGUID eventID,
    IPortableDeviceValues *pEventData);
STDMETHODIMP OnDataUpdated(ISensor *pSensor, ISensorDataReport *pNewData);
STDMETHODIMP OnLeave(REFSENSOR_ID sensorID);
STDMETHODIMP OnStateChanged(ISensor* pSensor, SensorState state);

// Hook the Sensor for any DataUpdated, Leave, or StateChanged events.
SensorEventSink* pSensorEventClass = new SensorEventSink(); // create C++ class instance
ISensorEvents* pSensorEvents = NULL;
// get the ISensorEvents COM interface pointer
HRESULT hr = pSensorEventClass->QueryInterface(IID_PPV_ARGS(&pSensorEvents));
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot query ISensorEvents interface for our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
// hook the COM interface of our class to the Sensor eventer
hr = pSensor->SetEventSink(pSensorEvents);
if (FAILED(hr))
{
    ::MessageBox(NULL, _T("Cannot SetEventSink on the Sensor to our callback class."),
        _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
    return -1;
}
Code: Set a COM Event Sink for the sensor
The DataUpdated event handler receives the SensorDataReport (and the
sensor that initiated the event) as arguments. It calls the Invoke() method of
the form to post those items to the delegate function. The GUI thread runs
the delegate function posted to its Invoke queue and passes the arguments
to it. The delegate function casts the data type of the SensorDataReport to
the expected subclass, gaining access to its Data Fields. The Data Fields are
extracted using the GetDataField() method of the SensorDataReport object.
Each of the Data Fields has to be typecast to their “expected”/”documented”
data types (from the generic/polymorphic data type returned by the GetDa-
taField() method). The app can then format and display the data in the GUI.
The OnDataUpdated event handler receives the SensorDataReport
(and the sensor that initiated the event) as arguments. The Data Fields
are extracted using the GetSensorValue() method of the SensorDataReport
object. Each of the Data Fields needs to have their PROPVARIANT checked
for their “expected”/”documented” data types. The app can then format and
display the data in the GUI. It is not necessary to use the equivalent of a C#
delegate. This is because all C++ GUI functions (such as ::SetWindowText()
shown here) use Windows message-passing to post the GUI update to the
GUI thread / message-loop (the WndProc of your main window or dialog
box).

STDMETHODIMP SensorEventSink::OnDataUpdated(ISensor *pSensor,
    ISensorDataReport *pNewData)
{
    HRESULT hr = S_OK;
    if ((NULL == pNewData) || (NULL == pSensor)) return E_INVALIDARG;
    float fLux = 0.0f;
    PROPVARIANT pv = {};
    hr = pNewData->GetSensorValue(SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX, &pv);
    if (SUCCEEDED(hr))
    {
        if (pv.vt == VT_R4) // make sure the PROPVARIANT holds a float as we expect
        {
            // Get the lux value.
            fLux = pv.fltVal;
            // Update the GUI
            wchar_t *pwszLabelText = (wchar_t *)malloc(64 * sizeof(wchar_t));
            swprintf_s(pwszLabelText, 64, L"Illuminance Lux: %.1f", fLux);
            BOOL bSuccess = ::SetWindowText(m_hwndLabel, (LPCWSTR)pwszLabelText);
            if (bSuccess == FALSE)
            {
                ::MessageBox(NULL, _T("Cannot SetWindowText on label control."),
                    _T("Sensor C++ Sample"), MB_OK | MB_ICONERROR);
            }
            free(pwszLabelText);
        }
    }
    PropVariantClear(&pv);
    return hr;
}
You can just reference properties of the SensorDataReport object to
extract Data Fields from the SensorDataReport. This only works for the
.NET API (in the Win32/COM API, you must use the GetDataField method),
and for “well known” or “expected” Data Fields of that particular Sensor-
DataReport subclass. It is possible (using something called “Dynamic Data
Fields”) for the underlying driver/firmware to “piggyback” any “extended/
unexpected” Data Fields inside SensorDataReports. To extract those, you
must use the GetDataField method.
Using Sensors in Metro Style Apps
For Metro/WinRT, the API is much simpler. You can use simple methods
to get the sensor object for the sensor of your choice and test for null.
For example, if you needed to use an accelerometer in your Metro style
application, you would simply use:

accelerometer = Accelerometer.GetDefault();
If this accelerometer object is null, you will know that the sensor is not
present on the system. Other examples are:
als = LightSensor.GetDefault();
simpleorientation = SimpleOrientationSensor.GetDefault();
orientation = OrientationSensor.GetDefault();
inclinometer = Inclinometer.GetDefault();
compass = Compass.GetDefault();
gyrometer = Gyrometer.GetDefault();
Geolocation is a bit different:
geo = new Geolocator();

Unlike Desktop mode, the Metro/WinRT Sensor API follows a common
template for each of the sensors:
There is usually a single event called ReadingChanged that calls the
callback with an xxxReadingChangedEventArgs containing a Reading
object holding the actual data. (The accelerometer is an exception: it also
has a Shaken event.)
The hardware-bound instance of the sensor class is retrieved using the
GetDefault() method.
Polling can be done with the GetCurrentReading() method.
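The GetDefault() pattern can be sketched in portable C++. The `FakeGyrometer` class below is a made-up stand-in for the WinRT sensor classes, not the real API: a single hardware-bound instance is handed back, or a null pointer when the sensor is absent, so every caller must test for null.

```cpp
#include <cassert>
#include <cstddef>

// Portable sketch of the WinRT template (not the real
// Windows.Devices.Sensors classes): GetDefault() returns the one
// hardware-bound instance, or nullptr when no such sensor exists.
class FakeGyrometer {
public:
    static FakeGyrometer* GetDefault() {
        static bool present = true;       // pretend the hardware exists
        static FakeGyrometer instance;
        return present ? &instance : nullptr;
    }

    // Stand-in for the polling entry point, GetCurrentReading().
    double GetCurrentReading() const { return 0.0; }

private:
    FakeGyrometer() = default;            // only GetDefault() creates one
};
```

A null check after GetDefault() replaces the whole search-by-type machinery of the Desktop API.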
Metro style apps are typically written either in JavaScript or in C#. There
are different language-bindings to the API, which result in a slightly dif-
ferent capitalisation appearance in the API names and a slightly different
way that events are handled.
SensorManager
Pros: There is no SensorManager to deal with. Apps use the GetDefault()
method to get an instance of the sensor class.
Cons: It is not possible to search for arbitrary sensor instances. If more than
one of a particular sensor type exists on a computer, you will only see the
“first” one.
It is not possible to search for arbitrary sensor types or categories by
GUID. Vendor value-add extensions are inaccessible.
Events
Pros: Apps only worry about the ReadingChanged event.
Cons: Apps have no access to Enter, Leave, StateChanged, or arbitrary event
types. Vendor value-add extensions are inaccessible.
Sensor properties
Pros: Apps only worry about the ReportInterval property.
Cons: Apps have no access to the other properties, including the most useful
one: Sensitivity.
Other than manipulating the ReportInterval property, there is no way
for Metro style apps to tune or control the flow rate of Data Reports.
Apps cannot access arbitrary Properties by PROPERTYKEY. Vendor
value-add extensions are inaccessible.
Data Report properties
Pros: Apps only worry about a few pre-defined Data Fields unique to each
sensor.
Cons: Apps have no access to other Data Fields. If sensors "piggy-back" addi-
tional well-known Data Fields in a Data Report beyond what Metro style
apps expect, the Data Fields are inaccessible.
Apps cannot access arbitrary Data Fields by PROPERTYKEY. Vendor
value-add extensions are inaccessible. Apps have no way to query at
runtime what Data Fields a sensor supports; they can only assume what
the API pre-defines.
Summary
Windows 8 APIs provide developers an opportunity to take advantage of
sensors available on different platforms under both the traditional Desktop
mode and the new Metro style app interface. We presented an overview of
the sensor APIs available to developers looking to create applications with
Windows 8, focusing on the APIs and code samples for Desktop mode apps.
Sensors and Privacy
Many sensors are capable of capturing rather sensitive data, and so user
privacy is something that needs to be taken into account while using this
data. Few people will be bothered if the world knows how much light was
detected by their ambient light sensor, but the GPS is capable of giving
away their exact location, and that is something the developers and users
need to be careful about.
The Windows Sensor and Location platform includes privacy settings
to help protect users’ personal information.
The platform helps to ensure that sensor data remains private, when
privacy is required, in the following ways: By default, sensors are off. Because
the platform design presumes that any sensor can provide personal data,
each sensor is disabled until the user provides explicit consent to access
the sensor data.
Windows provides disclosure messages and Help content for the user.
This content helps users understand how sensors can affect the privacy of
their personal data and helps users make informed decisions.
Providing permission for a sensor requires administrator rights.
When it is enabled, a sensor device works for all programs running
under a particular user account (or for all user accounts). This includes
non-interactive users and services, such as ASP.NET or SYSTEM. For
example, if you enable a GPS sensor for your user account, only programs
running under your user account have access to the GPS. If you enable
the GPS for all users, any program running under any user account has
access to the GPS.
Programs that use sensors can call a method to open a system dialog
box that prompts users to enable needed sensor devices. This feature
makes it easy for developers and users to make sure that sensors work
when programs need them, while maintaining user control of disclosure
of sensor data.
Sensor drivers use a special object that processes all I/O requests. This
object makes sure that only programs that have user permission can access
sensor data.
Of course, while the platform itself takes care of some things, the application
developer should still give the user complete agency. The user should
be able to control how much, if any, of their sensor data they want to give to
the application, and, if the application is able to share such data with others,
how much and with whom it is shared.
On the desktop platform, which is quite unrestrained, this burden often
lies on the developer rather than on the platform itself.
04*Under the hood
>>In addition to touch, Intel Ultrabooks support several sensor types.
Incorporating a sensor into the design of an application
isn't very hard nowadays, thanks to the standard sensor
APIs available in Windows 8 Metro and desktop. The
more interesting question is how the sensor is used, and
what effect it has.
There are a number of experiences that are only possible
with the use of sensors; a pedometer app is a good example. A pedometer
is a device that can give you a somewhat accurate count of the number
of steps you take while walking. It uses the data from an accelerometer to
detect the movement of the body as it takes each step. As such, it would not
be possible without an accelerometer.
The majority of applications, however, are those that take advantage of
sensors to enhance the user experience: shaking a device to shuffle the
playlist, undo a change, or reset a puzzle. These are enhancements, and it
is generally not a good idea to have parts of an application's functionality
depend solely on the presence of a sensor.
In some cases the interaction of a sensor with your application is simple.
If you are using the orientation APIs to see whether the device is being
held in the landscape or portrait orientation, the interaction is simple: you
change the UI to best suit how the user is holding the device. This isn't to
say that designing a portrait version of a landscape application is simple,
just that the use case is very clear.
When it comes to many of the other sensors, it isn’t that simple. For
example, if you are developing a calculator application, how do you take
advantage of the GPS? Do you provide a feature to calculate the distance
from your location to other places? Will the users even use this feature?
Will they even look for it in a calculator?
If the user shakes the device what does that mean? Should it pause
playback? Shuffle the playlist? On a phone it might mean something else,
but you can't expect people to shake an Ultrabook! On something like an
Ultrabook the use for an accelerometer is probably more subtle.
Some apps can organically integrate the data from the sensors. For
example, if your application previously had an option to switch between
landscape and portrait views, you already know what to do. Similarly, if
your kind of application has brightness controls, perhaps some of that can
be automated using data from the ambient light sensor. If your application
needs information about the user's location at all, the GPS can help
automate that, rather than asking the user.
What we see is quite similar to what happened as the internet became
popular. There were some apps, such as email clients, that were built around
the internet; other apps took advantage of the internet to simplify previously
difficult tasks such as collaboration or even software updates; yet
others simply added a share button. Some applications will be built
on the very presence of a sensor—we mentioned a pedometer app—others
will include sensors wherever they improve the experience or seem natural,
and yet others will simply add them because the feature exists.
It is important to understand what the different sensors in devices like
the Ultrabook have to offer, so you can judge what is most natural in your
application. If you intend to incorporate sensors in your application, you
need to know which sensors are available and what they are capable of.
Accelerometer: An accelerometer measures the acceleration of the device
in any direction. We go into much more detail in the case study later in
this chapter.
Gyroscope: Just as an accelerometer can detect linear movement in
any direction, a gyroscope can detect angular movement about
any axis. If you are making a racing game where you want the user to be
able to use the phone as a steering wheel to turn the car, the gyroscope will
be your best bet.
GPS: A GPS or Global Positioning System is a sensor that is pretty much
useless by itself. It relies on a global network of satellites surrounding the
Earth, and contacts a number of satellites in order to get a fix on your posi-
tion on Earth. Even then what you get from a GPS is merely a latitude and
longitude (and altitude as well, along with information about the satellites
contacted). You will still need to make use of a service such as Google Maps
to get information such as the civic address of the current location, or get
an actual map of a location.
Digital Compass: While the GPS might give you the exact location, it does
not give you information about your orientation—or at least the orientation
of the device—for that you need a digital compass. Like a standard compass,
a digital compass will provide information about the orientation of the
device with respect to magnetic north. Like any compass, it is susceptible to
interference from stronger magnetic influences, such as nearby magnets.
Ambient light sensor: An ambient light sensor can report the level of
lighting in the environment in which the device is being used. This has
some obvious uses. For example, you can adapt the colour scheme of the
UI based on the ambient lighting. LCD screens are notorious for having
poor visibility under bright light, and you can help out the user greatly by
increasing the contrast of the UI in such a situation.
Case Study: Designing an Accelerometer-based App
Accelerometers are hardware sensors embedded in many modern portable
devices, like smartphones and tablets. They measure the acceleration of a
device in multiples of the Earth's gravity at sea level (g). They are widely
used for motion tracking, fall detection, gesture detection, etc. We will cover
the use of accelerometers in Windows 8 Metro style applications, and a case
study on how to use accelerometers to detect tapping on the edges of a tablet.
In addition, the case study highlights how to detect the direction of taps
that result in slight device movement.
Accelerometer
There are many types of accelerometers, for example optical accelerometers,
magnetic induction accelerometers, laser accelerometers, etc. Typically,
modern portable devices have general micro-electro mechanical system
(MEMS)-based accelerometers. We focus on the typical MEMS-based
accelerometers found in portable devices like tablets and smartphones.
This kind of accelerometer has some basic and/or advanced capabilities
depending on the model. In this section we introduce basic and advanced
capabilities and classify several use cases where accelerometers have been
used in mobile apps. But before that let’s get a basic understanding of what
an accelerometer is and how it works.
What is an accelerometer?
An accelerometer is a sensor that is intended to measure, as the name
suggests, the acceleration experienced by the sensor. The readings an
accelerometer provides are often relative to the acceleration due to gravity. This
is called the G-force.
One G-force equals the acceleration due to gravity on Earth at sea level,
or 9.81 m/s². It is the rate of acceleration experienced by any body in free fall
on Earth. If we say that something is experiencing 2 G-force (2 g), that
means its acceleration is twice as much as it would be were it in
free fall on Earth. In other words, it is accelerating at the rate it would on
a planet with twice the gravity of Earth. The gravitational
acceleration is used as the base here since that is the standard acceleration
everything on Earth is experiencing.
An accelerometer is actually three accelerometers, one along each axis, so
that it can detect movement in any direction. If you move the device, it
will register the acceleration it is experiencing along these three axes, and
by looking at the values along these axes you can get a good idea of the
direction in which the device is moving. You can detect patterns, such as
whether the device is shaking, or whether the person is waving or walking.
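Since the three per-axis readings decompose a single acceleration vector, the overall G-force the device experiences is just the Euclidean magnitude of the triple. A minimal sketch of that arithmetic, in Python purely for illustration rather than the C# used later in this chapter:

```python
import math

def magnitude(x, y, z):
    """Overall G-force magnitude from the three per-axis readings (in g)."""
    return math.sqrt(x * x + y * y + z * z)

# A stationary device reads about 1 g in some direction, whatever its
# orientation: flat on a table the reading is roughly (0, 0, -1).
print(magnitude(0.0, 0.0, -1.0))  # -> 1.0
```

Pattern detectors such as shake or step counters typically work on sequences of this magnitude (or of the raw axis values) over time, rather than on a single sample.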
How does it work?
Before we understand how to use an accelerometer in an application and
how to detect gestures using it, let us get a basic understanding of how an
accelerometer works.
There are different kinds of accelerometers that work in different ways;
however, one basic, popular kind is the previously mentioned
micro-electro mechanical system (MEMS) type of accelerometer.
As with any measuring instrument, there needs to be some way to
measure the quantity, and as with any electronic measuring instrument
there needs to be a way to convert the measured quantity into electricity.
As the name suggests, such accelerometers are composed of very small
(micro) mechanical systems, which change when the device is moving. This
change causes a change in the electrical properties, and these changes can
be measured to detect if there is movement in the device.
In this case imagine a spring with a weight attached to it. When it is
stationary, the spring will be a little extended due to gravity. However
when this system is moved, the spring will extend or contract based on
the direction in which it is being moved. In an MEMS accelerometer we
have something equivalent to a spring with a weight attached to it, and we
have a means of measuring the extension in the spring due to movement
or gravity. Add three of these springs in the three axes and you have an
accelerometer. Of course this entire system is so small it could get lost in
your fingernails—you should really cut them.
Capabilities
The capabilities we discuss here are the ones supported by the accelerometer
device drivers, as opposed to capabilities provided by the operating system
or user applications.
What we mean to say is that these are capabilities that the actual
accelerometer sensor hardware possesses. Often, in order to abstract the hardware
from the software or the user, or to expose a standard API for developers,
some of the functionality might be restricted. We will talk about the capabilities
of the device itself, and not the capabilities that the operating system
or framework decides to expose. This approach is important because all
implementations will be a subset of these capabilities.
Basic Capability
The basic capability of a typical MEMS-based accelerometer is to measure
the acceleration with reference to the Earth's gravity at sea level. The
key characteristics of the basic capability, along with the associated
terminologies, are as follows:
Number of axes: the number of acceleration axes in Euclidean space that
the accelerometer outputs. For 2-dimensional positioning, two is enough
and generally the accelerometer outputs x and y directional G-force values.
For 3-dimensional positioning, three axes are needed so the accelerometer
outputs x, y, and z directional G-force values.
Maximum range: the range of the maximum and minimum measurable
G-force values. Generally, it is ±2 g, ±4 g, ±6 g, or even up to ±24 g.
Sensitivity: how much change the accelerometer output signal will have
for a given change in acceleration. Generally, the more sensitive, the better.
Output data rate or bandwidth: how frequently we can take reliable
readings. It can range from several Hz to several hundred Hz.
Operating temperature range: the temperature range over which the
accelerometer can give reliable readings. For the MEMS-based accelerometers
on portable devices, this is normally from -40°C to 85°C.
Advanced Features
Some accelerometers have an embedded controller. The controller can
read the accelerometer data at a high frequency (several hundred Hz)
and analyse the data in order to provide advanced features like low-power
mode, shake detection, tap detection, etc.
Low-power mode: Some accelerometers provide a low-power mode
feature for further reducing the power consumption of the accelerometer. It
allows the accelerometer to sleep internally for longer and average the data
less frequently when no activity is detected.
Shake detection: Some accelerometers can detect a fast shake motion.
By analysing the sequence of acceleration changes, they can determine
whether a fast shake motion occurs.
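In software, a crude version of the same idea can be approximated by counting strong, alternating swings in the readings. The sketch below is a hypothetical heuristic in Python, not what any particular accelerometer controller implements; the 1.5 g threshold and the required crossing count are made-up tuning values:

```python
def detect_shake(x_samples, threshold=1.5, required_crossings=4):
    """Heuristic shake detector over a window of x-axis G-force samples.

    Counts swings that exceed the threshold and alternate in sign;
    enough alternations in the window are treated as a shake."""
    crossings = 0
    prev_sign = 0
    for x in x_samples:
        if abs(x) >= threshold:
            sign = 1 if x > 0 else -1
            if sign != prev_sign:
                crossings += 1
                prev_sign = sign
    return crossings >= required_crossings

print(detect_shake([2.0, -2.0, 2.0, -2.0]))  # vigorous alternation -> True
print(detect_shake([0.1] * 10))              # device nearly still -> False
```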
Tap detection: Some accelerometers can detect taps. The tap here is
defined as not only the tapping on the touch screen, but tapping on the edge
of the device. Whenever a user taps an edge of the device, the embedded
controller in the accelerometer detects it and sends an interrupt.
Some accelerometers have more features, for example, freefall detection.
However, additional features make them more expensive. Accelerometers on
typical mobile devices do not have many advanced features. For example, tap
detection is commonly not available on most accelerometers on the market.
While these features can always be added at the software level, having
hardware support is, as you might imagine, more efficient.
Use Cases
Accelerometer data can be used in many scenarios, and developers can
find many creative uses for the data. We list several useful use cases here.
Tracking Device Movement
Accelerometer data can be used to track device movement. This is usually
done by sensor fusion, combining data from the gyroscope, compass, and
other sensors. Movement tracking is utilised in gesture detection, pedometers,
drawing in space, remote controls, camera stabilisation, etc.
Detection of Free Fall of Device
The accelerometer always measures the G-force experienced by the device.
So, quite obviously, the minimum G-force the device should normally
experience is 1 g, the force of gravity itself. If the device is in free fall, however,
the acceleration magnitude drops below this threshold. This is because a
free-falling device is accelerating towards the ground with Earth's gravity.
Just like people in a falling elevator or plane will feel weightless, a falling
phone will detect less than a single G-force. As the accelerometer data is
referencing Earth’s gravity, the magnitude of the acceleration is around
zero for a free fall.
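So a free-fall detector only has to watch for the overall reading collapsing towards zero. A minimal sketch in Python; the 0.3 g threshold is an illustrative assumption, not a value from any specification:

```python
def is_free_fall(x, y, z, threshold=0.3):
    """Report free fall when the combined G-force drops well below 1 g."""
    magnitude = (x * x + y * y + z * z) ** 0.5
    # At rest the magnitude is about 1 g; in free fall the proof mass
    # falls along with the device, so the measured magnitude nears 0.
    return magnitude < threshold

print(is_free_fall(0.0, 0.0, -0.98))  # resting flat -> False
print(is_free_fall(0.05, 0.02, -0.1))  # falling -> True
```

A real implementation would also require the low reading to persist for several consecutive samples before reacting, to avoid triggering on a single noisy report.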
Motion Detection
Accelerometers can be used for motion detection like shake, tap, and even
press. This is because each motion triggers a specific accelerometer data
change pattern; motions are detected when a specific pattern of accelerometer
data occurs. Motion detection is utilised in many application
scenarios, like adjusting voice volume by tapping, zooming in/out by double
tapping, showing images by shaking, etc.
Development Environment for Accelerometer Data Access
Hardware
These experiments were conducted on a typical tablet with an MEMS-based
accelerometer made by STMicroelectronics. This accelerometer can provide
three-axis G-force values with respect to the x (lateral), y (longitudinal), and
z (vertical) axes. The sampling rate is 50 Hz or higher—which means that
the device can give an accelerometer reading as often as 50 times a second.
Software
Operating System and Development Tool
Windows 8 Release Preview is the operating system installed on the target
tablet. We use Visual Studio 2012 RC to develop a Metro style C# app for
the accelerometer case study.
Accelerometer Access
The Windows Runtime API provides access to the supported accelerometer
via the Accelerometer class in the Windows.Devices.Sensors namespace for
Metro style apps.
First, we declare an accelerometer instance with the Accelerometer class.
Then we set up the minimum report interval for the accelerometer. Note
that the minimum report interval is different from the available sampling
rate of the accelerometer in hardware specification. The minimum report
interval is the minimum reading interval allowed by Windows Runtime for
the accelerometer. The allowed minimum report interval for the
accelerometer in Windows Runtime is 16 milliseconds. In other words, you can ask
for a new accelerometer reading once every 16 milliseconds, and no faster.
This limitation is set by Windows, and limits the maximum accelerometer
reading frequency to 62.5 Hz—or 62.5 times per second. This might be
far less than what the device is capable of; however, consider the fact
that each accelerometer reading uses CPU processing power, and an
ill-designed application could overload the CPU just by listening for changes
in the accelerometer reading.
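The interval-to-frequency arithmetic is worth spelling out. The standalone Python sketch below mirrors the clamping done in the C# sample that follows (the function names are ours, purely for illustration):

```python
def effective_report_interval(hardware_min_ms, windows_floor_ms=16):
    # Honour whichever limit is stricter: the driver's minimum supported
    # interval or the 16 ms floor imposed by Windows Runtime.
    return max(hardware_min_ms, windows_floor_ms)

def max_frequency_hz(interval_ms):
    # Readings per second for a given report interval in milliseconds.
    return 1000.0 / interval_ms

# A driver advertising a 10 ms minimum is still clamped to 16 ms -> 62.5 Hz.
print(max_frequency_hz(effective_report_interval(10)))  # -> 62.5
```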
We set the allowed minimum report interval as the default value (16
milliseconds) for the accelerometer instance. Then we add the event handler
function for reacting to any accelerometer reading changes. The sample
code is shown below.
// Initialise accelerometer
_accelerometer = Accelerometer.GetDefault();

if (_accelerometer != null)
{
    // Initialise accelerometer stable values. Note that it will do automatic
    // calibration later.
    TapDetector.InitializeReading(0, 0, 0);

    // Establish the report interval for all scenarios.
    minReportInterval = _accelerometer.MinimumReportInterval;
    reportInterval = minReportInterval > 16 ? minReportInterval : 16;
    _accelerometer.ReportInterval = reportInterval;
}

if (_accelerometer != null)
{
    _accelerometer.ReadingChanged += new TypedEventHandler<Accelerometer,
        AccelerometerReadingChangedEventArgs>(ReadingChanged);
}
else
{
    doneCalibrate = 0;
    DisplayAppOutput("No accelerometer found");
}
Sample code 1: Initialise accelerometer instance and register accelerometer
data change event handler function
Data Acquisition
In Windows Runtime, whenever the accelerometer data changes, an
accelerometer event occurs. What that means is that your software doesn't need
to constantly check the latest accelerometer reading. The software merely
tells the OS that it is interested in knowing when the reading coming from
the accelerometer has changed, by registering an event handler. The event
handler is a function that is run when there is something new coming
from the accelerometer. When the reading changes, the OS dispatches an
event, which the registered handler function receives.
This function is passed the accelerometer data readings as arguments.
In the code sample, ReadingChanged works as the event handler function
and receives the arguments. The event handler argument includes the
current G-force values with respect to the x (lateral), y (longitudinal), and z
(vertical) axes and the accelerometer data change timestamp. Here we show
the sample code for obtaining the three-axis accelerometer data.
// Accelerometer reading change event handler
private void ReadingChanged(object sender, AccelerometerReadingChangedEventArgs e)
{
    Dispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        AccelerometerReading reading = e.Reading;

        AcceleroData_X.Text = String.Format("{0,5:0.00}", reading.AccelerationX);
        AcceleroData_Y.Text = String.Format("{0,5:0.00}", reading.AccelerationY);
        AcceleroData_Z.Text = String.Format("{0,5:0.00}", reading.AccelerationZ);
        AcceleroData_TimeStamp = reading.TimeStamp;
    });
}
Sample code 2: Retrieve the accelerometer data reading and the
timestamp
Accelerometer Three-Axis Data
What does the accelerometer data look like? The data received is the G-force
data, with unit g, in a pre-defined coordinate system. The pre-defined
coordinate system used in Windows 8 Runtime is the same as the one in the rest
of the Microsoft development family, such as Silverlight, XNA, or the Windows
Phone operating system.
[Figure 1: Accelerometer data coordinate system in Windows Runtime]
If the device has acceleration directed to the left along the x axis, the
x value accessed through the accelerometer class is positive. Otherwise,
it’s negative. If the device has acceleration directed to the bottom of the
device, the y value from the accelerometer class is positive. Otherwise, it’s
negative. If the device has acceleration directed towards the centre of the Earth
and the acceleration is higher than 1 g (”faster” than free fall), the z value
accessed through the accelerometer class is positive. Otherwise, it is negative.
Ideally, when the tablet is placed stationary on a flat surface,
the x and y values are 0 and the z value is -1 g, as the accelerometer is always
referencing Earth's gravity at sea level.
We obtain the real accelerometer data from Windows Runtime APIs for
a typical tablet in a stationary status on a flat surface and plot them in the
graph, shown in Figure 2. As we can see, the x, y, z values are not the ideal
values (0, 0, -1), as the surface is not 100% flat.
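One practical use of those ideal values is a quick sanity check for "flat and stationary" before starting something like calibration. A hypothetical sketch; the 0.05 g tolerance is an assumption chosen to absorb the noise and slight tilt just described:

```python
def is_static_flat(x, y, z, tolerance=0.05):
    """True when readings are close to the ideal flat-and-still (0, 0, -1)."""
    return (abs(x) < tolerance and
            abs(y) < tolerance and
            abs(z + 1.0) < tolerance)

print(is_static_flat(0.01, -0.02, -0.99))  # near ideal -> True
print(is_static_flat(0.30, 0.00, -0.95))   # tilted or moving -> False
```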
Case Study: Tap Detection
Windows Runtime provides access to the typical accelerometer data readings
for Metro apps with a minimum report interval of 16 ms (a maximum
report frequency of 62.5 Hz). With the report frequency at 62.5 Hz, it is
challenging to find patterns for some motions if the motions trigger only
slight accelerometer changes. However, it is still feasible to detect motions
with consistent patterns. Our case study shows the feasibility of using the
low-frequency accelerometer data to detect taps on the edge of the device
and directional taps with slight device movement. Both trigger slight accel-
erometer data changes.
Experimental Data
We experimented with a static tablet on a flat surface, facing up, with the
camera at the top of the device. As we tapped the edges of the device, we
used the Windows Runtime API to collect the reported accelerometer data
and plotted the data against the time of the accelerometer data reports.
Taps
Figure 3 plots the accelerometer data when we tap twice on each edge of
the device (left, right, top, and bottom). We call the taps left, right, top, and
bottom taps, respectively, and use differently coloured ovals to depict
when the taps happen.
The x, y, and/or z axes data consistently change when a tap happens.
We can compare the accelerometer data that we get and see how it changes
from when the device is static. Here we notice that when we tap the
device in different directions, pulses are recorded by the accelerometer
on different axes. These pulses can be used as the criteria to determine
whether a tap occurred.
Directional taps
We did experiments to find consistent patterns for directional taps. However,
directional taps without any device movement show inconsistent changes
in accelerometer data at the 62.5 Hz data reading frequency. Detecting
directional taps without any device movement requires a sampling rate
of more than 400 Hz. Therefore, we experimented with the feasibility of
detecting directional taps when the tap resulted in a slight device movement.
For left and right taps on the device, the accelerometer data changes
mainly on the x axis if the device is facing up on a stationary flat surface.
We ignore the z axis in both figures as there is little change on it.
One left tap triggers two consecutive pulses on the x axis. The first
one is a negative pulse followed by a positive pulse. This is because a left
tap slightly moves the device with acceleration to the right first and then
decelerates because of friction.
Similarly, a right tap triggers two consecutive pulses on the x axis. The
first one is the positive pulse followed by a negative pulse. This is because
a right tap makes the device move to the left and then decelerates due to
friction. With the pulse patterns, we can determine whether a left or right
tap happens.
Experimental data shows that it is feasible to detect left / right taps with
slight device movement by identifying two consecutive pulses on the x axis
and the order of positive / negative pulses. Next, we show that we can use
a state machine to implement the detection process.
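The excerpt later pushes each detected state into a fixed-size first-in-first-out queue, but the classification step itself is not shown. Here is a hypothetical sketch of the pulse-order rule above in Python; the state values mirror the chapter's PulseState enum, and the collapsing logic is our own illustrative choice:

```python
NONACTIVE, POSITIVE, NEGATIVE = 0, 1, 2  # mirrors the PulseState values

def classify_tap(states):
    """Classify a left/right tap from a short history of x-axis pulse states.

    A left tap shows a negative pulse followed by a positive pulse on the
    x axis; a right tap shows the opposite order."""
    # Drop the non-active samples, then collapse consecutive duplicates
    # so each pulse counts once however many samples it spanned.
    pulses = [s for s in states if s != NONACTIVE]
    collapsed = []
    for s in pulses:
        if not collapsed or collapsed[-1] != s:
            collapsed.append(s)
    if collapsed[-2:] == [NEGATIVE, POSITIVE]:
        return "left"
    if collapsed[-2:] == [POSITIVE, NEGATIVE]:
        return "right"
    return None

print(classify_tap([0, 2, 2, 1, 0]))  # -> left
print(classify_tap([0, 1, 1, 2, 0]))  # -> right
```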
State Machine
We use a simple state machine to identify whether the current state is
a positive pulse, a negative pulse, or the non-active state. We first define
three states of the accelerometer data: NonActive, PositivePulse, and
NegativePulse. NonActive means the change is below a threshold (we define
deltax_threshold in the sample code). PositivePulse means the change
exceeds the threshold and the x-axis G-force is larger than the one in the
NonActive state. NegativePulse means the change exceeds the threshold and
the x-axis G-force is less than the one in the NonActive state.
Let x be the G-force data on the x-axis reported by the accelerometer.
We define Δx = |x-x_initial|, where x_initial is the average G-force on the x
axis when the device is static. We compare Δx with a pre-defined threshold
(Th) to identify whether the accelerometer data change is big enough to
constitute a tap. The state transitions are shown in Figure 6.
As shown in the following sample code, we first define the DetectTap
class with members.
public class DetectTap
{
    // Accelerometer data state
    enum PulseState
    {
        NonActive = 0,
        PositivePulse = 1,
        NegativePulse = 2,
        UnknownPulse = -1
    };

    // Thresholds on the x, y, z axes for identifying whether a tap
    // triggers an accelerometer data change that exceeds a threshold.
    private static double deltax_threshold = 0.01;
    private static double deltay_threshold = 0.02;
    private static double deltaz_threshold = 0.03;

    // The average x, y, z accelerometer data when the device is static.
    public double x_initial;
    public double y_initial;
    public double z_initial;

    // The number of samples used to calibrate x_initial, y_initial, z_initial.
    public int samplecounter_calibrate;

    // The maximum number of samples for calibration.
    public int MaxCalibrateInterval = 10;

    // The peak change seen on the x axis (used by DetectXPulse below).
    public double DeltaxPeak;

    // The previous and current state of the x axis accelerometer data.
    private static PulseState PreviousState_X, CurrentState_X;

    // Initialisation
    public DetectTap()
    {
        samplecounter_calibrate = 0;
    }
}
Sample code 3: Definition of DetectTap class and members
To obtain the G-force values for the device in the NonActive state, we
sample a small amount of accelerometer data and average them. We call
this step calibration. Calibration is done by the user pressing a button when
the device is placed on a flat surface. The calibration function is defined in
the DetectTap class. The average G-force values for the NonActive state are
used to calculate the change on each axis for tap detection.
public class DetectTap
{
    // Accelerometer calibration
    // for the NonActive state.
    public int CalibrateInitialReading(double x, double y, double z)
    {
        int done = 0;

        // Initialise the variables.
        if (samplecounter_calibrate == 0)
        {
            x_initial = 0;
            y_initial = 0;
            z_initial = 0;
        }

        // Increment the sample number of calibration.
        samplecounter_calibrate++;

        // Skip the first 5 samples and then average the rest of the
        // accelerometer data samples. The skipping avoids picking up the
        // accelerometer data change caused by the button press that
        // starts calibration.
        if (samplecounter_calibrate > 5
            && samplecounter_calibrate <= MaxCalibrateInterval)
        {
            x_initial = (x_initial * (samplecounter_calibrate - 6) + x) /
                        (samplecounter_calibrate - 5);
            y_initial = (y_initial * (samplecounter_calibrate - 6) + y) /
                        (samplecounter_calibrate - 5);
            z_initial = (z_initial * (samplecounter_calibrate - 6) + z) /
                        (samplecounter_calibrate - 5);
        }

        if (samplecounter_calibrate >= MaxCalibrateInterval)
        {
            done = 1;
        }

        return done;
    }
}
Sample code 4: Definition of calibration function
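The incremental update above maintains a running mean without storing the samples: after each non-skipped sample, the baseline equals the plain average of the samples seen so far. A short Python sketch of the same arithmetic (the function name and sample values are ours, purely for illustration, not from the original C# project):

```python
def calibrate(samples, skip=5):
    """Average accelerometer samples for the NonActive baseline,
    skipping the first `skip` readings, which are polluted by the
    button press that starts calibration."""
    avg = 0.0
    for n, x in enumerate(samples[skip:], start=1):
        # Incremental form used in the article:
        # new_avg = (old_avg * (n - 1) + x) / n
        avg = (avg * (n - 1) + x) / n
    return avg

# Ten hypothetical x-axis readings: the button press shows up in
# the first five, the resting baseline in the last five.
readings = [0.30, 0.28, 0.25, 0.22, 0.20, 0.01, 0.02, 0.00, 0.01, 0.01]
baseline = calibrate(readings)
```

The incremental form is why the C# version can run sample-by-sample inside an event handler, with no buffer of past readings.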
Further, the DetectTap class includes the DetectXPulse function, which
outputs the current state of the x-axis data based on changes along that
axis. The state output by the function is used for detecting a left/right
tap that causes slight device movement.
public class DetectTap
{
    // State machine to detect a pulse in the
    // x-axis accelerometer data.
    public int DetectXPulse(double x)
    {
        double deltax;
        deltax = x - x_initial;

        if (Math.Abs(deltax) < deltax_threshold)
        {
            CurrentState_X = PulseState.NonActive;
            goto Exit;
        }

        if (Math.Abs(deltax) > Math.Abs(DeltaxPeak))
            DeltaxPeak = deltax;

        switch (PreviousState_X)
        {
            case PulseState.PositivePulse:
                if (deltax > 0)
                    CurrentState_X = PulseState.PositivePulse;
                else
                    CurrentState_X = PulseState.NegativePulse;
                break;

            case PulseState.NegativePulse:
                if (deltax > 0)
                    CurrentState_X = PulseState.PositivePulse;
                else
                    CurrentState_X = PulseState.NegativePulse;
                break;

            case PulseState.NonActive:
                if (deltax > 0)
                    CurrentState_X = PulseState.PositivePulse;
                else
                    CurrentState_X = PulseState.NegativePulse;
                break;

            default:
                break;
        }

    Exit:
        PreviousState_X = CurrentState_X;
        return (int)CurrentState_X;
    }
}
Sample code 5: Definition of state machine function
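Note that the three branches of the switch above are identical, so the classifier effectively reduces to a thresholded sign function on the deviation from the calibrated baseline. A Python sketch of that reduced logic (the 0.15 g threshold is an assumed value for illustration; the article does not state one):

```python
# Mirrors the PulseState enum: NonActive, PositivePulse, NegativePulse.
NON_ACTIVE, POSITIVE_PULSE, NEGATIVE_PULSE = 0, 1, 2

def detect_x_pulse(x, x_initial, threshold=0.15):
    """Classify one x-axis reading against the calibrated baseline.
    Readings inside the +/- threshold dead band are NonActive;
    anything outside is a pulse with the sign of the excursion."""
    delta = x - x_initial
    if abs(delta) < threshold:
        return NON_ACTIVE
    return POSITIVE_PULSE if delta > 0 else NEGATIVE_PULSE
```

The dead band is what makes the detector robust to sensor noise: small jitters around the baseline never leave the NonActive state.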
We insert the tap detection code into the accelerometer change event
handler function ReadingChanged.
First, we pass the current accelerometer data to the state machine
function DetectXPulse to get the current x-axis state and put it into a
first-in-first-out queue of limited size. The states in the queue become
the data for identifying taps. The sample code for saving the current
state into a queue is shown below.

private void ReadingChanged(object sender, AccelerometerReadingChangedEventArgs e)
{
    Dispatcher.RunAsync(CoreDispatcherPriority.High, () =>
    {
        AccelerometerReading reading = e.Reading;

        int currentXState, currentYState, currentZState;

        if (doneCalibrate != 1)
        {
            // Do calibration.
        }
        else
        {
            // Keep only a small amount of X, Y, Z state history.
            if (TapXStateQueue.Count >= MaxQueueLength) TapXStateQueue.Dequeue();
            if (TapYStateQueue.Count >= MaxQueueLength) TapYStateQueue.Dequeue();
            if (TapZStateQueue.Count >= MaxQueueLength) TapZStateQueue.Dequeue();

            // Use the state machine to detect a positive or negative
            // pulse on the x, y, and z axes, and put the current state
            // into a first-in-first-out queue.
            currentXState = TapDetector.DetectXPulse((double)(reading.AccelerationX));
            TapXStateQueue.Enqueue(currentXState);
            currentYState = TapDetector.DetectYPulse((double)(reading.AccelerationY));
            TapYStateQueue.Enqueue(currentYState);
            currentZState = TapDetector.DetectZPulse((double)(reading.AccelerationZ));
            TapZStateQueue.Enqueue(currentZState);

            TapInterval++;
            SingleXYZTapInterval++;
            // ...
        }
    });
}
Sample code 6: Save current state into a state queue for tap detection
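The dequeue-when-full pattern above keeps only the newest MaxQueueLength states. In Python, collections.deque with a maxlen gives the same behaviour automatically (the queue length of 4 below is an assumption for illustration; the article does not state a value):

```python
from collections import deque

MAX_QUEUE_LENGTH = 4  # assumed size for illustration

# deque(maxlen=N) silently discards the oldest entry on overflow,
# matching the "dequeue when full, then enqueue" pattern in the
# C# handler above.
tap_x_states = deque(maxlen=MAX_QUEUE_LENGTH)
for state in [0, 0, 1, 2, 0, 1]:
    tap_x_states.append(state)
# Only the four most recent states survive.
```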
Detect a Tap
When a tap happens, a pulse is triggered on the x, y, or z axis. Thus, we
use the current state of the x-, y-, and z-axis accelerometer data to
identify a tap. If any state is PositivePulse or NegativePulse, we determine
that a tap has been detected. We then debounce the detection for a small
time window. The sample code is as follows.
// ... continues the Dispatcher.RunAsync block in ReadingChanged.
// The code below detects a single tap based on a pulse in the
// x, y, or z direction data. If a pulse is detected on any axis,
// we identify a tap, then debounce it for a time window of
// MinSingleTapInterval.

// Debouncing condition.
if (TapXStateQueue.Count >= MaxQueueLength
    && (SingleXYZTapInterval >= MinSingleTapInterval))
{
    // A pulse is indicated by the state from the state machine:
    // 1 = PositivePulse, 2 = NegativePulse.
    if (currentXState == 1 || currentXState == 2 ||
        currentYState == 1 || currentYState == 2 ||
        currentZState == 1 || currentZState == 2)
    {
        SingleXYZTapInterval = 0;
    }

    // A tap is detected based on pulse detection in the
    // x, y, or z direction.
    if (SingleXYZTapInterval == 0)
    {
        TapCount++;
        NonDTapEventOutput.Text = TapCount.ToString() + " Tap detected";
    }
}
Sample code 7: Detect a tap by identifying a pulse
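The debounce logic can be looked at in isolation: after each accepted tap, further pulses are ignored until MinSingleTapInterval samples have elapsed. A standalone Python sketch (the interval length and the sample stream are assumed figures for illustration):

```python
def count_taps(states, min_interval=10):
    """Count taps in a stream of per-sample pulse states, where 0 means
    no pulse; after each accepted tap, ignore further pulses for
    `min_interval` samples (the debounce window)."""
    taps = 0
    since_last = min_interval  # allow a tap immediately at start
    for s in states:
        if s != 0 and since_last >= min_interval:
            taps += 1
            since_last = 0
        else:
            since_last += 1
    return taps

# One physical tap produces a short burst of pulses; debouncing
# collapses the burst into a single detected tap, while a pulse
# arriving after the window counts as a new tap.
burst = [0, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2]
```

Without the window, the single burst at the start would be reported as three separate taps.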
Detect Left/Right Tap with Slight Device Movement
When a left/right tap causes slight movement of a static device resting
on a flat surface, two consecutive pulses are triggered on the x axis. We
use the x-axis states saved in the queue to detect left/right taps. The
sample code is as follows.
// ... continues the Dispatcher.RunAsync block in ReadingChanged.
// The code below detects a directional tap based on the pulse
// sequence in the x direction data. If a matching pulse sequence
// is detected on x, we identify a directional tap, then debounce
// it for a time window of MinTapInterval.
if (TapXStateQueue.Count >= MaxQueueLength && (TapInterval >= MinTapInterval))
{
    // A left tap is a negative pulse followed by a positive pulse,
    // so the state sequence can be: 2,1 or 2,0,1 or 2,0,0,1.
    if ((TapXStateQueue.ElementAt(MaxQueueLength - 3) == 2 &&
         TapXStateQueue.ElementAt(MaxQueueLength - 2) == 0 &&
         TapXStateQueue.ElementAt(MaxQueueLength - 1) == 1)
        || (TapXStateQueue.ElementAt(MaxQueueLength - 4) == 2 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 3) == 0 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 2) == 0 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 1) == 1)
        || (TapXStateQueue.ElementAt(MaxQueueLength - 2) == 2 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 1) == 1))
    {
        LeftTapCount++;
        TapEventOutput.Text = LeftTapCount.ToString() + " Left Tap";
        TapInterval = 0;
        DirectionalTap = 1;
    }

    // A right tap is a positive pulse followed by a negative pulse,
    // so the state sequence can be: 1,2 or 1,0,2 or 1,0,0,2.
    if ((TapXStateQueue.ElementAt(MaxQueueLength - 3) == 1 &&
         TapXStateQueue.ElementAt(MaxQueueLength - 2) == 0 &&
         TapXStateQueue.ElementAt(MaxQueueLength - 1) == 2)
        || (TapXStateQueue.ElementAt(MaxQueueLength - 4) == 1 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 3) == 0 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 2) == 0 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 1) == 2)
        || (TapXStateQueue.ElementAt(MaxQueueLength - 2) == 1 &&
            TapXStateQueue.ElementAt(MaxQueueLength - 1) == 2))
    {
        RightTapCount++;
        TapEventOutput.Text = RightTapCount.ToString() + " Right Tap";
        TapInterval = 0;
        DirectionalTap = 2;
    }

    if (TapInterval == 0)
        Dispatcher.RunAsync(CoreDispatcherPriority.High,
            (DispatchedHandler)OnAnimation);
    }
});
} // End of function ReadingChanged
Sample code 8: Detect a left/right tap with slight device movement by
identifying a particular pulse sequence.
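The hard-coded ElementAt comparisons amount to matching the tail of the state queue against a small set of pulse sequences per direction. A compact Python rendering of the same idea (state codes as in the article: 1 = positive pulse, 2 = negative pulse, 0 = inactive; this is a sketch, not the original implementation):

```python
# Left tap: negative pulse then positive pulse, possibly with
# inactive samples in between. Right tap: the mirror image.
LEFT_PATTERNS = [(2, 0, 1), (2, 0, 0, 1), (2, 1)]
RIGHT_PATTERNS = [(1, 0, 2), (1, 0, 0, 2), (1, 2)]

def classify_tail(states):
    """Return 'left', 'right', or None based on how the most recent
    states end, mirroring the ElementAt comparisons in the C# sample."""
    tail = tuple(states)
    for pattern in LEFT_PATTERNS:
        if tail[-len(pattern):] == pattern:
            return "left"
    for pattern in RIGHT_PATTERNS:
        if tail[-len(pattern):] == pattern:
            return "right"
    return None
```

Expressing the sequences as data makes it easy to add tolerance for longer gaps between the two pulses: just append another pattern.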
Conclusion
With this case study, we introduced the capabilities and some use cases
of MEMS-based accelerometers in modern portable devices, along with
the method of accessing them through the Windows Runtime in Metro
style apps. The sample code shows that it is feasible to detect taps on
the edges of a device, and to detect the left/right direction when a tap
causes slight device movement.
This is just an example of how you can go about detecting subtle gestures
people can perform while holding the device, and the example here can be
extended to apply to a number of other gestures.
For example, one might want to detect if the person is waving their hand
while holding the device. To do this it is first important to establish what
this gesture would look like, and what kind of readings it would produce
in the accelerometer.
Once that is established, you can develop a system to detect the basic
components of that gesture, while filtering out readings that don’t correspond
to the kind of gesture you wish to detect. After that, you can implement a
state machine to check whether the accelerometer readings over a period
of time match the readings you expect; if they do, you know that the
gesture has been performed.
In our example, we first recorded the accelerometer readings that we get
when the phone is tapped on the right edge, or the left edge. We noticed that
if the phone is tapped on the right edge, we first get a positive pulse on
the x-axis, followed by a negative pulse. If the phone is tapped on the left
edge, the order is reversed: we get a negative pulse, followed by a
positive pulse.
From this we have established that we need a way to detect whether a
positive or a negative pulse has occurred. This is simply a matter of
looking for a sudden positive or negative change in the accelerometer
reading. We take the average of a few readings to establish a baseline,
and then check whether a subsequent change is large enough to be
considered a pulse.
Finally, once we have established that a negative or positive pulse has
occurred, we can tell whether the phone has been tapped on the left or
the right from the order in which the pulses occur.
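Putting these pieces together, a compact end-to-end sketch in Python (the threshold, baseline, and reading values are made-up illustrative figures, not from the original project):

```python
def tap_direction(readings, baseline=0.0, threshold=0.15):
    """End-to-end sketch: threshold raw x-axis readings into pulse
    states relative to the calibrated baseline, then read off the tap
    direction from the order of the first two pulses."""
    states = []
    for x in readings:
        d = x - baseline
        states.append(0 if abs(d) < threshold else (1 if d > 0 else 2))
    pulses = [s for s in states if s != 0]
    if pulses[:2] == [1, 2]:
        return "right"  # positive pulse then negative pulse
    if pulses[:2] == [2, 1]:
        return "left"   # negative pulse then positive pulse
    return None

# A right-edge tap: a brief push in +x, then the rebound in -x.
direction = tap_direction([0.0, 0.4, 0.05, -0.35, 0.0])
```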
Appendices

Coordinate System
The Windows API reports the X, Y, and Z axes in a manner that is
compatible with the HTML5 standard (and Android). It is also called the
"ENU" system because X faces virtual "East", Y faces virtual "North",
and Z faces "Up".
To figure out the direction of rotation, use the "Right-Hand Rule":
point the thumb of your right hand in the direction of one of the axes;
positive angle rotation around that axis will follow the curve of
your fingers.
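The ENU axes and the right-hand rule can be sanity-checked with a small sketch: rotating the East unit vector by +90 degrees about Up must yield North. (Python is used here for brevity; a real application would lean on the matrix helpers in Direct3D or OpenGL.)

```python
import math

def rotate_about_up(vec, degrees):
    """Rotate an (east, north, up) vector about the Up (Z) axis.
    A positive angle follows the right-hand rule described above."""
    t = math.radians(degrees)
    e, n, u = vec
    return (e * math.cos(t) - n * math.sin(t),
            e * math.sin(t) + n * math.cos(t),
            u)

# Rotating "East" by +90 degrees about Up yields "North".
east = (1.0, 0.0, 0.0)
north = rotate_about_up(east, 90.0)
```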
These are the X, Y, and Z axes for a tablet form-factor PC or phone (left)
and for a clamshell PC (right). For more esoteric form factors (for
example, a clamshell that is convertible into a tablet), the "standard"
orientation is the one the device has in its TABLET state.
If you intend to develop a navigation application (e.g., a 3D space
game), you need to convert from the "ENU" system in your program. This
can be done easily using matrix multiplication. Graphics libraries such
as Direct3D and OpenGL have APIs for handling this.

Sensor Types and Categories

Biometric       | Electrical       | Environmental        | Light
Human Presence  | Capacitance      | Atmospheric Pressure | Ambient Light
Human Proximity | Current          | Humidity             |
Touch           | Electrical Power | Temperature          |
                | Inductance       | Wind Direction       |
                | Potentiometer    | Wind Speed           |
                | Resistance       |                      |
                | Voltage          |                      |

Common Sensor GUIDs

Identifier | Constant (Win32/COM) | Constant (.NET) | GUID
Category "All" | SENSOR_CATEGORY_ALL | SensorCategories.SensorCategoryAll | {C317C286-C468-4288-9975-D4C4587C442C}
CategoryBiometric | SENSOR_CATEGORY_BIOMETRIC | SensorCategories.SensorCategoryBiometric | {CA19690F-A2C7-477D-A99E-99EC6E2B5648}
CategoryElectrical | SENSOR_CATEGORY_ELECTRICAL | SensorCategories.SensorCategoryElectrical | {FB73FCD8-FC4A-483C-AC58-27B691C6BEFF}
CategoryEnvironmental | SENSOR_CATEGORY_ENVIRONMENTAL | SensorCategories.SensorCategoryEnvironmental | {323439AA-7F66-492B-BA0C-73E9AA0A65D5}
CategoryLight | SENSOR_CATEGORY_LIGHT | SensorCategories.SensorCategoryLight | {17A665C0-9063-4216-B202-5C7A255E18CE}
CategoryLocation | SENSOR_CATEGORY_LOCATION | SensorCategories.SensorCategoryLocation | {BFA794E4-F964-4FDB-90F6-51056BFE4B44}
CategoryMechanical | SENSOR_CATEGORY_MECHANICAL | SensorCategories.SensorCategoryMechanical | {8D131D68-8EF7-4656-80B5-CCCBD93791C5}
CategoryMotion | SENSOR_CATEGORY_MOTION | SensorCategories.SensorCategoryMotion | {CD09DAF1-3B2E-4C3D-B598-B5E5FF93FD46}
CategoryOrientation | SENSOR_CATEGORY_ORIENTATION | SensorCategories.SensorCategoryOrientation | {9E6C04B6-96FE-4954-B726-68682A473F69}
CategoryScanner | SENSOR_CATEGORY_SCANNER | SensorCategories.SensorCategoryScanner | {B000E77E-F5B5-420F-815D-0270A726F270}
Type HumanProximity | SENSOR_TYPE_HUMAN_PROXIMITY | SensorTypes.SensorTypeHumanProximity | {5220DAE9-3179-4430-9F90-06266D2A34DE}
Type AmbientLight | SENSOR_TYPE_AMBIENT_LIGHT | SensorTypes.SensorTypeAmbientLight | {97F115C8-599A-4153-8894-D2D12899918A}
Type Gps | SENSOR_TYPE_LOCATION_GPS | SensorTypes.SensorTypeLocationGps | {ED4CA589-327A-4FF9-A560-91DA4B48275E}
Type Accelerometer3D | SENSOR_TYPE_ACCELEROMETER_3D | SensorTypes.SensorTypeAccelerometer3D | {C2FB0F5F-E2D2-4C78-BCD0-352A9582819D}
Type Gyrometer3D | SENSOR_TYPE_GYROMETER_3D | SensorTypes.SensorTypeGyrometer3D | {09485F5A-759E-42C2-BD4B-A349B75C8643}
Type Compass3D | SENSOR_TYPE_COMPASS_3D | SensorTypes.SensorTypeCompass3D | {76B5CE0D-17DD-414D-93A1-E127F40BDF6E}
Type DeviceOrientation | SENSOR_TYPE_DEVICE_ORIENTATION | SensorTypes.SensorTypeDeviceOrientation | {CDB5D8F7-3CFD-41C8-8542-CCE622CF5D6E}
Type Inclinometer3D | SENSOR_TYPE_INCLINOMETER_3D | SensorTypes.SensorTypeInclinometer3D | {B84919FB-EA85-4976-8444-6F6F5C6D31DB}
Identification (Win32/COM) | Identification (.NET) | PROPERTYKEY (GUID, PID)
SENSOR_PROPERTY_PERSISTENT_UNIQUE_ID | SensorID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},5
WPD_FUNCTIONAL_OBJECT_CATEGORY | CategoryID | {8F052D93-ABCA-4FC5-A5AC-B01DF4DBE598},2
SENSOR_PROPERTY_TYPE | TypeID | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},2
SENSOR_PROPERTY_STATE | State | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},3
SENSOR_PROPERTY_MANUFACTURER | SensorManufacturer | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},6
SENSOR_PROPERTY_MODEL | SensorModel | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},7
SENSOR_PROPERTY_SERIAL_NUMBER | SensorSerialNumber | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},8
SENSOR_PROPERTY_FRIENDLY_NAME | FriendlyName | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},9
SENSOR_PROPERTY_DESCRIPTION | SensorDescription | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},10
SENSOR_PROPERTY_MIN_REPORT_INTERVAL | MinReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},12
SENSOR_PROPERTY_CONNECTION_TYPE | SensorConnectionType | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},11
SENSOR_PROPERTY_DEVICE_ID | SensorDevicePath | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},15
SENSOR_PROPERTY_RANGE_MAXIMUM | SensorRangeMaximum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},21
SENSOR_PROPERTY_RANGE_MINIMUM | SensorRangeMinimum | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},20
SENSOR_PROPERTY_ACCURACY | SensorAccuracy | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},17
SENSOR_PROPERTY_RESOLUTION | SensorResolution | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},18
Common Sensor Properties and PIDs

Configuration (Win32/COM) | Identification (.NET) | PROPERTYKEY (GUID, PID)
SENSOR_PROPERTY_CURRENT_REPORT_INTERVAL | ReportInterval | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},13
SENSOR_PROPERTY_CHANGE_SENSITIVITY | ChangeSensitivity | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},14
SENSOR_PROPERTY_REPORTING_STATE | ReportingState | {7F8383EC-D3EC-495C-A8CF-B8BBE85C2920},27
Constant (Win32/COM) | Constant (.NET) | PROPERTYKEY (GUID, PID)
SENSOR_DATA_TYPE_TIMESTAMP | SensorDataTypeTimestamp | {DB5E0CF2-CF1F-4C18-B46C-D86011D62150},2
SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX | SensorDataTypeLightLevelLux | {E4C77CE2-DCB7-46E9-8439-4FEC548833A6},2
SENSOR_DATA_TYPE_ACCELERATION_X_G | SensorDataTypeAccelerationXG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},2
SENSOR_DATA_TYPE_ACCELERATION_Y_G | SensorDataTypeAccelerationYG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},3
SENSOR_DATA_TYPE_ACCELERATION_Z_G | SensorDataTypeAccelerationZG | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},4
SENSOR_DATA_TYPE_ANGULAR_VELOCITY_X_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityXDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},10
SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Y_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityYDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},11
SENSOR_DATA_TYPE_ANGULAR_VELOCITY_Z_DEGREES_PER_SECOND | SensorDataTypeAngularVelocityZDegreesPerSecond | {3F8A69A2-07C5-4E48-A965-CD797AAB56D5},12
SENSOR_DATA_TYPE_TILT_X_DEGREES | SensorDataTypeTiltXDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},2
SENSOR_DATA_TYPE_TILT_Y_DEGREES | SensorDataTypeTiltYDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},3
SENSOR_DATA_TYPE_TILT_Z_DEGREES | SensorDataTypeTiltZDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},4
SENSOR_DATA_TYPE_MAGNETIC_HEADING_COMPENSATED_MAGNETIC_NORTH_DEGREES | SensorDataTypeMagneticHeadingCompensatedTrueNorthDegrees | {1637D8A2-4248-4275-865D-558DE84AEDFD},11
SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_X_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthXMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},19
SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Y_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthYMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},20
SENSOR_DATA_TYPE_MAGNETIC_FIELD_STRENGTH_Z_MILLIGAUSS | SensorDataTypeMagneticFieldStrengthZMilligauss | {1637D8A2-4248-4275-865D-558DE84AEDFD},21
SENSOR_DATA_TYPE_QUATERNION | SensorDataTypeQuaternion | {1637D8A2-4248-4275-865D-558DE84AEDFD},17
SENSOR_DATA_TYPE_ROTATION_MATRIX | SensorDataTypeRotationMatrix | {1637D8A2-4248-4275-865D-558DE84AEDFD},16
SENSOR_DATA_TYPE_LATITUDE_DEGREES | SensorDataTypeLatitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},2
SENSOR_DATA_TYPE_LONGITUDE_DEGREES | SensorDataTypeLongitudeDegrees | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},3
SENSOR_DATA_TYPE_ALTITUDE_ELLIPSOID_METERS | SensorDataTypeAltitudeEllipsoidMeters | {055C74D8-CA6F-47D6-95C6-1ED3637A0FF4},5
Data Field Identifiers
*Get your hands dirty
>>Here's the real deal. Four sample apps to learn from.
Now that you're familiar with the underlying architecture and
framework needed for building apps for the Ultrabook, we have four
interesting sample apps that you can analyse and use to model your
next best idea.
DesktopTouch
This is a desktop photo application in which we've implemented a few
touch gestures such as Zoom, Pan, Flick and Rotate. It is a great starting
point for anyone who wants to understand how to use touch functionality
in an application. Go to http://dvwx.in/QNzUKk.
Accelerometer
In this sample you'll learn how to use the accelerometer API in your
applications and understand how the X, Y and Z values of the
accelerometer change based on user input. Go to http://dvwx.in/Qalvur.
MapStylesSample
With this Bing Maps application, you'll learn how to use the Compass
API. It is a simple map application that takes input from the user based
on device rotation and updates the map data accordingly. For example,
if you tilt the device to either side, the application updates the map to
match the tilt. Go to http://dvwx.in/UHeT7I.
RunGesture
With Run Gesture, you'll learn how to work with the accelerometer's
vibration readings. This sample application lets the user tap anywhere
near the device and updates its data accordingly. Go to
http://dvwx.in/NL5XwR.