ETS News, July
Emerging Technology News

FOREWORD: THE SUMMER OF CODE
3 May. Bistritz.--Left Munich at 8:35 P.M., on 1st May,
arriving at Vienna early next morning; should have arrived at
6:46, but train was an hour late. Buda-Pesth seems a
wonderful place, from the glimpse which I got of it from the
train and the little I could walk through the streets. I feared to
go very far from the station, as we had arrived late and would
start as near the correct time as possible.
I find that the district he named is in the extreme east of the
country, just on the borders of three states, Transylvania,
Moldavia, and Bukovina, in the midst of the Carpathian
mountains; one of the wildest and least known portions of
Europe.
I was not able to light on any map or work giving the exact
locality of the Castle Dracula, as there are no maps of this
country as yet to compare with our own Ordnance Survey
Maps; but I found that Bistritz, the post town named by Count
Dracula, is a fairly well-known place. I shall enter here some
of my notes, as they may refresh my memory when I talk over
my travels with Mina.
The women looked pretty, except when you got near them,
but they were very clumsy about the waist. They had all full
white sleeves of some kind or other, and most of them had big
belts with a lot of strips of something fluttering from them
like the dresses in a ballet, but of course there were petticoats
All About Mind Control ... 3
Kevin Brown on headsets that monitor brain activity and
other things about interactive control devices.
UX Insight ... 5
Smarter Planet ... 7
Streams Processing ... 8
Emerging Technology Archive ... 10
Q&A ... 12
ETS IN NUMBERS
117 Customers through the lab.
22 Papers published.
137
UX NOTES: GOING AGAINST THE RULES
"We're looking to fix the interface and
sort of make it a bit more flashy" is the
kind of introduction I often receive
when joining a new project. I'm not a
graphic designer, nor any kind of
qualified user interface expert. I'm not
even really a proper front end
developer, but over the years I've built
up a reputation for working on web
based user interfaces. The whole area
tends to be considered as something of a
black art, something that cannot be
learned. While taste may be innate and
there is little substitute for experience,
there are tricks that can be used to
create decent user interfaces. Be it in
terms of usability or aesthetic appeal,
using these patterns is how I’m able to
cheat at designing user interfaces. The
most interesting projects, however, are
the ones where the patterns do not fit,
where genuine innovation and ingenuity
has to be used to create the interface.
The sort of projects that Emerging
Technology Services work on.
Often it’s the patterns and experience
from other websites that help to inform
the design of new ones. Blogs, for
example, all tend to follow a similar
interface design: a vertically scrolling
list of date ordered posts; links to tags;
links to other blogs; search bar. It’s a
sensible design, almost a de-facto
standard, which users are familiar with.
It’s not surprising that many sites follow
a similar theme. The same is true in
other web application niches, be it
photo sharing, news publications, help
forums or social network sites. The
majority of websites I've worked on are
variations on one of these existing classes,
but occasionally you get to work on
something completely different, where
there is no existing pattern to fall back
on. That was the case with meedan.net.
Meedan are a US non-profit
organisation whose aim is to improve
tolerance and understanding between
the English and Arabic speaking parts of
the world. I was part of an ETS team
that spent two years working with them
to get their application up and running.
There were many technical and cultural
challenges on the project, but it was the
user interface that generated the biggest
technical challenges, the most heated
discussion and required the most effort
from IBM and Meedan.
The meedan.net application needed to
do two things: act as a matchmaker,
bringing English and Arabic speaking
people together, and then translate
between the two languages for them.
We knew that just randomly bringing
people together wouldn't work; they
needed some context and basis for
a discussion. We wanted to use
world events to be this spark, be it
internationally important news, or
things with a much more local and
personal significance to the people
using the site. From the user interface
side we needed a way to represent these
events and allow our users to find them.
Geography was such an important part
of the whole idea of Meedan that dots
on a map seemed like an obvious place
to start. In fact Meedan’s original logo
was made up of map dots.
If you were using the standard patterns
for an event site, or maybe a photo
sharing site, dots on a map would be
your starting point. They represent the
data and people understand what they
mean. So that's what we did: the front
page had a large Google-based map, with
dots representing the events. Clicking
through would take you to more detail
about that event where discussion could
take place. The problem was the pattern.
People came to the website and it
looked like an event site. The sort of
place you might find where a band are
playing or where a conference is.
Meedan was using events, but it wasn’t
really what the site was about. In
usability terms, the Google map didn't
offer the right affordances; it was not
something users saw as a place to start
a discussion. It wasn’t necessarily using
a map that was the problem, it was that
we were using the most commonly used
map on the Internet.
Meedan decided that they wanted to
look at using a 3D map, largely as a
way of differentiating from the
thousands of other map based web sites.
The "spiny globe problem", as it was
affectionately known, became a
contentious issue. While it looked great
and would have attracted a lot of
attention, it's actually less usable than a
2D map. You can't see all of the world
at the same time on a globe, it's harder
to make visual comparisons, and
however slick you make the controls,
they can be fiddly to manipulate. The
technical implications also meant that
users accessing the site with older
hardware or limited bandwidth
(something particularly important when
attracting people from some areas of
the Middle East) would have had a
reduced experience. Reluctantly we
dropped the 3D map and compromised
on a custom 2D map that would allow
us to control the look and feel.
Our early alpha testers highlighted
another problem with our use of maps in the
interface. All the events they wanted to
discuss tended to have a geographic
location, but not one that could be
represented by a simple dot. How, for
example, do you mark “climate
change”, or “middle east conflict” with
a dot on a map? Where would you put
it?

User interface design often follows standard design patterns,
but they're not always right for every project. Darren Shaw talks
about the difficulties with maps and translation on a
multilingual and multicultural project.

We also had different aspects of
location that seemed important. Where
the event is, where physically the user
discussing it is and where they consider
themselves from. These are subtle but
important differences, and different users
took their location to mean different
things. Added to this, we also had concerns
about revealing a user's location.
We wanted the people that use Meedan
to feel as free as possible to have the
discussion they want to have. In some
countries, showing their location on a
map could be reckless. It’s not good
enough just to mark them with some
random error factor. A fuzzy dot with 20
miles of error may be good enough to
anonymise someone in London, but not
if that person is in the middle of
nowhere with the only Internet
connection for miles. An extreme case
perhaps, but something the project could
not risk.
Despite these problems, we wanted to
make use of user location because it
would allow us to do things like show
the views on an event from a certain
region, compared with those from
another. In developing the user
interface, there’s often a balance to be
struck between functionality and
simplicity. Adding more features tends
to increase the complexity of the
interface. We went for the simplest
approach of allowing users to define
their own location (rather than any
automated system).
Users of the system should be allowed
to define their location however they
want, so we provided a free text entry
box where they could type their
location. It might be that they entered a
country, a city, a specific street address,
or none at all. We did not mandate the
format or the language they had to use.
This free form text location was
displayed on their profile page. Behind
the scenes we developed an algorithm to
read the free text location and try and
resolve it to a specific geographic
location. Anything that the machine
could not resolve was sent to a human
administrator to resolve by hand. The interface
showed the user this resolved location,
indicating where on the map their
content would be shown from.
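The resolve-then-escalate flow can be sketched as follows; the gazetteer lookup and review queue here are illustrative stand-ins, not Meedan's actual geocoding service:

```python
# A minimal sketch of the resolve-then-escalate flow: the machine
# handles what it can, and anything else goes to a human.

review_queue = []

# Toy gazetteer standing in for a real geocoding service.
GAZETTEER = {
    "london": (51.51, -0.13),
    "cairo": (30.04, 31.24),
}

def resolve_location(free_text):
    """Try to resolve a user's free-text location to coordinates.

    Returns (lat, lon) if the machine can resolve it; otherwise the
    text is queued for a human administrator and None is returned.
    """
    key = free_text.strip().lower()
    if key in GAZETTEER:
        return GAZETTEER[key]
    review_queue.append(free_text)  # escalate to a human
    return None
```

The point of the design is that the machine does not need to be perfect: a miss is not an error, just a hand-off.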
EXPERT VIEW by Darren Shaw

What's your role?
I build web applications. Recently I've had more of a focus on front
end user interfaces and data visualisation, but over the ten years in
the job I've worked on the backend side too, developing the core
application logic and databases.

What are you working on now?
I'm developing a dashboard for monitoring data coming in from
sensors. It's part of a research project to do with the management of
sensor networks, developing middleware that will allow people to
make the most efficient use of their sensors. It's aimed at the
military initially, but the ideas and the technology are applicable to
all kinds of different fields.

I'm also playing with some ideas around a virtual mirror, using
augmented reality to let people try clothes on without the hassle of
going to a changing room. It's been tried before, but never
completely successfully.

What's been the best thing you've done in ETS?
I was part of the team that built meedan.net from the ground up. I
was at the first meeting, when Meedan was a single person
organisation with a CEO and nothing else. We spent two years
building up the technology, but also the business with them. I still
see discussions and ideas built on things we came up with in the
original meeting. Meedan was staffed by technologists from the
startups of San Francisco. The culture clash of them coming together
with IBM and its ways of doing things was at times draining, but we
made it work and ultimately all gained from the experience. The
project really showed the best of IBM when it puts a good team
together and invests in an idea.

The cross-cultural nature of the site did
present some interesting political
problems relating to location, geography
being at the heart of many conflicts.
Disputed territories around Palestine
and Israel, for example, go under
different names depending on which
side of the argument you are on.
Allowing users to set the text that
represents their location solves this to
some extent, but we were still left with
the accepted names on maps. There is
also the problem of translation: some
smaller towns and villages have no
known English or Arabic versions of
their names. When these locations were
used, a request was sent to Meedan’s
team of human translators to provide
one.
One of the things that we learned
through this project was that the clever
algorithms don’t need to be 100%
perfect in terms of geolocation. If they
can cope with 99% of cases, there’s
nothing wrong with having some human
input to fix and improve things. This
was an approach that we took both to
location and translation.
Maps and location were important to
Meedan, but the real difference with the
organisation and with the interface was
in language translation. There are many
multilingual websites, but the pattern
they generally follow is to allow the
user to set the display language, so the
site could be in English, or in Arabic,
for example. Meedan wanted to show
that both languages were at its core
and that translation was what the
website was really about. We wanted
this to shine through in the actual user
interface. We decided to show much of
the English and Arabic text alongside
each other, rather than showing one or
the other. It was a big decision to make,
effectively reducing the usable screen
space by half. Showing information that
a user does not need would normally be
considered a mistake in terms of
usability, but this is another example of
where the application we were building
didn’t fit in with the established user
interface patterns.
Showing both languages together really
showed off what the site was about and
feedback from users was positive. Even
those who couldn’t read Arabic thought
that it added to the atmosphere of the
site and that the characters were
attractive to look at. It helped set the
tone for the website and set it apart from
its competitors. After further
development we slightly toned down the
dual language displays. Some of the
pages became too cluttered, but we
always kept Arabic and English in the
header and the main conversation
screens remained dual language.
In the first version of the site we didn't
indicate whether text was original or
had been translated. Initially,
when the standard of Machine
Translation wasn’t that high, it wasn’t a
problem. Users could tell by reading the
text if it was as originally written or a
machine translation. As the translation
accuracy improved it wasn’t always so
obvious. Often a sentence would be
grammatically correct and would sound
right, but a subtle (or not so subtle)
meaning had been lost in translation.
Sometimes a sentence could be
translated and mean the opposite of
what had been written, which does not
do anything to improve Arabic-English
relations. The research team working on
the project even questioned whether the
source of many conflicts could lie in
translation; even an expert human
translator finds some words and
concepts difficult to translate directly.
To mitigate the problem, we added
information about whether any text was
original or had been translated and,
if so, by whom. This came into its own
when we allowed human translators to
fix machine translations as it provided
attribution (and thereby credit) for their
work.
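One way to picture the provenance information described above is as a small record attached to every piece of text; the field names below are our illustration, not Meedan's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TranslatedText:
    """A piece of text plus its translation provenance."""
    text: str
    source: str                        # "original" | "machine" | "human"
    translator: Optional[str] = None   # credited when a human fixed it

    def attribution(self):
        """The credit line shown alongside the text."""
        if self.source == "human" and self.translator:
            return f"Translated by {self.translator}"
        if self.source == "machine":
            return "Machine translated"
        return "Original text"
```

Carrying the `source` field is what makes it possible both to warn readers about machine output and to credit the human translators who correct it.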
Since launch, the interface has
undergone several iterations. It has been
simplified and a lot of work has gone
into allowing users to correct other users'
translations, but many of the concepts
and ideas we came up with remain. It
was a difficult and at times frustrating
project, but one that was ultimately
successful. I find it hard to look at the
site and not see all the things I still
wished we had done differently, but the
site is flourishing and the experience
gained from building it and the new user
interface patterns we developed are
unique.
MIND CONTROL: CONTROLLING DEVICES WITH YOUR BRAIN
The computer in front of you is made up
of technology that would not have been
feasible just ten years ago, yet the main
device you use to interact with it was
designed before the first powered flight
took place. The QWERTY keyboard
was invented in 1873, 65 years before
the ball point pen. It is still the main
way we have to enter text. The
mouse was developed in the 1960s.
Even touchpads have been on laptops
since the 1990s. All of these devices
have been massively successful, but
they have relied on humans to adapt to
the way that they work.
Staff at IBM's Emerging Technology lab
in Hursley are working on how we
might use and interact with computers
in the future. Equipment that is capable
of monitoring, sensing and
measuring people is not new, but
importantly, the hardware has reached a
level of development where it has
become economically viable to use in a
wider range of applications.
Increasingly, through the likes of the
iPhone, Wii and Kinect more natural
human oriented interfaces are working
their way into the home. By default all
of these devices act independently, with
each manufacturer focussed on
designing and marketing their own
technology. The Emerging Technology
lab takes a different approach, looking
at how these devices can be integrated
and used in combination to produce an
effect greater than the sum of their
parts.
Kevin Brown first came across the
Emotiv headsets last year when he read
that researchers were exploring how
they might be used to control avatars in
virtual worlds. The Emotiv headset
looks like something from science
fiction. It's a small, fist sized device
with sensors spidering out and attaching
to different parts of the skull. The
headset detects the tiny electrical
signals emitted by the brain in order to
pick out changing facial expressions,
emotions and even feelings. It's
impressive, though not quite the mind
reading magic it initially sounds like.
The system can't read your mind, but it
can be trained to recognise the specific
electrical activity that occurs with
certain thoughts. Kevin has used this to
allow certain thoughts to be tied to
actions in the lab. He has a toy remote
control car that can be driven by
thought alone. Kevin explained that
what he is really interested in is the way
in which the device has been connected
in to the lab.
“We have a fast turnover
of new technology
coming into the lab and
what we really try to
show is how they can be
used together. We don't
know what devices are
coming along 18 months
from now and we can't
afford to build large custom solutions to
integrate each new piece of technology.
So we use WebSphere middleware as an
integration layer and treat each new
piece of technology as just another
sensor. We might have to write a small
bit of code to bridge the new device
into our sensor messaging layer, but that
will only take a few hours and then that
new technology is fully integrated with
the rest of the lab.”
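The bridging idea in Kevin's description might look something like this in miniature; the in-memory bus stands in for the WebSphere messaging layer, and the topic name and headset payload are assumptions:

```python
# "Treat every device as just another sensor": each new gadget gets a
# thin bridge that publishes its readings onto a shared messaging
# layer. Here the broker is a toy in-memory bus.

class MessageBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)

def bridge_headset_event(bus, thought, confidence):
    """The few-hours 'bridge' for a new device: translate its native
    event into a plain sensor message on the bus."""
    bus.publish("sensors/headset", {"thought": thought,
                                    "confidence": confidence})
```

Anything already listening on the bus can now react to the new sensor, which is how a trained thought ends up driving a toy car without any device-to-device code.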
It's this groundwork that allows Kevin
and the rest of the ETS team to come up
with innovative ways of combining the
different sensors in the lab. They don't
need to spend their effort on the low
level device to device communication,
so they can concentrate on developing
the higher level integration which is
really where the benefits lie. ETS walk
customers through this story to show
what the Smarter Planet marketing
means in real, practical terms. The
BBC were so taken by the technology
that Kevin, with his ETS colleague Nick
O'Leary, worked with them to produce
an episode of Bang Goes The Theory, in
which the same system was used to
control a full sized taxi.
The technology has wider applications
as Kevin discovered when his wife
Sarah, an Occupational Therapist, was
working with a stroke patient suffering
from Locked-In Syndrome. The
patient's brain was working perfectly,
but his body was completely paralysed.
He could only communicate with his
eyes (up for yes, down for no), having
someone point at each letter in a chart
one by one to help him spell out a word.
Kevin saw an opportunity to use the
headset and the patient, being a bit of a
techie, was keen to try it out. Initial
training with the device went well and
he was able to make the device
recognise two different thoughts,
enough to be tied to two different
control actions in software. This
allowed him to replicate the process of
spelling out words, but without the aid
of another person. It wasn't without
challenges, controlling by thought takes
a lot of concentration and mental effort
and, as you get tired, it becomes more
difficult to accurately control your
thoughts. The hope is that the more
people get used to using the brain for
controlling things, the more natural it
will become and the less mental effort
they will need to exert.
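Two reliably distinguishable thoughts are enough to spell, because a yes/no answer can repeatedly halve the alphabet; this sketch illustrates the principle rather than the software the patient actually used:

```python
import string

def spell_letter(answer_yes):
    """Narrow the alphabet to a single letter using yes/no answers.

    answer_yes(candidates) plays the role of the user's trained
    thought: True means "my letter is in this half".
    """
    letters = list(string.ascii_lowercase)
    while len(letters) > 1:
        half = letters[:len(letters) // 2]
        letters = half if answer_yes(half) else letters[len(half):]
    return letters[0]
```

With 26 letters this needs at most five yes/no answers per letter, compared with up to 26 steps of pointing at a chart one letter at a time.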
The Emotiv headsets and their kind are
certainly clever and make for attractive
demos. Thought is a more natural
interface than a mouse or keyboard, but
that alone does not mean that such
devices will really take off. They
struggle with some of the same
drawbacks as 3D glasses in that the
hardware is clumsy and awkward to
wear. It takes time to set up, and each
person involved needs a device to
themselves. Use with the stroke patient
also showed that the mental effort and
concentration that is needed to use
these systems, particularly over a long
period of time, can be exhausting.
To some degree, the physical object that
is the Emotiv headset itself gets in the
way of the natural interaction. Thought
may be a natural way of controlling
actions, but not necessarily when it
flows via a cumbersome device. This is
where the gesture interfaces that
Kevin's team work on really come into
their own. Touch gestures are widely
used in the current generation of smart
phones and tablets. They certainly
present a more natural human interface
and the Emerging Technology group are
looking into how the next generation of
touch devices might be used.
One of the group's interests is in how
teams collaborate, and a
drawback of standard touch interfaces is
that they only work with one person at a
time. They may allow you to use
several fingers to form a gesture, but
they will not cope with different
people's hands at the same time.
Computationally, this is a difficult
problem. When there are multiple
hands touching a surface, how do you
work out which touches belong to
which hand of which person? Multi
touch displays solve this by using
multiple cameras and sophisticated
image processing algorithms. This
means that several people can interact
with the display (which is normally
configured as a table) at the same time,
something which doesn't happen with
traditional computer interfaces. Kevin
and the team are interested in how these
multi user interfaces might be used in
planning activities, specifically for the
military. With paper based maps a
group of soldiers can gather round,
discuss options, point and draw on the
map. A computer based map might
have many advantages, but only one
user can be in control of it at once, the
interface does not encourage team
collaboration. A multi touch surface
with a group of people gathered around
might do just that.
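To see why grouping touches is hard, consider even a naive version of the problem; real multi touch tables use camera images of whole hands, whereas this sketch just clusters points that fall within an assumed palm-sized radius:

```python
PALM_RADIUS = 120  # pixels; an assumed hand span, purely illustrative

def group_touches(points):
    """Greedily cluster (x, y) touch points into putative hands.

    Each point joins the first existing cluster whose anchor point is
    within PALM_RADIUS, otherwise it starts a new cluster.
    """
    hands = []
    for x, y in points:
        for hand in hands:
            hx, hy = hand[0]
            if (x - hx) ** 2 + (y - hy) ** 2 <= PALM_RADIUS ** 2:
                hand.append((x, y))
                break
        else:
            hands.append([(x, y)])
    return hands
```

A distance threshold falls apart as soon as two people's hands overlap, which is exactly why the real systems fall back on cameras and image processing rather than the touch coordinates alone.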
The Emerging Technology group have
begun using a
multi touch device
in their lab. The
cameras it uses to
detect gestures can
also be put to other
uses. They are
configured behind
a semi transparent
screen so that they
can 'see' what is on
the surface of the
display and just
above it. Normally
used for detecting hands and fingers,
these cameras can also be programmed
to detect objects that are placed on the
screen. This allows the development of
applications that use a mixture of hand
gestures and objects to trigger actions.
The real advantage of touch interfaces is
that they make the most of what humans
have evolved to be good at, the touch
gestures are as natural as the ones we
use to manipulate objects in the real
world.
The same is true for sensors that are
able to detect full body motion, such as
Microsoft's Kinect. The Kinect
hardware was developed as a control
mechanism for their Xbox console, but
published APIs allow developers to use
the devices with other systems. The
sensors are able to detect the positions
of the major joints in the human body,
allowing a live skeletal wireframe to be
calculated. This is used in games to
allow the motion of the player's body to
control the on screen action, but can
also be used outside of games. It can
be programmed to control anything.
Trivial examples such as scrolling
through a web page by waving your
hands are common, but there are more
compelling uses of the system, which
take advantage of the detailed body
position it provides. ETS are keen to
try using the technology as a virtual
mirror. A shopper would be able to
walk up to a full height display which
“reflects” back their image. The Kinect
sensor would be used to map their body
position and the data used to show the
customer what they would look like in
an item of clothing. They would
rapidly be able to scroll through
different items and virtually try them
on, all without a trip to the changing
room. The same technology could even
be used from the shopper's home,
completely changing the online
experience of fashion retailers.
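Gesture detection on top of the skeletal data might be sketched like this; the joint names and upward-pointing y axis are assumptions, not the Kinect API's actual conventions:

```python
# The published APIs report the positions of the major joints; simple
# gestures can be derived by comparing them. Here a skeleton is just a
# dict of joint name -> (x, y) with y increasing upwards.

def hand_raised(skeleton):
    """Return True if either hand is tracked above the head."""
    head_y = skeleton["head"][1]
    return (skeleton["left_hand"][1] > head_y or
            skeleton["right_hand"][1] > head_y)
```

The virtual mirror is the same idea taken further: instead of one comparison between two joints, the full wireframe is used to warp an image of a garment over the shopper's body.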
We are just at the early stages of these
natural gesture and body control
interfaces, but the hardware costs are
coming down rapidly and people like
Kevin spend their time thinking not just
about how we replicate existing interfaces
with the new ones, but how they can be
used in a completely new way or even
lead to a whole new type of application.
EXPERT VIEW by Kevin Brown