
Seminar Report on Microsoft Surface: Multi-Touch Technology

1. Introduction

1.1 Introduction:

Microsoft Surface is an interactive tabletop that can do everything a networked computer can do, and more, without using a keyboard or a mouse. There are four key features: direct interaction, multi-touch ability, multi-user ability and object recognition. Direct interaction allows you to touch or grab digital information with your hands and use natural gestures to open, grasp, and command virtual objects, pages and images. The multi-touch feature enables the Surface to recognize many points of contact simultaneously, so you can enlarge an image by touching the opposite corners and dragging them outwards. Along with the multi-touch feature, the shape and design of the Surface allow for multiple users at once, so the user sitting across from you can be doing something completely different or independent of you. The last key feature, object recognition, enables the system to identify physical objects just by setting them on the Surface and to respond by displaying the appropriate software related to that item. Currently, Microsoft Surface is being marketed and sold directly to large-scale leisure, entertainment and retail companies, such as AT&T in various cities, the Rio in Las Vegas, and Sheraton Hotels in various cities.

1.2 Aim of the seminar:

To learn about:

• Microsoft Surface
• Multi-touch technology
• Multi-touch devices
• How gesture recognition is performed
• The human-computer interface
• An application, using an example

1.3 Motivation of the seminar:

Multi-user operation is a benefit of multi-touch: several people can orient themselves on different sides of the surface to interact with an application simultaneously. Unlike most touchscreens, a surface computer can respond to more than one touch at a time. Today's computers allow you to have multiple applications in multiple windows, but they usually have only one keyboard and mouse, which means only one person can operate them at a time. Surfaces engage the senses, improve collaboration, and empower students by putting everything at their fingertips. Having read about this concept in a newspaper made me eager and enthusiastic to learn about this topic and collect information about it.


1.4 Literature survey:

http://www.multitouchtechnology.com/

http://www.microsoft.com/surface/index.html

http://www2.smarttech.com/st/en-US/Products/SMART+Table/

http://www.engadget.com/2008/10/23/kids-on-with-the-smart-table/

http://blogs.msdn.com/surface/archive/2008/11/04/surface-your-end-users-and-you.aspx

http://download.microsoft.com/download/d/9/1/d91f9fb0-c42c-47a5-8c08-6bd80587c002/MSSurfaceOrderForm-PDC.pdf

1.5 Applications:

• Interactive classrooms: multi-touch surface computers will encourage students to interact with content and with each other, promoting group work and team-building skills.

• Students would have custom-built hardware on which they can create their assignments, and teachers may be able to see the work instantly and help the students.

• Students sitting around the table may open a file, push it across, drag it, modify it, let another student add or delete information, and then save the document.

• In a photography class, students could share their images instantly.

• In an art class, one student could be painting with a paint brush while another draws with her finger. Both the paint brush and the finger would be recognized.

• In business classes, specifically accounting, having access to a computer right at your fingertips will, I believe, help students learn faster and comprehend at a higher level. It is much easier to follow along in an Excel spreadsheet when you can highlight a cell and see for yourself what the formula is or where an amount came from. Allowing students to actively participate while teaching them how to construct a balance sheet will, in my opinion, make it easier for them not only to comprehend the material but also to retain it.

• In a geography class, each student could find a specific location and the maps could be displayed instantly.

• Teachers would not have to worry about finding space in a computer lab for students to create projects or conduct research.

• Students could share podcasts or other information related to a project that they have saved to a flash drive just by laying the device on the surface.


2. MICROSOFT SURFACE

2.1 What is Microsoft Surface:

Microsoft Surface (codenamed Milan) is a multi-touch product from Microsoft, developed as a combined software and hardware technology that allows a user, or multiple users, to manipulate digital content through gesture recognition. This can involve the motion of hands or physical objects. It was announced on May 29, 2007 at the D5 conference. Targeted customers are in the hospitality business, such as restaurants, hotels, retail and public entertainment venues, as well as the military for tactical overviews. The preliminary launch was on April 17, 2008, when Surface became available for customer use in AT&T stores. The Surface was used by MSNBC during its coverage of the 2008 US presidential election; it is also used in Disneyland's future home exhibits, as well as various hotels and casinos. The Surface was also featured in the CBS series CSI: Miami and in EXTRA! entertainment news. As of March 2009, Microsoft had 120 partners in 11 countries developing applications for Surface's interface. On January 6, 2011, Microsoft previewed the latest version of Microsoft Surface at the Consumer Electronics Show (CES) 2011, simply named Microsoft Surface 2.0, which was built in partnership with Samsung.

Microsoft Surface is a surface computing platform that responds to natural hand gestures and real-world objects. It has a 360-degree user interface and a 30 in (76 cm) reflective surface, with an XGA DLP projector underneath that projects an image onto its underside, while five cameras in the machine's housing record reflections of infrared light from objects and human fingertips on the surface. The surface is capable of object recognition, object/finger orientation recognition and tracking, and is both multi-touch and multi-user. Users can interact with the machine by touching or dragging their fingertips and objects such as paintbrushes across the screen, or by placing and moving physical objects. This paradigm of interaction with computers is known as a natural user interface (NUI).

Surface has been optimized to respond to 52 touches at a time. During a demonstration with a

reporter, Mark Bolger, the Surface Computing group's marketing director, "dipped" his finger in an

on-screen paint palette, then dragged it across the screen to draw a smiley face. Then he used all 10

fingers at once to give the face a full head of hair.

Using the specially designed barcode-style "Surface tags" on objects, Microsoft Surface can offer a variety of features, for example, automatically offering additional wine choices tailored to the dinner being eaten based on the type of wine set on the Surface, or, in conjunction with a password, offering user authentication.


A commercial Microsoft Surface unit costs $12,500 (unit only), whereas a developer Microsoft Surface unit costs $15,000 and includes a developer unit, five seats and support.

Partner companies use the Surface in their hotels, restaurants, and retail stores. The Surface is used to choose meals at restaurants and to plan vacations and spots to visit from the hotel room. Starwood Hotels plans to allow users to drop a credit card on the table to pay for music, books, and other amenities offered at the resort. In AT&T stores, uses of the Surface include interactive presentations of plans, coverage, and phone features, as well as dropping two different phones on the table so that the customer can view and compare prices, features, and plans. MSNBC's coverage of the 2008 US presidential election used Surface to share with viewers information and analysis of the race leading up to the election. The anchor analyzed polling and election results, viewed trends and demographic information and explored county maps to determine voting patterns and predict outcomes, all with the flick of a finger. In some hotels and casinos, users can do a range of things, such as watch videos, view maps, order drinks, play games, and chat and flirt with people at other Surface tables.

2.2 History:

The product idea for Surface was initially conceptualized in 2001 by Steven Bathiche of Microsoft

Hardware and Andy Wilson of Microsoft Research. In October 2001, DJ Kurlander, Michael

Kim, Joel Dehlin, Bathiche and Wilson formed a virtual team to bring the idea to the next stage of

development. In 2003, the team presented the idea to the Microsoft Chairman Bill Gates, in a group

review. Later, the virtual team was expanded and a prototype nicknamed T1 was produced within a

month. The prototype was based on an IKEA table with a hole cut in the top and a sheet of

architect vellum used as a diffuser. The team also developed some applications, including pinball, a

photo browser and a video puzzle. Over the next year, Microsoft built more than 85 early

prototypes for Surface. The final hardware design was completed in 2005.

A similar concept was used in the 2002 science fiction movie Minority Report. As noted in the

DVD commentary, the director Steven Spielberg stated the concept of the device came from

consultation with Microsoft during the making of the movie. One of the film's technology

consultant's associates from MIT later joined Microsoft to work on the Surface project.

Surface was unveiled by Microsoft CEO Steve Ballmer on May 30, 2007 at The Wall Street Journal's 'D: All Things Digital' conference in Carlsbad, California. Surface Computing is part of Microsoft's Productivity and Extended Consumer Experiences Group, which is within the Entertainment & Devices division. The first few companies to deploy Surface included Harrah's Entertainment, Starwood Hotels & Resorts Worldwide, T-Mobile and a distributor, International Game Technology.


On April 17, 2008, AT&T became the first retail location to launch Surface. In June 2008 Harrah's Entertainment launched Microsoft Surface at the Rio iBar, and Disneyland launched it in Tomorrowland's Innoventions Dream Home. On August 13, 2008 Sheraton Hotels introduced Surface in their hotel lobbies at five locations. On September 8, 2008 MSNBC began using the Surface to work with election maps for the 2008 US presidential election on air. MSNBC's political director, Chuck Todd, was placed at the helm.

2.3 Technical aspects/features:

All Surface units share the same basic framework, using cameras to sense objects, hand gestures, and touch. The user input is then processed and displayed on the surface using rear projection. The following is a diagram of the Microsoft Surface (Fig 2.1) and an explanation of its parts.

1) Screen: The Surface has an acrylic tabletop which a diffuser makes capable of processing multiple inputs from multiple users. Objects can also be recognized by their shapes or by reading coded tags.

2) Infrared: Infrared light is projected onto the underside of the diffuser. Objects or fingers are visible through the diffuser to a series of infrared-sensitive cameras positioned underneath the surface of the tabletop.

3) CPU: This is similar to a regular desktop. The underlying operating system is a modified version of Windows Vista.

4) Projector: The Surface uses the same DLP light engine found in many rear-projection TVs.


Fig 2.1 Microsoft Surface

2.4 Features of Microsoft surface computing:

Microsoft Surface computing has four main components that are important in Surface's interface: direct interaction, multi-touch contact, a multi-user experience, and object recognition.

Direct interaction refers to the user's ability to simply reach out and touch the interface of an application in order to interact with it, without the need for a mouse or keyboard. Multi-touch contact refers to the ability to have multiple contact points with an interface, unlike with a mouse, where there is only one cursor. Multi-user operation is a benefit of multi-touch: several people can orient themselves on different sides of the surface to interact with an application simultaneously. Object recognition refers to the device's ability to recognize the presence and orientation of tagged objects placed on top of it.
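To make the object-recognition idea concrete, here is a minimal sketch (in Python) of how a recognized tag ID might be dispatched to an application behaviour, in the spirit of the wine-pairing example mentioned in section 2.1. The tag values and handler functions are hypothetical and are not taken from the actual Surface SDK.

```python
# Minimal sketch of dispatching on a recognized Surface-style tag ID.
# Tag values and handlers below are hypothetical, for illustration only.

def show_wine_pairings(tag_id: int) -> str:
    return f"Showing wine pairings for tagged bottle {tag_id:#x}"

def show_phone_comparison(tag_id: int) -> str:
    return f"Showing plans and features for tagged phone {tag_id:#x}"

# Registry: tag ID -> behaviour to launch when that object is placed on the table.
TAG_HANDLERS = {
    0x2A: show_wine_pairings,      # hypothetical tag printed on a wine bottle
    0x7F: show_phone_comparison,   # hypothetical tag on a demo handset
}

def on_tag_detected(tag_id: int) -> str:
    handler = TAG_HANDLERS.get(tag_id)
    if handler is None:
        return f"Unknown tag {tag_id:#x}: no application registered"
    return handler(tag_id)

if __name__ == "__main__":
    print(on_tag_detected(0x2A))
    print(on_tag_detected(0x10))
```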

The technology allows non-digital objects to be used as input devices. In one example, a normal

paint brush was used to create a digital painting in the software. This is made possible by the fact

that, in using cameras for input, the system does not rely on the restrictive properties required of conventional touchscreen or touchpad devices, such as the capacitance, electrical resistance, or temperature of the tool used.

The computer's "vision" is created by a near-infrared, 850-nanometer-wavelength LED light source aimed at the surface. When an object touches the tabletop, the light is reflected to multiple infrared cameras with a net resolution of 1024 × 768, allowing the machine to sense and react to items touching the tabletop.
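The report does not give the actual image-processing pipeline, but the idea of turning reflected infrared light into touch points can be illustrated with a generic blob-detection sketch. The snippet below assumes OpenCV 4.x and NumPy; the threshold and minimum-area values are illustrative, not Surface's real parameters.

```python
# Sketch of the camera-side step described above: find bright infrared "blobs"
# in a grayscale frame and report their centroids as candidate touch points.
import cv2
import numpy as np

def detect_touch_points(ir_frame: np.ndarray, threshold: int = 200, min_area: float = 30.0):
    """Return (x, y) centroids of bright regions in a grayscale IR frame."""
    _, mask = cv2.threshold(ir_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        if cv2.contourArea(c) < min_area:   # ignore small noise specks
            continue
        m = cv2.moments(c)
        if m["m00"] > 0:
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

# Example on a synthetic frame containing two bright "fingertips".
frame = np.zeros((768, 1024), dtype=np.uint8)
cv2.circle(frame, (200, 300), 10, 255, -1)
cv2.circle(frame, (600, 400), 12, 255, -1)
print(detect_touch_points(frame))   # roughly [(200.0, 300.0), (600.0, 400.0)]
```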

Surface will ship with basic applications, including photos, music, virtual concierge, and games,

that can be customized for the customers.

A unique feature that comes preinstalled with Surface is the pond-effect "Attract" application. Simply, it is a "picture" of water with leaves and rocks in it (a lot like Microsoft Surface Lagoon, included in the Surface Touch Pack). By touching the screen, users can create ripples in the water, much like a real stream. Additionally, the pressure of touch alters the size of the ripple created, and objects placed into the water create a barrier that ripples bounce off, just as they would in real life.

Fig 2.2 Object recognition

2.5 Specifications of surface:

Surface is a 30-inch (76 cm) display in a table-like form factor, 22 inches (56 cm) high, 21 inches (53 cm) deep, and 42 inches (107 cm) wide. The Surface tabletop is acrylic, and its interior frame is powder-coated steel. The software platform runs on a custom version of Windows Vista and has wired Ethernet 10/100, wireless 802.11 b/g, and Bluetooth 2.0 connectivity. Surface applications are written using either Windows Presentation Foundation or Microsoft XNA technology.

At Microsoft's MSDN Conference, Bill Gates told developers of the "maximum" setup the Microsoft Surface was going to have:

• Intel Core 2 Quad Xeon "Woodcrest" @ 2.66 GHz, with a custom motherboard form factor about the size of two ATX motherboards
• 4 GB DDR2-1066 RAM
• 1 TB 7200 RPM hard drive

The discontinued (as of 6 January 2011) commercially available version had the following specifications:


• Intel Core 2 Duo @ 2.13 GHz
• 2 GB DDR2 RAM
• 250 GB SATA hard drive


3. Multi-touch technology

3.1 What is multi-touch:

In computing, multi-touch refers to the ability of a touch-sensing surface (trackpad or touchscreen) to recognize the presence of two or more points of contact with the surface. This plural-point awareness is often used to implement advanced functionality such as pinch-to-zoom or activating predefined programs.

In an effort at disambiguation or marketing classification, some companies further break down the various definitions of multi-touch. An example of this is 3M defining multi-touch as a touchscreen's ability to register three or more distinct positions.
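As a small illustration of the pinch-to-zoom functionality mentioned above, the sketch below computes a zoom factor from two touch points: the ratio of the current finger separation to the starting separation. It is plain Python with made-up coordinates, not code from any particular device.

```python
# Minimal sketch of pinch-to-zoom: zoom factor = current finger distance / starting distance.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_zoom_factor(start_touches, current_touches):
    """start_touches / current_touches: [(x1, y1), (x2, y2)] for the two fingers."""
    d0 = distance(*start_touches)
    d1 = distance(*current_touches)
    return d1 / d0 if d0 else 1.0

# Fingers start 100 px apart and spread to 150 px apart -> the image scales by 1.5x.
print(pinch_zoom_factor([(100, 100), (200, 100)], [(75, 100), (225, 100)]))  # 1.5
```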

3.2 History:

The use of touchscreen technology to control electronic devices pre-dates multi-touch technology and the personal computer. Early synthesizer and electronic instrument builders like Hugh Le Caine and Bob Moog experimented with using touch-sensitive capacitance sensors to control the sounds made by their instruments. IBM began building the first touch screens in the late 1960s, and, in 1972, Control Data released the PLATO IV computer, a terminal used for educational purposes that employed single-touch points in a 16x16 array as its user interface.

Fig 3.1 The prototypes of the x-y mutual capacitance multi-touch screens (left) developed at CERN

One of the early implementations of mutual capacitance touchscreen technology was developed at CERN in 1977, based on the capacitance touch screens developed in 1972 by Danish electronics engineer Bent Stumpe. This technology was used to develop a new type of human-machine interface (HMI) for the control room of the Super Proton Synchrotron particle accelerator.


In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution: a capacitive touch screen with a fixed number of programmable buttons presented on a display. The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor being constructed so that a nearby flat conductor, such as the surface of a finger, would increase the capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass, fine enough (80 μm) and sufficiently far apart (80 μm) to be invisible (CERN Courier, April 1974, p. 117). In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors.

Multi-touch technology began in 1982, when the University of Toronto's Input Research Group

developed the first human-input multi-touch system. The system used a frosted-glass panel with a

camera placed behind the glass. When a finger or several fingers pressed on the glass, the camera

would detect the action as one or more black spots on an otherwise white background, allowing it

to be registered as an input. Since the size of a dot was dependent on pressure (how hard the person

was pressing on the glass), the system was somewhat pressure-sensitive as well.

In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen-based interfaces. In 1984, Bell Labs engineered a touch screen that could change images with more than one hand. In 1985, the University of Toronto group, including Bill Buxton, developed a multi-touch tablet that used capacitance rather than bulky camera-based optical sensing systems.

A breakthrough occurred in 1991, when Pierre Wellner published a paper on his multi-touch "Digital Desk", which supported multi-finger and pinching motions.

Various companies expanded upon these inventions in the beginning of the twenty-first century. The company Fingerworks developed various multi-touch technologies between 1999 and 2005, including Touchstream keyboards and the iGesture Pad. Several studies of this technology were published in the early 2000s by Alan Hedge, professor of human factors and ergonomics at Cornell University. Apple acquired Fingerworks and its multi-touch technology in 2005. Mainstream exposure to multi-touch technology occurred in 2007 when the iPhone gained popularity, with Apple stating it 'invented multi-touch' as part of the iPhone announcement; however, both the function and the term predate the announcement and patent requests, except in the area of capacitive mobile screens, which did not exist before Fingerworks/Apple's technology (for which Apple filed patents in 2005-2007 and was awarded them in 2009-2010). Publication and demonstration of the term multi-touch by Jefferson Y. Han in 2005 predates these, but Apple gave multi-touch wider exposure through its association with its new product and was the first to introduce multi-touch on a mobile device.


Microsoft's table-top touch platform, Microsoft Surface, which started development in 2001, interacts with both the user's touch and their electronic devices. Similarly, in 2001, Mitsubishi Electric Research Laboratories (MERL) began development of a multi-touch, multi-user system called DiamondTouch, also based on capacitance but able to differentiate between multiple simultaneous users (or rather, the chairs in which each user is seated or the floor pad the user is standing on); DiamondTouch became a commercial product in 2008.

Small-scale touch devices are rapidly becoming commonplace, with the number of touch screen

telephones expected to increase from 200,000 shipped in 2006 to 21 million in 2012.

3.3 Brands and manufacturers:

Fig 3.2 A virtual keyboard on an iPad

Apple has retailed and distributed numerous products using multi-touch technology, most prominently its iPhone smartphone and iPad tablet. Additionally, Apple holds several patents related to the implementation of multi-touch in user interfaces. Apple also attempted to register "Multi-touch" as a trademark in the United States; however, its request was denied by the United States Patent and Trademark Office because it considered the term generic.

Multi-touch sensing and processing occur via an ASIC sensor that is attached to the touch surface. Usually, separate companies make the ASIC and the screen that combine into a touch screen; conversely, a trackpad's surface and ASIC are usually manufactured by the same company. In recent years, large companies have expanded into the growing multi-touch industry, with systems designed for everything from the casual user to multinational organizations. It is now common for laptop manufacturers to include multi-touch trackpads on their laptops, and tablet computers respond to touch input rather than traditional stylus input, which is supported by many recent operating systems.


A few companies are focusing on large-scale surface computing rather than personal electronics,

either large multi-touch tables or wall surfaces. These systems are generally used by government

organizations, museums, and companies as a means of information or exhibit display.

3.4 Implementations:

Multi-touch has been implemented in several different ways, depending on the size and type of interface. The most popular forms are mobile devices, tablets, touchtables and walls. Both touchtables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs.

Types:

Multi-touch capacitive technologies:
• Surface Capacitive Technology
• Projected Capacitive Touch (PCT)
• In-Cell: Capacitive

Multi-touch resistive technologies:
• Analog Resistive
• Digital Resistive or In-Cell: Resistive

Multi-touch optical technologies:
• Optical Imaging or Infrared technology
• Rear Diffused Illumination (DI)
• Infrared Grid Technology (opto-matrix) or Digital Waveguide Touch (DWT) or Infrared Optical Waveguide
• Frustrated Total Internal Reflection (FTIR) or Diffused Surface Illumination (DSI)
• Dispersive Signal Touch (DST)
• Kinect
• In-Cell: Optical

Multi-touch wave technologies:
• Surface Acoustic Wave (SAW)
• Bending Wave Touch (BWT)
• Force-Based Sensing or Near Field Imaging (NFI)

Optical touch technology functions as follows: when a finger or an object touches the surface, it causes the light to scatter; the reflection is caught by sensors or cameras that send the data to software, which dictates the response to the touch depending on the type of reflection measured. Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection. Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered and sent to the software, which then initiates a response to the gesture.
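One practical detail the text glosses over is how a multi-touch system keeps stable identities for several simultaneous contacts from frame to frame. A common, simple approach is nearest-neighbour matching between the previous and current point sets; the sketch below shows the idea in plain Python, with an assumed distance threshold that is not taken from any real product.

```python
# Sketch of keeping stable IDs for touch points across frames:
# match each previously tracked point to the nearest new point.
import math

def track_touches(previous, current, max_dist=40.0):
    """previous: dict id -> (x, y); current: list of (x, y). Returns new dict id -> (x, y)."""
    tracked = {}
    unmatched = list(current)
    next_id = max(previous, default=-1) + 1
    for tid, prev_pt in previous.items():
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda p: math.dist(p, prev_pt))
        if math.dist(nearest, prev_pt) <= max_dist:
            tracked[tid] = nearest          # same finger, keep its ID
            unmatched.remove(nearest)
    for pt in unmatched:                    # brand-new contacts get fresh IDs
        tracked[next_id] = pt
        next_id += 1
    return tracked

prev = {0: (100, 100), 1: (300, 200)}
now = [(105, 98), (500, 400)]       # finger 0 moved slightly, finger 1 lifted, a new finger appeared
print(track_touches(prev, now))     # {0: (105, 98), 2: (500, 400)}
```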

In the past few years, several companies have released products that use multi-touch. In an attempt

to make the expensive technology more accessible, hobbyists have also published methods of

constructing DIY touchscreens.

3.5 List of multi-touch computers and monitors:

The following is a list of multi-touch computers and monitors that use multi-touch technology built

into the screen, rather than, or in addition to, the trackpad or mouse.

Table 3.1 List of multi-touch computers and monitors

Make | Model | Form Factor | Operating System | Touch Points | Screen Size | Resolution | Price | Availability
Acer | Aspire AS5738PG | Laptop | Windows 7 | 2 | 15.6 inch | 1366 × 768 | $799.99 | 22/10/2009
Acer | Aspire Z5610-U9072 | All-in-One | Windows 7 | 2 | 23 inch | 1920 × 1080 | $899.99 | 12/2009
Acer | Aspire 1820PT | Ultra-thin Tablet | Windows 7 Home Premium | 2 | 11.6 inch | 1366 × 768 | $1599.99 | 15/11/2009
Acer | T230H | Monitor | N/A | 2 | 23 inch | 1920 × 1080 | $189.00 - $355.00 | 08/2011
Acer | Iconia | Laptop | Windows 7 | 10 | Two 14 inch screens | 1366 × 768 | $1,199.99 | 1/4/2011
Apple | iPad | All-in-One | iOS | 11 | 9.7 inch (diagonal) | 1024 × 768 | $499 - $829 | April 2010
Blackberry | Playbook (RIM Multi-Touch Display) | LCD Display | Blackberry OS | 20 | 7 inch | | $551.74 - $1549.00 | September 2010
Cyberdyne Inc. | Tacto | All-in-One | Linux | Unlimited | 46 inch | 1920 × 1080 | | 10/2010
HP | HP TouchSmart 600 | All-in-One | Windows 7 | (2)? | 23 inch | 1080p | $1,049.99 | 22/10/2009
HP | HP TouchSmart tx2 | Tablet | Windows 7 | 9 | 12.1 inch | 1280 × 800 | $799.99 | 22/10/2009
HP | HP TouchSmart 9100 | All-in-One | Windows 7 | (2)? | 23 inch | 1920 × 1080 | $1,299.99 | 22/10/2009
HP | HP LD4200tm | LCD Display | | (2)? | 42 inch | 1920 × 1080 | $2,799.99 | 12/2009
HP | HP L2105tm | LCD Display | | 2 | 21.5 inch | 1920 × 1080 | $299.00 | 10/2009
Fujitsu | LifeBook T5010 Tablet PC | Tablet | Windows 7 | | 13.3 inch | | $1,759.00 | 12/2009
Fujitsu | LifeBook T4310 Tablet PC | Tablet | Windows 7 | | 12.1 inch | | $1,149.00 | 12/2009
Fujitsu | LifeBook UH900 | Handheld PC | Windows 7 | | 5.6 inch | | | 12/2009
Gateway | One ZX6800-01 | All-in-One | Windows 7 | | 23 inch | 1920 × 1080 | $879.99 | 11/2009
Gigabyte | T1000P | Netbook/Tablet | Windows 7 | | 10 inch | WXGA HD 1366 × 768 (LED backlight) | $699 | 03/2010
Globus - Multitouch Solution Group | Multitouch Globe / GLOBUS spherical (domed) multi-touch device | All-in-One | Windows 7 | Unlimited | 1 m (diameter) | 1050 × 1050 | | 31/10/2010
Microsoft | Microsoft Surface | All-in-One | Customized Windows Vista with Surface Shell | 52 | 30 inch | 1024 × 768 | $12,000 - 16,000 + commercial tax ID | 12/2009
MultiTouch | MultiTouch Cell 467 Advanced | LCD cube | Windows XP/7, Linux, OS X | Unlimited | 46 inch | 1920 × 1080 | Approx. $15,000 + commercial tax ID | 9/2010
MultiTouch | MultiTouch Cell 460/465 | LCD cube | Windows XP/7, Linux, OS X | Unlimited | 46 inch | 1920 × 1080 | $11,500 - 15,600 + commercial tax ID | 9/2008
MultiTouch | MultiTouch Cell 320/325 | LCD cube | Windows XP/7, Linux, OS X | Unlimited | 32 inch | 1920 × 1080 | $6,500 - 8,000 + commercial tax ID | 3/2009
Motion Computing | J3400 | LCD Display | | (2)? | 12.1 inch | WXGA | $2,299.99 | 12/2009
N-Touch | Neprash Technology N-Touch MultiTouch Device | All-in-One | Windows 7 | 32 | 32, 40, 42, 46, 52, 55, 57, 70, 82, 100 and 200 inches, plus custom sizes | 1920 × 1080 | | January 2010
PQ Labs | iTable | All-in-One | Windows XP/Vista/7, Mac | 32 | 42 inch | 1920 × 1080 | $2,399 (Multi-Touch G2, 32 inch, screen only); $10,000 - 12,500 (full 30-inch table) | June 2010
Samsung | Samsung Galaxy Tab | All-in-One | Android | 5 | 7 inch (diagonal) | 1024 × 600 | MYR 2,699 (approx. USD $876.86) | Varies by region
Sony | L Series | All-in-One | Windows 7 | (2)? | 24 inch | 1920 × 1080 | $1,299.99 | 12/2009
Toshiba | Satellite U505 Touch | Laptop | Windows 7 | | 13.3 inch | | $950.00 | 22/10/2009

4. Gesture recognition

Fig 4.1 A child being sensed by a simple gesture recognition algorithm detecting hand location and movement

Gesture recognition is a topic in computer science and language technology with the goal of

interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily

motion or state but commonly originate from the face or hand. Current focuses in the field include

emotion recognition from the face and hand gesture recognition. Many approaches have been made

using cameras and computer vision algorithms to interpret sign language. However, the

identification and recognition of posture, gait, proxemics, and human behaviors is also the subject

of gesture recognition techniques.

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to the keyboard and mouse.


Gesture recognition enables humans to interface with the machine (HMI) and interact naturally

without any mechanical devices. Using the concept of gesture recognition, it is possible to point a

finger at the computer screen so that the cursor will move accordingly. This could potentially make

conventional input devices such as mouse, keyboards and even touch-screens redundant.
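As a toy illustration of the finger-as-cursor idea, the sketch below maps a fingertip position detected in camera coordinates onto screen coordinates by simple scaling and clamping. The camera and screen resolutions are assumed example values, not taken from the report.

```python
# Sketch of "point a finger to move the cursor": scale a fingertip position
# from camera space to screen space and clamp it to the visible area.

def fingertip_to_cursor(finger_xy, cam_size=(640, 480), screen_size=(1920, 1080)):
    """Map an (x, y) fingertip position in camera pixels to screen pixels."""
    fx, fy = finger_xy
    sx = fx / cam_size[0] * screen_size[0]
    sy = fy / cam_size[1] * screen_size[1]
    # Clamp so the cursor never leaves the screen.
    return (min(max(sx, 0), screen_size[0] - 1), min(max(sy, 0), screen_size[1] - 1))

print(fingertip_to_cursor((320, 240)))  # centre of the camera frame -> centre of the screen
```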

Gesture recognition can be conducted with techniques from computer vision and image processing.

The literature includes ongoing work in the computer vision field on capturing gestures or more

general human pose and movements by cameras connected to a computer.

4.1 Gesture recognition and pen computing:

The term gesture recognition has been used to refer more narrowly to non-text-input handwriting

symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition.

4.2 Gesture types:

Computer interaction through the drawing of symbols with a pointing-device cursor is referred to as pen computing.

In computer interfaces, two types of gestures are distinguished:

• Offline gestures: gestures that are processed after the user's interaction with the object. An example is the gesture to activate a menu.

• Online gestures: direct manipulation gestures, used to scale or rotate a tangible object.

4.3 Uses:

Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing. There are various types of gestures which can be identified by computers.

4.3.1 Sign language recognition: Just as speech recognition can transcribe speech to text,

certain types of gesture recognition software can transcribe the symbols represented

through sign language into text.


4.3.2 For socially assistive robotics: By using proper sensors (accelerometers and gyros) worn on the body of a patient and by reading the values from those sensors, robots can assist in patient rehabilitation. The best example is stroke rehabilitation.

4.3.3 Directional indication through pointing: Pointing has a very specific purpose in

our society, to reference an object or location based on its position relative to ourselves. The

use of gesture recognition to determine where a person is pointing is useful for identifying the

context of statements or instructions. This application is of particular interest in the field

of robotics.

4.3.4 Control through facial gestures: Controlling a computer through facial gestures is a

useful application of gesture recognition for users who may not physically be able to use a

mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or

focusing on elements of a display.

4.3.5 Alternative computer interfaces: Foregoing the traditional keyboard and mouse

setup to interact with a computer, strong gesture recognition could allow users to accomplish

frequent or common tasks using hand or face gestures to a camera.

4.3.6 Immersive game technology: Gestures can be used to control interactions within

video games to try and make the game player's experience more interactive or immersive.

4.3.7 Virtual controllers: For systems where the act of finding or acquiring a physical

controller could require too much time, gestures can be used as an alternative control

mechanism. Controlling secondary devices in a car, or controlling a television set are examples

of such usage.

4.3.8 Affective computing: In affective computing, gesture recognition is used in the

process of identifying emotional expression through computer systems.

4.3.9 Remote control: Through the use of gesture recognition, "remote control with the wave of a hand" of various devices is possible. The signal must not only indicate the desired response, but also which device is to be controlled.

4.4 Input devices:

The ability to track a person's movements and determine what gestures they may be performing can

be achieved through various tools. Although there is a large amount of research done in

image/video based gesture recognition, there is some variation within the tools and environments

used between implementations.


4.4.1 Wired gloves: These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Furthermore, some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch. The first commercially available hand-tracking glove was the DataGlove, which could detect hand position, movement and finger bending. It uses fiber-optic cables running down the back of the hand; light pulses are created and, when the fingers are bent, light leaks through small cracks and the loss is registered, giving an approximation of the hand pose.

4.4.2 Depth-aware cameras: Using specialized cameras such as time-of-flight cameras, one can

generate a depth map of what is being seen through the camera at a short range, and use this data to

approximate a 3d representation of what is being seen. These can be effective for detection of hand

gestures due to their short range capabilities.

4.4.3 Stereo cameras: Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. To obtain the cameras' relations, one can use a positioning reference such as a lexian stripe or infrared emitters. In combination with direct motion measurement (6D-Vision), gestures can be detected directly.

4.4.4 Controller-based gestures: These controllers act as an extension of the body, so that when gestures are performed some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand, as is the Wii Remote, which can study changes in acceleration over time to represent gestures. Devices such as the LG Electronics Magic Wand, the Loop and the Scoop use Hillcrest Labs' Freespace technology, which uses MEMS accelerometers, gyroscopes and other sensors to translate gestures into cursor movement. The software also compensates for human tremor and inadvertent movement.
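To give a flavour of controller-based gesture recognition, the sketch below detects a simple "shake" gesture from accelerometer samples, roughly as a Wii-Remote-style device might. The sample data and thresholds are invented for illustration and do not come from any real device's firmware.

```python
# Sketch of a shake detector: a shake is several high-magnitude acceleration peaks.
import math

def is_shake(samples, threshold=2.5, min_peaks=3):
    """samples: list of (ax, ay, az) accelerations in g."""
    peaks = 0
    for ax, ay, az in samples:
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            peaks += 1
    return peaks >= min_peaks

still = [(0.0, 0.0, 1.0)] * 20                       # device resting: gravity only
shaken = still + [(3.0, 0.5, 1.0), (-2.8, 0.2, 1.0), (2.9, -0.4, 1.0)]
print(is_shake(still), is_shake(shaken))             # False True
```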

4.4.5 Single camera: A normal camera can be used for gesture recognition where the

resources/environment would not be convenient for other forms of image-based recognition.

Although not necessarily as effective as stereo or depth aware cameras, using a single camera

allows a greater possibility of accessibility to a wider audience.

4.5 Algorithms:


Fig 4.5.1 Different ways of tracking and analyzing gestures exist, and a basic layout is given in the diagram above. For example, volumetric models convey the information required for an elaborate analysis; however, they prove to be very intensive in terms of computational power and require further technological developments in order to be implemented for real-time analysis. On the other hand, appearance-based models are easier to process but usually lack the generality required for human-computer interaction.

Depending on the type of input data, the approach to interpreting a gesture can be carried out in different ways. However, most of the techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach.

In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language each gesture represents a word or phrase. A taxonomy that seems very appropriate for human-computer interaction has been proposed by Quek in "Toward a Vision-Based Hand Gesture Interface". He presents several interactive gesture systems in order to capture the whole space of gestures: 1. manipulative; 2. semaphoric; 3. conversational.

Some literature differentiates two different approaches in gesture recognition: 3D model based and appearance based. The former method makes use of 3D information about key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. Appearance-based systems, on the other hand, use images or videos for direct interpretation.

4.5.1 3D model-based algorithms:


Fig 4.5.2 A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative position and interaction in order to infer the gesture.

The 3D model approach can use volumetric or skeletal models, or even a combination of the two. Volumetric approaches have been used heavily in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes.

The drawback of this method is that it is very computationally intensive, and systems for live analysis are still to be developed. For the moment, a more practical approach is to map simple primitive objects to the person's most important body parts (for example, cylinders for the arms and neck, a sphere for the head) and analyse the way these interact with each other. Furthermore, some abstract structures like super-quadrics and generalised cylinders may be even more suitable for approximating the body parts. What makes this approach attractive is that the parameters for these objects are quite simple. In order to better model the relation between them, constraints and hierarchies between the objects are used.

4.5.2 Skeletal-based algorithms:

Fig 4.5.3 The skeletal version (right) effectively models the hand (left). It has fewer parameters than the volumetric version and is easier to compute, making it suitable for real-time gesture analysis systems.

Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint-angle parameters along with segment lengths. This is known as a skeletal representation of the body, where a virtual skeleton of the person is computed and parts of the body are mapped to certain segments. The analysis here is done using the position and orientation of these segments and the relation between each of them (for example, the angle between the joints and the relative position or orientation).

Advantages of using skeletal models:

• Algorithms are faster because only key parameters are analyzed.
• Pattern matching against a template database is possible (see the sketch after this list).
• Using key points allows the detection program to focus on the significant parts of the body.
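The sketch below illustrates the template-matching point from the list above: a hand pose is reduced to a small vector of joint angles (a skeletal feature) and classified by finding the nearest template in a database. The gesture names, angle values and threshold are hypothetical, used only to show the idea.

```python
# Sketch of skeletal template matching: nearest joint-angle template wins.
import math

TEMPLATES = {
    # gesture name -> joint-angle feature vector (degrees), e.g. bend of five fingers
    "open_hand": [5, 5, 5, 5, 5],
    "fist":      [170, 170, 170, 170, 160],
    "point":     [10, 170, 170, 170, 160],
}

def classify_pose(angles, max_dist=60.0):
    """Return the template whose joint angles are closest (Euclidean), or None."""
    best, best_d = None, float("inf")
    for name, ref in TEMPLATES.items():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(angles, ref)))
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_dist else None

print(classify_pose([12, 165, 172, 168, 150]))   # -> "point"
```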

4.5.3 Appearance-based models:

Fig 4.5.4 These binary silhouette (left) or contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates and, if they match, the corresponding gesture is inferred.

These models do not use a spatial representation of the body any more; instead they derive the parameters directly from the images or videos using a template database. Some are based on deformable 2D templates of parts of the human body, particularly the hands. Deformable templates are sets of points on the outline of an object, used as interpolation nodes for the approximation of the object's outline. One of the simplest interpolation functions is linear, which produces an average shape from point sets, point variability parameters and external deformators. These template-based models are mostly used for hand tracking, but can also be of use for simple gesture classification.

A second approach to gesture detection using appearance-based models uses image sequences as gesture templates. The parameters for this method are either the images themselves or certain features derived from them. Most of the time, only one (monoscopic) or two (stereoscopic) views are used.
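The following sketch shows one simple way an appearance-based system could compare an observed binary silhouette against stored templates, using intersection-over-union as the match score. The tiny 5x5 masks are toy data standing in for real silhouette images, and the score threshold is an assumed value.

```python
# Sketch of silhouette template matching with intersection-over-union (IoU).
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Overlap score between two binary masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / union if union else 0.0

def match_silhouette(observed, templates, min_score=0.6):
    """Return the name of the best-matching template, or None if nothing is close."""
    best, best_score = None, 0.0
    for name, tmpl in templates.items():
        score = iou(observed, tmpl)
        if score > best_score:
            best, best_score = name, score
    return best if best_score >= min_score else None

open_hand = np.array([[0, 1, 1, 1, 0]] * 5, dtype=bool)
fist      = np.array([[0, 0, 1, 0, 0]] * 5, dtype=bool)
observed  = np.array([[0, 1, 1, 1, 0]] * 4 + [[0, 1, 1, 0, 0]], dtype=bool)
print(match_silhouette(observed, {"open_hand": open_hand, "fist": fist}))  # "open_hand"
```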

4.6 Challenges:

There are many challenges associated with the accuracy and usefulness of gesture recognition

software. For image-based gesture recognition there are limitations on the equipment used

and image noise. Images or video may not be under consistent lighting, or in the same location.

Items in the background or distinct features of the users may make recognition more difficult.

The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial and full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures by visual sensors, robust computer vision methods are also

required, for example for hand tracking and hand posture recognition or for capturing movements

of the head, facial expressions or gaze direction.

5. Human-computer interaction

Human-computer interaction (HCI) involves the study, planning, and design of the interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study. The term was coined by Card, Moran, and Newell in their seminal book, "The Psychology of Human-Computer Interaction". The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many affordances for use, and this takes place in a sort of open-ended dialog between the user and the computer.

Interaction between users and computers occurs at the user interface (or simply interface), which includes both software and hardware; for example, characters or objects displayed by software on a personal computer's monitor, input received from users via hardware peripherals such as keyboards and mice, and other user interactions with large-scale computerized systems such as aircraft and power plants. The Association for Computing Machinery defines human-computer interaction as "a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them." An often-sought facet of HCI is securing user satisfaction, although user satisfaction is not the same thing as user performance by most meaningful metrics.

Because human-computer interaction studies a human and a machine in conjunction, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, and human factors such as computer user satisfaction are relevant. Engineering and design methods are also relevant. Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes referred to as man-machine interaction (MMI) or computer-human interaction (CHI).

Attention to human-machine interaction is important because poorly designed human-machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, where investigations concluded that the design of the human-machine interface was at least partially responsible for the disaster. Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument and/or throttle quadrant layouts: even though the new designs were proposed to be superior in terms of basic human-machine interaction, pilots had already ingrained the "standard" layout, and thus the conceptually good idea actually had undesirable results.

5.1 Pen computing:

Pen computing refers to a computer user-interface using a pen (or stylus) and tablet, rather than

devices such as a keyboard, joysticks or a mouse.

Pen computing is also used to refer to the usage of mobile devices such as wireless tablet personal computers, PDAs and GPS receivers. The term has been used to refer to the usage of any product allowing for mobile communication. An indication of such a device is a stylus, generally used to press upon a graphics tablet or touchscreen, as opposed to using a more traditional interface such as a keyboard, keypad, mouse or touchpad.

Historically, pen computing (defined as a computer system employing a user interface using a pointing device plus handwriting recognition as the primary means of interactive user input) predates the use of a mouse and graphical display by at least two decades, starting with the Stylator and RAND tablet systems of the 1950s and early 1960s.


6. Multi-touch devices

Multi-touch gestures are employed by some touchscreen devices to perform various actions. A

gesture refers to a motion used to interact with multipoint touch screen interfaces.

6.1 Apple devices:

Multi-Touch works on devices that run the iOS operating system, such as the iPhone, iPad and iPod touch, as well as on the built-in trackpads of the MacBook family. Multi-Touch is also fully integrated into Apple's Magic Mouse and Magic Trackpad products.


Fig 6.1 New MacBooks, Improved Multi-Touch Trackpad

The latest MacBook and MacBook Pros (late 2008 models) both have a new "buttonless" trackpad which is bigger and made of touch-friendly, wear-resistant glass. The entire trackpad has been completely redesigned: it is also one large button, so it is clickable everywhere on the surface. No separate button means there is more room for additional multi-touch gestures, and your fingers can move with ease on the smooth and silky glass surface.


Fig 6.2 devices using multi-touch technology

6.2 Kids-on with the SMART Table:

Fig 6.3 Kids with the SMART Table

We got to play around with a SMART Table in a classroom full of lucky kids at Haines Elementary School in Chicago this morning, and we came away impressed with how much they loved it. The multi-touch table is built on the same basic idea and hardware as Microsoft Surface (a Vista PC, an XGA projector and an infrared camera), but it is a custom patented SMART design, not a Surface lite or anything like that. That said, the multi-touch system is not quite as responsive as Surface, and the kid-proof plastic screen felt a little weird, but it certainly works well enough: the Table recognizes up to 40 touches, and we saw some interesting demos, ranging from the standard rotate/zoom photo app to painting and puzzle games. Teachers get admin access with a special USB key that enables them to manage apps, and there is an SDK in the works, so hopefully there will be quite a few apps to manage. SMART says the Table should start shipping next spring for somewhere between $7,000 and $8,000 each; obviously the company will be targeting school systems with its extensive SMART Board sales network, but well-off parents will be able to score one for their darling children as well.

7. Applications

The following is an example of a possible application using Microsoft Surface:

1) On the left you have your device, which has stored your information.

2) On the right you have your friend's device, which has stored his/her information.

3) The center shows how you can pull the information needed from each device and compile it to complete the final project.


Fig 7.1 the use of multi-touch for grabbing information

8. Advantages and disadvantages

8.1 Advantages of Multi-touch Technology:

• The administration of a classroom can be improved by reducing the amount of time a teacher spends fulfilling paperwork requirements alone, such as test taking and scoring. Tests could be included on each student's desktop and automatically recorded and scored.

• The teacher's desktop could have the ability to look at each student's desktop from the teacher's desk and take control if necessary. This can be used to help a student who is having trouble or to verify that the student is staying on task.

• Teachers would also have the ability to send presentations to any or all desktops, eliminating the need for printouts and copies.

• A chat system like IM could be set up so that the teacher could send a private note to a student during a class exercise without bringing attention to the student, whether it is positive or negative.

• If a problem occurred on one Surface, that student could move to another student's desk and work along with them until theirs was fixed.

• By engaging the students and combining both the audio and visual aspects in every lesson plan, we have a better chance of reaching every student and increasing the percentage of information retained.

• Students will be able to work in groups at one desktop Surface. This would make the construction of projects easier. Also, students will be able to work on class assignments together or help each other, and sometimes students are able to learn and understand better when the information is delivered or reiterated by their peers in a more creative fashion.

8.2 Disadvantages of Multi-touch Technology:

• The technology is currently expensive and just beginning to gain some recognition in the marketplace.

• If these tables seat four students each, privacy becomes an issue which will need to be addressed, especially during test-taking times. Also, you would not want one student to be able to reach over and delete another student's work. The issue of personal space and boundaries would need to be addressed.

• Another disadvantage is that technology is unreliable, and if a problem occurred with an application, class would be disrupted, even if only for a short period of time.


9. Conclusion

It would not be a surprise if each student's desktop were replaced by multi-touch technology similar to the Microsoft Surface. Each classroom and teacher would have their Surface applications customized to fit their specific curriculum. These devices offer various ways of visualizing information in order to improve understanding, which enables our students to excel. I feel we need to find ways to keep up with the rapidly growing world of technology and integrate it into our classrooms, or our students are going to surpass us and figure out ways to do things better and faster at home on their own personal computers. With Microsoft Surface the opportunities are endless, with the ability to create custom applications for specific businesses or educational purposes, or to build packaged applications for use across a range of industries or schools.
