The UX of Tomorrow: Designing for the Unknown, by Jeff Feddersen
Posted on 08-Aug-2015
The UX of Tomorrow: Designing for the Unknown
MIT Enterprise Forum of NYC, June 4, 2015
Jeff Feddersen
fddrsn.net
Background: three alternate UX projects • Li Ning Sport Challenge • HBO “Superwall” • Target StyleScape
Physical Computing @ NYU • What pcomp is • How it is taught • Example projects
Large body-controlled interactive game (pre-Kinect)
Li Ning Reactive Wall
With: Ziba, AV&C Photo: Ziba
Interactive touch UI integrated with large video wall, computer vision system, and SMS.
HBO Superwall
With: BLT, Apologue, AV&C
• Video playback, texture layer, and variable compositing mask on Vista Spyder and Watchout systems
• 4 independent instances of a Java-based UI, running 2160x1920 @ 60fps (separate HD UI and alpha channels)
• Crowd-sensing cameras
• Participant surveillance photo cameras
With: Mother NYC, AV&C, Brooklyn Research
Target StyleScape
120’ LED cinemagraph with mixed interactives along its entire length, combining tangible, computer-vision, mobile, and human-directed moments
Photo: Mother NYC
1. Interactive Overview
• Gizmos: simple hardware sensors strategically located throughout the space
• Eyes: video/depth streams processed to support interaction
• Mobile: guests use their own devices
• Humans: event staff in the mix
Four broad categories of interaction tech have distinct infrastructure, execution, and cost implications. Any of the four can be mixed together, and each can scale from small and targeted to broad, comprehensive integration with the cinemagraph.
Design document for Stylescape
Gizmos
[Diagram: proximity sensor, floor pad, switch, button, and motion detector wired to a capture board; a CPU filters the data and sends it on to the cinemagraph display wall]
In this scenario, the space will have small, simple sensors custom-built into the environment or integrated with props. A single central computer reads the state of each sensor, filters the data, and sends triggers to the video system.
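The read-filter-trigger loop described above can be sketched in a few lines. This is a hypothetical illustration, not the Stylescape code: the sensor name, debounce count, and trigger format are all assumptions.

```python
# Hypothetical sketch of the "Gizmos" pattern: one central process polls
# simple binary sensors, filters the raw readings, and emits named
# triggers for the video system.

def filter_and_trigger(name, readings, debounce=3):
    """Emit a trigger only after `debounce` consecutive ON readings,
    so a single noisy sample can't fire the video system."""
    run = 0
    triggers = []
    for sample in readings:
        run = run + 1 if sample else 0
        if run == debounce:          # fires once per sustained press
            triggers.append(f"/trigger/{name}")
    return triggers

# Simulated readings: a noise spike, then a sustained floor-pad press.
print(filter_and_trigger("floor_pad", [1, 0, 0, 1, 1, 1, 1, 0]))
# → ['/trigger/floor_pad']
```

The debounce step is the "filters the data" stage: a loose wire or electrical noise produces isolated ON samples, but only a sustained press reaches the video system.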
Eyes
[Diagram: an array of cameras watching the space, roughly one CPU per image stream, feeding data to the cinemagraph display wall]
Cameras (either 2D or 3D, used singly or in an array) watch the crowd. Computers (approximately one per image stream) process the data into triggers for the video system.
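A minimal sketch of that per-stream processing, reduced to frame differencing: compare successive frames and fire when enough pixels change. The tiny list-of-lists "frames" and the threshold are illustrative; a real installation would run a vision library over full 2D or depth streams.

```python
# Illustrative sketch of the "Eyes" pattern: a CPU reduces a camera
# stream to motion triggers by comparing successive grayscale frames.

def motion_score(prev, curr):
    """Mean absolute pixel difference between two frames."""
    total = sum(abs(a - b) for row_p, row_c in zip(prev, curr)
                           for a, b in zip(row_p, row_c))
    return total / (len(prev) * len(prev[0]))

def detect_motion(frames, threshold=10):
    """Return indices of frames whose change from the previous
    frame exceeds the threshold (i.e., 'someone moved')."""
    return [i for i in range(1, len(frames))
            if motion_score(frames[i - 1], frames[i]) > threshold]

still = [[0, 0], [0, 0]]
moved = [[50, 50], [0, 0]]
print(detect_motion([still, still, moved, moved]))  # → [2]
```

This also illustrates the cost note on the summary slide: the per-pixel work scales with resolution and frame rate, which is why each stream tends to need its own CPU.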
Mixed
[Diagram: capture hardware and cameras feeding several CPUs, which send combined data to the cinemagraph display wall]
The four categories can be mixed together to best support specific interactions. However, cost and effort are cumulative because there is almost no infrastructure overlap.
Summary
• Gizmos: simple, scalable, many possibilities from the same components. Cons: needs integration into props; lots of cabling; breakable. Low cost (scalable); medium effort (scalable).
• Eyes: could be cool and subtle; can cover a large space. Cons: needs lots of processing to extract smart triggers; optical cameras depend on lighting. High cost (1:1 CPU per camera); high effort.
• Mobile: familiarity; contact beyond the event. Cons: common. All costs in software/campaign; medium to high effort.
• Humans: flexible, open-ended, resilient. Cons: requires staffing, training, management. Low cost; low effort.
3. Plan
[Floor plan diagram: IR, RE, CC, and Mic sensors placed along the space; a custom or stock control surface; a CPU with mic input; an ADC with 6-12 channels. Starred (*) elements might also be accomplished with sensors.]

Legend:
• IR: infrared, motion, or similar
• RE: rotary encoder or similar
• CC: contact closure
• Mic: microphone
Common attributes:
• Heterogeneous systems with distinct boundaries
• Components joined by “network glue” (typically UDP/OSC in my case)
• Concept precedes solution
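To make the "network glue" concrete, here is a hand-rolled OSC message with one int32 argument, sent as a single UDP datagram. The address and port are placeholders, and a real project would normally use a full OSC library (e.g. python-osc) rather than building packets by hand.

```python
# A minimal sketch of UDP/OSC glue between components: per the OSC 1.0
# spec, strings are null-terminated and padded to 4-byte multiples, and
# numeric arguments are big-endian.
import socket
import struct

def osc_pad(b):
    """Null-terminate and pad a byte string to a 4-byte multiple."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, value):
    """Build an OSC packet: address, type-tag string, one int32."""
    return osc_pad(address.encode()) + osc_pad(b",i") + struct.pack(">i", value)

packet = osc_message("/trigger/zone1", 1)

# Sending is one UDP datagram (host and port are illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("127.0.0.1", 9000))
```

Because each message is a self-describing, connectionless datagram, the sensor computer, vision CPUs, and video system can be swapped out independently, which is what makes this glue suit heterogeneous systems with distinct boundaries.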
From: https://itp.nyu.edu/physcomp/
WHAT IS PHYSICAL COMPUTING?
Physical Computing is an approach to computer-human interaction design that starts by considering how humans express themselves physically. Computer interface design instruction often takes the computer hardware as a given (namely, that there is a keyboard, a screen, speakers, and a mouse, trackpad, or touchscreen) and concentrates on teaching the software necessary to design within those boundaries. In physical computing, we take the human body and its capabilities as the starting point, and attempt to design interfaces, both software and hardware, that can sense and respond to what humans can physically do.
Requires thinking about
• 1-bit: digital I/O, e.g. button, LED
• Many-bits: analog I/O, e.g. knob, fading LED
• Ways to transduce aspects of the physical world to varying electrical properties (typically changing resistance -> changing voltage)
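The resistance-to-voltage step above is usually a voltage divider: the sensing element and a fixed resistor split the supply voltage, and an ADC reads the result. A worked example, with illustrative component values (a 10k fixed resistor and a sensor swinging between 1k and 10k, like a photocell):

```python
# Worked example of "changing resistance -> changing voltage": a
# variable-resistance sensor in a voltage divider, read by an ADC.

def divider_voltage(v_in, r_fixed, r_sensor):
    """Voltage across the sensor leg: Vout = Vin * Rs / (Rf + Rs)."""
    return v_in * r_sensor / (r_fixed + r_sensor)

def adc_counts(v_out, v_ref=5.0, bits=10):
    """What a 10-bit ADC (0-1023), like an Arduino's, would report."""
    return round(v_out / v_ref * (2 ** bits - 1))

# Bright light: sensor ~1k -> low voltage; dark: ~10k -> mid voltage.
print(adc_counts(divider_voltage(5.0, 10_000, 1_000)))   # → 93
print(adc_counts(divider_voltage(5.0, 10_000, 10_000)))  # → 512
```

Choosing the fixed resistor near the middle of the sensor's range spreads the sensor's swing across as much of the ADC's range as possible.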
Handle messy “real-world” inputs
Derive meaning from input: what did user do vs. what did user want
Reconnect to meaningful output
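Handling messy input and deriving meaning often comes down to smoothing plus hysteresis: average recent samples, then use separate on/off thresholds so the state does not chatter when the signal hovers near a cutoff. A sketch with illustrative thresholds and window size:

```python
# Sketch of taming a messy "real-world" input: a moving average smooths
# a noisy analog reading, and hysteresis (separate on/off thresholds)
# turns "what the user did" (raw samples) into "what the user wanted"
# (a stable on/off state).
from collections import deque

def smooth_with_hysteresis(samples, window=3, on=600, off=400):
    """Return the on/off state after each sample."""
    buf, state, states = deque(maxlen=window), False, []
    for s in samples:
        buf.append(s)
        avg = sum(buf) / len(buf)
        if not state and avg > on:
            state = True              # turn on only above the high bar
        elif state and avg < off:
            state = False             # turn off only below the low bar
        states.append(state)
    return states

noisy = [100, 650, 100, 700, 700, 700, 500, 450, 300, 100]
print(smooth_with_hysteresis(noisy))
```

Note how the single 650 spike never turns the state on, and the slow fade through 500-450 never turns it off prematurely: the gap between the two thresholds absorbs the noise.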
Learn communication protocols like…
• Asynchronous Serial
• I2C
• SPI
…so you can connect to other “smart” components such as:
• Accelerometers
• GPS
• Display drivers
• just about anything else…
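With asynchronous serial, bytes arrive in arbitrary chunks, so the receiving side needs to buffer and reassemble complete messages. A minimal parser, assuming a made-up line format of `channel:value\n` (this format and the channel names are illustrative, not a standard):

```python
# Illustrative sketch of reading asynchronous serial: a microcontroller
# streams lines like b"A0:512\n", which may arrive split across reads.
# The parser buffers bytes and yields only complete (channel, value)
# readings.

class SerialLineParser:
    def __init__(self):
        self.buffer = b""

    def feed(self, chunk):
        """Add raw bytes; return any complete readings they finish."""
        self.buffer += chunk
        readings = []
        while b"\n" in self.buffer:
            line, self.buffer = self.buffer.split(b"\n", 1)
            channel, value = line.decode().split(":")
            readings.append((channel, int(value)))
        return readings

p = SerialLineParser()
print(p.feed(b"A0:512\nA1:10"))  # → [('A0', 512)]
print(p.feed(b"23\n"))           # → [('A1', 1023)]
```

The second call shows why buffering matters: the reading `A1:1023` was split across two chunks, and only the newline terminator tells the parser it is complete.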
Jonathan Han and Yuhang Jedy Chen, 2014
http://jedychen.com/category/equilibrium/
Two-person “synchronization” game