NYC UXPA: WUD 2013 - Peter Mitchell (Part 1)
Post on 16-Apr-2017
Peter P. Mitchell, Ph.D., CHFP, Ergo Research, LLC
I wonder if the People + Technology contract is changing. The analogy: John Locke and the social contract. In both, one gives up some of their individualism.
But let us talk about patient safety.
Who Should Be In Charge?
The concept: people can be shielded from harm through a well-conceived deal with technology. Alarms are one example.
These are good, but reactive. Thinking conceptually, what would be the next step?
I would like to offer an example.
An intrathecal infusion pump.
Are Patients Shielded from Harm?
The Pump is Surgically Implanted
Drug Flow Rate is Controlled By:
When delivered from the factory, there is saline in the tubing.
The user sets a very high flow rate, 1,000 units per hour, to get all of the saline out.
After approximately 20 minutes, the rate is changed to the patient dosage: 3 units per hour (never higher than 20).
A doctor was designated to change the flow, but all of his attention resources were on the surgery. He did not make the change.
Need to Drain Saline Solution
The result: the patient gets 1,000 units per hour, not 3.
In the Root Cause Analysis that was done, the focus was on the people part of the equation.
Wait, this is simple. We are making it harder than it is.
The machine knows 1,000 units per hour is wrong. Use your head (anthropomorphism) and do something.
Patient Receives 1,000 Units/Hr.
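The point about the machine "knowing" can be sketched as a simple software interlock. This is a hypothetical illustration only; the limits below come from the scenario in the talk (a 1,000 units/hr priming rate, roughly 20 minutes of priming, and a 20 units/hr ceiling on patient dosage), not from any real pump's specification:

```python
# Hypothetical sketch of a dose-rate interlock for an infusion pump.
# All constants are assumptions taken from the talk's scenario,
# not from a real device specification.

MAX_PATIENT_RATE = 20.0    # units/hr; patient dosage is never higher than this
PRIMING_WINDOW_MIN = 20.0  # minutes the high priming rate is expected to run

def check_rate(rate_units_per_hr, minutes_since_priming_started):
    """Return (ok, message). Flags a priming-level rate left on too long."""
    if rate_units_per_hr <= MAX_PATIENT_RATE:
        return True, "rate within therapeutic range"
    if minutes_since_priming_started <= PRIMING_WINDOW_MIN:
        return True, "high rate accepted: priming in progress"
    # A priming-level rate still set after the priming window is the
    # error in the talk: the machine "knows" 1,000 units/hr cannot be
    # a patient dose, so it should act rather than silently infuse.
    return False, "ALARM: priming-level rate still set after priming window"

print(check_rate(3, 45))     # normal patient dose -> accepted
print(check_rate(1000, 5))   # priming in progress -> accepted
print(check_rate(1000, 45))  # the scenario's error -> alarm
```

The design choice mirrors the talk's argument: the check is proactive (it anticipates the missed rate change) rather than reactive (alarming only after harm is observed).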
So our social contract, People + Machines, is changing.
Technology is saying: Not only can I address things that occurred previously, I am now going to look into the future.
What if medical devices knew of errors in the future?
It's a New Dawn: Anticipate the Future
Better Error Avoidance than Medicine
So this machine sees errors two cars ahead.
Medical devices that see into the future?
What is the implication for human factors and the user experience?
And what does this say about the social contract between machines & humans?
Is This Right?