A couple of weeks ago, I took part in one of CSC’s technology hangouts. The topic was the shift to digital health. Femi Ladega and Dan Hushon did a great job explaining the engagement between healthcare provider and patient on the care journey, and the digital transformation occurring in this exciting field.
After the hangout, the conversation moved onto CrowdChat, where some excellent questions were posed and everyone did their best to make worthwhile contributions. In a discussion about whether patients would choose a physician based on digital capabilities, such as analytics for diagnosis, I raised a hypothetical voice instruction: “Hey Siri, take me to the best hospital for leg fractures.”
This spurred a discussion about the effects of stress on the digital personality. In the example above, the user’s focus changes from a wide-angle lens on life to a myopic view of immediate actions and core human instincts: “How do I get my leg fixed? Who has the kids? Did I leave the burner on the hob?” Does that mean that, to remain useful, the device in your pocket needs to understand not only your typical day but also something very atypical, and adapt to this new context?
Today, there is insufficient open integration between healthcare systems, objective results analysis, digital transportation APIs and personal preferences to give a personalised answer to the question I posed, but this will come in the near future. The data is already there. It may not be joined up yet, and there are information security hoops to jump through, but it will get there.
At the Apple Worldwide Developers Conference, the keynote outlined the upcoming version of the iPhone operating system (iOS 10) in conjunction with the latest watchOS 3, which together take the first steps into the emergency context. A long press of the physical button initiates a call and sends messages to ICE (in case of emergency) contacts, removing the burden of remembering whether the local emergency number is 911, 112 or 999.
The idea that my phone might react differently if it determines that I am stressed is welcome in many ways. Perhaps the display would change to pastel shades, play concertos rather than drum and bass, or order me a decaf rather than an espresso. But in other ways it provides yet another metric on which judgments, correlations and potential discrimination can be founded, fuelling the continuing battle between features and privacy.
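To make the idea concrete, here is a minimal sketch of what such context adaptation might look like. Everything in it is hypothetical: the stress levels, the `Preferences` fields and the `adapt_preferences` function are illustrative assumptions, not any real device API.

```python
# Hypothetical sketch: a device adapting its preferences to a detected
# stress level. None of these names correspond to a real API.
from dataclasses import dataclass
from enum import Enum


class StressLevel(Enum):
    CALM = "calm"
    ELEVATED = "elevated"
    ACUTE = "acute"


@dataclass
class Preferences:
    theme: str
    music: str
    coffee_order: str


def adapt_preferences(stress: StressLevel) -> Preferences:
    """Return context-appropriate preferences for a given stress level."""
    if stress is StressLevel.CALM:
        # A typical day: the usual defaults.
        return Preferences("default", "drum and bass", "espresso")
    if stress is StressLevel.ELEVATED:
        # Soften the environment, as suggested above.
        return Preferences("pastel", "concertos", "decaf")
    # ACUTE: strip back to essentials, matching the "myopic view" of a crisis.
    return Preferences("high contrast", "silence", "water")


print(adapt_preferences(StressLevel.ELEVATED).music)  # concertos
```

The interesting design question is not the mapping itself but the input: inferring `StressLevel` reliably from sensor data is exactly the metric that raises the privacy concerns above.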
Scott Hanselman demonstrates a great mock-up of this kind of integration in this video.
Feedback is always welcome, so feel free to get in touch @glennaugustus.