User experience is key, and Danielle Reid presented a couple of things I found really intriguing. The first was the idea of being able to use a device (phone, watch, clock, etc.) without actually touching it. Google has been doing some work on this under Project Soli.
The possibilities with Soli are really intriguing, but how they take the next step is the biggest question for me. Events such as clicks are easy to define for use within an application; turning complex gestures into something similar for developers to work with is going to be tricky, because apps need the freedom to define their own interactions while users need to find them intuitive.
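One plausible route is to have the recognition layer classify raw motion into a small vocabulary of named gestures and surface them to apps as simple events, the way "click" works today. The sketch below illustrates that idea; the gesture names and dispatcher API are invented for illustration and are not Soli's actual API.

```python
# Hypothetical sketch: a recognition layer classifies raw motion into
# named gestures ("dial-turn", "pinch", ...) and apps subscribe to them
# just like click handlers. All names here are illustrative assumptions.

class GestureDispatcher:
    def __init__(self):
        self._handlers = {}

    def on(self, gesture, handler):
        """Register a handler for a named gesture, e.g. 'dial-turn'."""
        self._handlers.setdefault(gesture, []).append(handler)

    def dispatch(self, gesture, **details):
        """Called by the recognition layer once motion is classified."""
        for handler in self._handlers.get(gesture, []):
            handler(details)


dispatcher = GestureDispatcher()
dispatcher.on("dial-turn", lambda e: print(f"volume {e['delta']:+d}"))
dispatcher.dispatch("dial-turn", delta=2)  # prints "volume +2"
```

The appeal of this shape is that the hard recognition problem stays in the platform, and developers only ever see a small, stable set of event names.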
There was also a revisit of something that has been around a long time: text-to-speech. Whereas the output of such systems used to sound very robotic, it is increasingly lifelike. Combined with the growing trend of devices having a personality, this can make people feel they are talking to a real person; examples include Siri on the iPhone, Cortana on Windows, and Alexa from Amazon. This opens up many possibilities to engage people, such as the idea of a personalised radio station.
Taking this personalisation further, systems you can interact with to get jobs done, such as shopping, are inevitable. Facebook is known to be working on a digital assistant, so in the future you could just ask for whatever you want and it would turn up at your door. This is similar to Magic+, which currently uses a combination of humans and intelligent software, though with time I expect the intelligent software will take on more and more of the workload.