At the start of the year, Facebook CEO Mark Zuckerberg set himself the challenge of building an artificially intelligent (AI) assistant to help him manage his work and home life. Professor mc schraefel, Head of the University's Agents, Interaction and Complexity (AIC) Group, talks about how feasible this challenge is, and whether in a few years we could all have an AI assistant of our own.
Mark Zuckerberg’s challenge is to build an AI assistant ‘a bit like J.A.R.V.I.S. in Iron Man’, to help at home – using his voice to control his environment, face recognition to answer the door to friends, checking on his daughter when he’s not with her, and visualising data in virtual reality for his work. Much of this is possible now, although without the special effects. One of the less ‘sexy’ parts of this interaction is to have the data about us, our plans and the world around us, in a form that the AI assistant can coordinate and use. For example, most of us have a mobile phone with photos, calendars, contacts and GPS. When this information is pooled together, an agent – software that runs on a computer we access – could see that I have a meeting at a certain GPS location, which is close to a grocery store, and by using some simple rules could infer that there’s an opportunity for me to buy the milk that’s currently on my to-do list. It could then just ask me: “Do you want to do this now, or should I ask again later?”
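To make the 'simple rules' idea concrete, here is a minimal sketch of how such an inference could work, assuming the agent already has pooled access to a calendar event, a few known places and a to-do list. The data structures, coordinates and the 0.5 km threshold are all invented for illustration; this is not taken from any real assistant or app.

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

# Hypothetical records pooled from a phone: places we know about, a calendar event,
# and a plain to-do list. None of these names come from a real product.
@dataclass
class Place:
    name: str
    lat: float
    lon: float
    sells: set = field(default_factory=set)  # e.g. {"milk", "bread"}

@dataclass
class Event:
    title: str
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def suggest_errands(event, places, todo_items, max_km=0.5):
    """Simple rule: if a place near the meeting can satisfy a to-do item, offer to do it now."""
    prompts = []
    for place in places:
        if distance_km(event.lat, event.lon, place.lat, place.lon) > max_km:
            continue
        for item in todo_items:
            if item in place.sells:
                prompts.append(f"Your meeting '{event.title}' is near {place.name}. "
                               f"Buy {item} now, or should I ask again later?")
    return prompts

# Made-up example data: one meeting, one nearby shop, one item on the to-do list.
shop = Place("the grocery store", 50.9105, -1.4049, sells={"milk"})
meeting = Event("project meeting", 50.9097, -1.4044)
print(suggest_errands(meeting, [shop], ["milk"]))
```

The rule itself is deliberately trivial; the hard part, as discussed below, is getting data of this quality out of the separate apps and services that currently hold it.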
Right now, there is a wide range of reminder apps on the various app stores. For most of them, we have to enter our to-do list manually. Some apps are starting to use that manually entered data to detect patterns of behaviour over time and, based on those patterns, make suggestions more automatically. What these systems can observe at the moment is pretty weak, though. The science behind coordinating our data is largely there, but the infrastructure to support it is less well developed, and one of the biggest bottlenecks is the accessibility and quality of the data itself.
On the one hand, technically, data about us can be very messy; extracting meaning from free text is a big issue. On the other hand, we capture very organised data in our address books, phones, photos, GPS locations, calendars, heart rate monitors, steps walked, purchases made, movies watched, and so on; when brought together, it provides a raft of information about us. Even the way we surf the Web, which cookies already make visible to advertisers, reveals a lot about us.
The big question is not really when we will have Zuckerberg's vision – most of what he wants is available in some form right now – but what it will cost us.
Whose data is it?
Data on our phones about where we are, who we see and what we are like is all packaged into apps that are isolated from each other, and often a third party holds that data; it is not easily accessible to us. You can't, for instance, take a branded step activity monitor (e.g. a Fitbit) and upload your steps data straight to your own hard drive; you have to give it to the company first and go through them to get it back.
There are ways to connect this data together, so an AI assistant that prompts you to buy milk from the grocery store is quite feasible. However, these approaches work by letting the many third parties who offer the various services have access to our data.
Take Mark Zuckerberg's desire to monitor his child. At the moment you can watch your baby over a webcam-based monitor connected to a service. But there is a risk of this being hacked (as happened recently in the news), because your data is not just going from a webcam in your house to your phone; it is all mediated through a third party. If that company is sold, all the images it has collected for you become part of what it can sell on: check those terms and conditions of use.
There are alternatives to this approach. We can create services that provide versions of our data as 'open' data – anonymised data that is available for re-use. This could create valuable resources for research into how people engage in physical activity, so that we could make our cities, communities and workplaces healthier.
Open data is something that we as a university are very passionate about; the key will be for companies creating apps to follow suit and allow their data to be shared, so that the AI assistants of the future can help us individually and socially.
Designing the perfect AI assistant
Here in the AIC Group, our research focus is on the design of systems to help improve people's performance and quality of life. Working with Health Sciences, we are currently looking at the challenge of designing the perfect AI assistant for frail elderly people who need help and support around the home. We are looking at the interaction between the person and the computer and how it can help with their health and wellbeing. Rather than thinking about the average 24-year-old with a smartphone, I think it is useful to look at the more demanding requirements of elderly people, who may have mobility, sensory or cognitive impairments, so that we push ourselves to solve harder problems whose solutions can benefit everyone.
For me, the ideal AI assistant is not a single robot or app, but an entire environment working to support a person, with their input determining the level of support they need. For example, it could notice that the elderly person hasn't visited the kitchen all day and check whether they need help and would like something to eat. It could also coach them through exercises to help them get fitter from the comfort of their favourite armchair, and make suggestions based on what they have already done, such as advising a break if they have over-exerted themselves. In other words, it could take a holistic view of the person: their eating, exercise and sleep, and how these interact to give them a better quality of life. This environment-level coordination is all part of the 'internet of things', a field we are pushing forward here at Southampton.
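As a purely illustrative sketch of what such environment-level rules might look like, the fragment below checks a made-up motion-sensor log and an exercise total against two simple thresholds. The room names, times and the 18:00 and 30-minute cut-offs are assumptions for the example, not details of the AIC Group's project.

```python
from datetime import datetime

# Hypothetical motion-sensor log: (timestamp, room) entries from around the home.
def rooms_visited_since(sensor_log, since):
    return {room for (when, room) in sensor_log if when >= since}

def wellbeing_prompts(sensor_log, exercise_minutes_today, now):
    """Two illustrative rules of the kind an ambient assistant might apply."""
    prompts = []

    # Rule 1: no kitchen activity by early evening ->
    # gently check whether the person needs help and wants to eat.
    start_of_day = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if now.hour >= 18 and "kitchen" not in rooms_visited_since(sensor_log, start_of_day):
        prompts.append("You haven't been to the kitchen today. Would you like a hand with a meal?")

    # Rule 2: suggest a break if they have already done plenty of exercise,
    # otherwise offer some armchair exercises.
    if exercise_minutes_today >= 30:
        prompts.append("You've already done your exercises today. How about a rest?")
    else:
        prompts.append("Fancy a few exercises from your armchair?")

    return prompts

# Made-up example: the person has only moved between the lounge and the bedroom today.
log = [(datetime(2016, 5, 12, 9, 30), "lounge"), (datetime(2016, 5, 12, 14, 0), "bedroom")]
print(wellbeing_prompts(log, exercise_minutes_today=10, now=datetime(2016, 5, 12, 19, 0)))
```

In a real deployment, rules like these would be shaped and tuned with health professionals and with the person themselves, rather than hard-coded, which is exactly where the person's input on the level of support comes in.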
This project will bring together many different areas of expertise we have here at the University, including health sciences, sensors, machine learning, human factors and user interfaces. What we are hoping to reach is a gestalt, where the whole is far greater than the sum of its parts. The overall aim is to create a system that could be rolled out to support elderly people in their homes, addressing the fact that there are not enough care workers to help everyone who needs this kind of support.
So could we all have an AI assistant in our homes in the next 10 years? If the funding is there from research councils to pursue this research, and the benefits are recognised, then yes, we could.
For more information about the AIC Group's work, visit www.aic.ecs.soton.ac.uk, or follow mc on Twitter at @mcphoo.