In short, current-gen digital assistants have a social IQ of 0. They’re increasingly aware of every detail of our digital and physical lives, able to understand what we’re doing and to provide contextual help and relevant information without prompting, yet they’re completely blind to the social context in which our activities occur. They’ll eagerly offer up the most private details of your life to anyone who manages to access your device(s), legitimately or otherwise.
Sometimes the result is tragicomic, like Windows 10 helpfully turning someone’s porn stash into a screensaver, but it’s not difficult to imagine scenarios where it could be dangerous: turning scans of passports or other personal documents into a screensaver, say, or exposing your contact list to a competitor or stalker. In general, it isn’t safe right now to hand your computer or phone to anyone you don’t completely trust, because your device’s concept of “you” doesn’t really exist. There’s only “what’s typically done on this device,” and current-gen digital assistants will helpfully remind, search, and suggest based on what’s typically done, even when the results are embarrassing, awkward, or dangerous.
Now that advances in machine learning are finally making digital assistants viable, we seriously need to start thinking about social context. Our devices are increasingly full of sensors that can detect bits and pieces of the real world, but there’s little effort right now to build software that incorporates that information into a social context; that is, software that can:
- understand the difference between an audience (physical or virtual) of “me,” “me and my partner,” “me and my friends,” “me and my colleagues,” “someone who isn’t me,” and so on, and
- determine whether a piece of information (documents, applications, history, etc.) is appropriate for that audience (the sketch after this list makes these two capabilities concrete).
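What might those two capabilities look like in code? Here’s a minimal, purely hypothetical sketch in Python. To be clear, everything in it is invented for illustration: the audience classes, the sensitivity labels, the visibility table, and the sensor signals are assumptions, and the classification stub stands in for what would really be a hard inference problem.

```python
from enum import Enum, auto
from dataclasses import dataclass

class Audience(Enum):
    ME = auto()               # alone with the device
    ME_AND_PARTNER = auto()
    ME_AND_FRIENDS = auto()
    ME_AND_COLLEAGUES = auto()
    NOT_ME = auto()           # someone else holding my device

class Sensitivity(Enum):
    PUBLIC = auto()           # e.g. a news article
    PERSONAL = auto()         # e.g. vacation photos
    PRIVATE = auto()          # e.g. banking details, passport scans

@dataclass
class Item:
    name: str
    sensitivity: Sensitivity

# Which sensitivity levels each audience may see. Getting this mapping
# right is the hard, unsolved part; the table just makes the idea concrete.
VISIBILITY = {
    Audience.ME:                {Sensitivity.PUBLIC, Sensitivity.PERSONAL, Sensitivity.PRIVATE},
    Audience.ME_AND_PARTNER:    {Sensitivity.PUBLIC, Sensitivity.PERSONAL},
    Audience.ME_AND_FRIENDS:    {Sensitivity.PUBLIC, Sensitivity.PERSONAL},
    Audience.ME_AND_COLLEAGUES: {Sensitivity.PUBLIC},
    Audience.NOT_ME:            {Sensitivity.PUBLIC},
}

def appropriate_for(item: Item, audience: Audience) -> bool:
    """Capability 2: is this piece of information suitable for this audience?"""
    return item.sensitivity in VISIBILITY[audience]

def current_audience(faces_detected: int, owner_present: bool,
                     screensharing: bool) -> Audience:
    """Capability 1, crudely: infer the audience from device signals.

    A real system would fuse camera, microphone, calendar, and network
    signals; this stub only illustrates the shape of the decision.
    """
    if screensharing:
        return Audience.ME_AND_COLLEAGUES
    if not owner_present:
        return Audience.NOT_ME
    if faces_detected > 1:
        return Audience.ME_AND_FRIENDS
    return Audience.ME

# Example: the screensaver should skip private items when anyone else is around.
if __name__ == "__main__":
    passport = Item("passport-scan.png", Sensitivity.PRIVATE)
    audience = current_audience(faces_detected=2, owner_present=True,
                                screensharing=False)
    print(appropriate_for(passport, audience))  # False
```

Even this toy version shows where the real difficulty lies: not in the lookup, but in reliably inferring the audience and in labeling information by sensitivity in the first place.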
Until operating systems, applications, and assistants are smart enough to understand that they shouldn’t display personal contact lists during a Skype screenshare, that they shouldn’t passively include stashed porn in screensavers, that they should hide banking information or travel schedules if a stranger enters the room while those are visible on screen, and a thousand other scenarios that require some sense of social context, they’re potentially dangerous.
Right now, these primitive AIs all operate as if every device were a private, personal device that is never accessible or visible to anyone else, and until they gain some degree of social intelligence, I won’t be using them. I’m cautiously optimistic that we’ll resolve these issues in the next decade or two, and when we do I’ll happily jump on board; but getting there requires acknowledging that this is an issue in the first place, one that has so far had very little high-profile discussion and needs much more.