Doreen Lorenzo was having dinner with friends a couple weeks ago when the topic turned to gadgets like the Amazon Echo. They said two things that stuck with her: “Alexa is the first person we talk to in the morning,” and “She’s very accurate.”
As director of UT Austin’s Center for Integrated Design, Lorenzo (pictured left) can’t help unpacking statements like this about intelligent interfaces in consumer tech, including Siri, Google Now and Microsoft’s Cortana.
The fact that her friends interact with Alexa during one of the most disorienting and vulnerable moments of their day, combined with the personal pronoun they use to describe her, suggests that the general public is getting cozier with artificial intelligence than ever before.
Anecdotally, at least. Elsewhere, there’s no shortage of handwringing over the prospect of machines rising up to take our jobs. And with the presence of IBM’s Watson, machine learning companies and Lorenzo’s program at the university, Austin is positioned to either be a source of light in a bright future, or — depending on your perspective — ground zero for our annihilation.
Mark Rolston predicts we’ll end up somewhere in the middle, with machines making decisions and humans doing the labor.
Sounds crazy, right?
“Uber is already there,” said Rolston (pictured right), explaining how the company’s navigation app discourages drivers from deviating from its directions. “On Amazon’s homepage, a very human act of laying out information isn’t done by a product catalog manager, but by algorithms that choose the products I see. As we look ahead, the choice of content and the taxonomy of where it sits and ultimately the layout of that content — the artistic decisions that become my UX — are being done by software.”
Rolston also explained how machines are better suited to test and build, iterating more rapidly and on more feedback than humans can accommodate.
This point hits home for Rolston, whose company, Argodesign, develops premium UI for startups in exchange for equity instead of a typical fee structure. In a recent post on Rolston’s blog for Fast Co., he suggested AI can even be applied to creative work like his own.
“While we as humans are willing to try out a range of ideas, a computer can try out thousands of ideas,” he said. “We have a sense that its judgment is probably crap, but thousands of ideas tried out against hundreds of millions of people means that any issues of judgment get quickly tried out against those reactions.”
Rolston predicts that, before long, this may be considered the most practical and efficient approach to design work.
“Imagine a computer redesigning Facebook. It may make it terrible at first, but with feedback it could quickly adapt it,” he said. “There may be some shouting from people looking out for their well-being, but that doesn’t make this wrong. I’m not making that judgment call. If you asked me, I’d say it’s wrong.”
Doctors in your pocket
Elsewhere in Austin, the combination of AI and cloud infrastructure is making inroads in healthcare and research, crunching data on a large scale to learn how patients really live and to suggest behaviors that improve it.
Take CognitiveScale, for example, a client of Rolston’s that is using cloud-based machine learning to help teens with type 1 diabetes adjust to adult life with a companion mobile app that learns and adapts to their habits and lifestyle in real time.
Or UnaliWear, the Austin startup behind the Kanega smart watch for seniors (pictured above) that processes lifestyle data in the cloud while users sleep so it can give them intelligent help when they need it most, like offering directions home when it detects wandering.
“We’ve got so much data today, we’re drowning in it,” said CEO Jean Anne Booth (pictured below right). “But the real value of the data is what it means, and AI today doesn’t tell you that.”
Booth said healthcare AI is an especially tough nut to crack, because great tech introduces regulatory hurdles.
“If you’re too smart in understanding what the data means, then the FDA will classify it as diagnostic, which means the application must go through the FDA trial process,” she said. “That takes time and money, both of which are in short supply for startups.”
Health tech data is also challenged by iffy input at the source. According to Booth, sensor data from consumer devices can be more than 50 percent inaccurate, which adds another layer of complexity to doing anything meaningful with health data.
Robot co-workers are already here
Of course, not every AI innovation is so complicated. Last fall, we covered the launch of Howdy, an Austin-built bot that lives in your team’s Slack channels and intelligently coordinates everything from status meetings to ordering lunch, all in the form of casual chit-chat.
At the time, Howdy co-founder Ben Brown (pictured below) said the trend of interacting with software through messaging is growing so fast, it’s given way to MXD, or messaging experience design.
“Ask yourself: What are the things in my day that reliably waste my time? What do I do over and over again?” he asked in a post on Medium. “Can it be automated?”
Increasingly, the answer seems to be yes.
Just as lightweight health tech applications on mobile phones and wearables are benefiting from the convergence of cloud infrastructure and AI, messaging bots are getting smarter by mixing AI with natural language processing, another technology contributing to AI’s momentum.
That requires a degree of trust on the user’s part. As Rolston puts it, technology requires less user input as it grows smarter. Consequently, Rolston and his peers are designing interfaces with fewer controls.
“With a black box — those algorithms being inside the computer and us not touching them — we’re designing the few knobs that remain,” he said. “It’s difficult to engender trust because there are so few controls left. You get fewer answers back.”
Delivering on the dream
As most iOS users know (and as even Microsoft CEO Satya Nadella can attest, after a live demonstration of Cortana’s voice commands backfired on him in front of an audience), streamlined input can make assistants like Siri pretty hit and miss.
But that’s the thing about machine learning: It’s always improving. And like Lorenzo’s friends said of their positive daily experiences with their Amazon Echo, AI isn’t a threat on the horizon as much as it’s a present-day companion — whether we see it or not.
The biggest question remaining is how to best seize the opportunities at hand, and who will take the lead.
Lorenzo is betting on Austin.
“One thing Austin brings to the table is a tech legacy of helping advance the state of AI, and also making it more human-centric through all the creative and design prowess here,” she said. “Multidisciplinary talent draws a lot of people here.”
Rolston, ever the pragmatist, thinks it may get worse before it gets better.
“These companies are at risk of the inevitable trough of disillusionment with AI, which I’m sure will happen,” Rolston said. “[But] that’s only because the promise is so great. It’s like someone promising you a red sun and you only getting a yellow one.
“But it’s still humanity shifting in a big, notable way. So yeah, I think Austin has quite a bit to say about it. I don’t have a market-side lens on whether it’s ahead of anybody, but I know it’s notable, for sure. It’s kept us busy as hell.”