Zhou Yu recognized for work with multimodal analysis in AI
Assistant Professor Zhou Yu of the Department of Computer Science was named one of Forbes’ “30 under 30” in the field of science for 2018. Yu holds a doctorate from the Language Technologies Institute at Carnegie Mellon University, as well as undergraduate degrees in computer science and English, and her research focuses on the intersection of language and machine learning. Beyond receiving the accolade, Yu is excited about the possibilities it will bring her.
“It is very hard for me to get exposure to the general public, not only in my academic community,” Yu said. “So it opens up different opportunities for me to collaborate with other people so they can hear about my research.”
She develops intelligent interactive systems that are able to communicate meaningfully beyond simple commands. Unlike personal assistants such as Apple’s Siri, which operate on a natural language interface where linguistic data like verbs and clauses function as controls, Yu’s machine-learning work uses multimodal analyses of language. This means that data is pulled not only from linguistic information but also from spatial and visual cues. In other words, multimodal analysis looks at the meta features of language, such as nonverbal communication, to create a more sophisticated chatbot. Equipped with all this data, these multimodal chatbots are able to do more than communicate.
“My work is very interdisciplinary within the AI field itself, and it also has a wide application field,” said Yu. “I have worked on different applications; for example, I collaborated with Educational Testing Service to work on automatic systems that could help people to improve their language conversational abilities, like learning English.”
She also collaborated with researchers from the University of Southern California during her Ph.D. on an interactive chatbot that could predict whether the person conversing with the AI had depression or PTSD. The system would look at linguistic cues, facial affect and rate of speech to make its prediction. The system is built around the importance not only of what you say, but of how you say it, Yu explained. Many people with depression have what Yu calls “a cold flow” of speech, or a choppy voice, and the machine would pick up on this and quantify it for its prediction.
Yu is currently working on a project for Amazon, for which she received a $100,000 grant to create a chatbot for the Echo platform. The work is part of a challenge to develop a social chatbot that can maintain a conversation for the longest period of time possible. She is also working on how to evolve robots by combining natural language processing for dialogue with vision.
Yu is very excited about the current state of AI development overall, and feels very hopeful about its future. UC Davis students share her enthusiasm.
“I think that the position that AI is currently in is astounding, and as we’re converging into an era with an emphasis of human thought coupled with machine learning, I think there’s more of a likelihood of more and more creations passing the Turing test,” said Amber Kumar, a fourth-year design major who is minoring in computer science and communication.
The Turing test was developed by the computer scientist Alan Turing in the 1950s and is essentially a metric to determine if a computer exhibits human-like intelligence.
“In regards to Zhou Yu’s work, I believe that her focus on linguistics is especially fascinating and important, as the proper use of words can create a setting for the user to feel more at ease with the technology,” said Kumar.
Yet with strides in artificial intelligence shortening the gap between humans and robots, many feel uneasy. There is even an aesthetic principle to describe this feeling: the uncanny valley. It theorizes that there is a point in robotic development where the human-likeness of robots, an uncanny similarity, evokes a feeling of disgust in the human viewer. However, this disgust, which can lead to a negative understanding of robots, might not say much about whether the development of artificial intelligence is good or bad.
“I don’t think that human apprehension surrounding human-like robots tells us anything about whether these developments are good or bad,” said Zoe Drayson, an assistant professor in philosophy and an affiliated faculty member with the Center for Mind and Brain. “There is lots of research on human disgust reactions which suggests that what we find distasteful doesn’t track any moral properties of the world, and I’d be inclined to think the same is true of our tendency to be ‘weirded out’ or feel uneasy around human-like automata.”
As for Yu, her research is driven by a desire to develop algorithms that directly benefit human well-being. She wishes people weren’t so suspicious of artificial intelligence.
“It’s more about trying to help people, pushing humanity forward instead of saying robots are going to take over humanity,” Yu said. “That’s totally not what we wanted in the first place.”
Written by: Matt Marcure—firstname.lastname@example.org