The Future of Deep Learning Is Unsupervised, AI Pioneers Say
“How do we learn with fewer labels, fewer samples or fewer trials?” asked Mr. LeCun, speaking at an event organized by the Association for the Advancement of Artificial Intelligence. “My suggestion on it…is to use self-supervised learning, which is basically learning to fill in the blanks. Basically it’s the idea of learning to represent the world before learning a task, and this is what babies do.”
Mr. LeCun was speaking during a session with Yoshua Bengio, of the University of Montreal and the Montreal Institute for Learning Algorithms, and Geoffrey Hinton, of Alphabet Inc.’s Google, the Vector Institute and the University of Toronto. The three shared the 2018 A.M. Turing Award for their work advancing the field of deep learning, a technique that powers image-recognition systems, natural-language understanding and more. The Turing Award, bestowed annually by the Association for Computing Machinery, recognizes lasting, major contributions to the field and comes with a $1 million prize.
Self-supervised machines will be able to better handle new situations they encounter, Mr. Bengio said. That would be comparable to the way people figure out how to drive in an area where they’ve never been or when construction activity forces them to change a familiar route.
Self-supervised learning works well when it is applied to natural-language problems, such as filling in missing words in a sentence. But it doesn’t work so well for predicting the next frame of a video, Mr. LeCun said. Humans can predict what’s going to happen next in a video of a ball dropping, but machines struggle with that sort of intelligence, he said.
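To make the “fill in the blanks” idea concrete, here is a minimal illustrative sketch, not anything the speakers described: a toy predictor that masks a word and guesses it from co-occurrence statistics in a tiny made-up corpus. Real self-supervised language models learn far richer representations, but the training signal is the same kind of blank-filling.

```python
# Toy illustration of fill-in-the-blank self-supervision.
# The corpus and the bigram-context counting scheme are assumptions
# for demonstration only, not the speakers' method.
from collections import Counter, defaultdict

corpus = [
    "the ball drops to the ground",
    "the ball drops fast",
    "the ball bounces off the ground",
    "the cat drops the ball",
]

# Count which words appear between each (left neighbor, right neighbor) pair.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context = (words[i - 1], words[i + 1])
        context_counts[context][words[i]] += 1

def fill_in_blank(left, right):
    """Predict the masked word from its immediate neighbors."""
    candidates = context_counts.get((left, right))
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# "the [MASK] drops" -> predicted from corpus statistics.
print(fill_in_blank("the", "drops"))  # prints "ball"
```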
Efforts to get to the next level in AI face three challenges, Mr. LeCun said. Researchers want to develop AI that can learn with fewer labels, reason more like people and “plan complex action sequences.”