Devil’s in the details in historic AI debate
Marcus started with a recap of his career, noting that, as a cognitive scientist, he had done the first big-data work through early studies of childhood learning. What struck him about neural nets, all the way back in the 1990s, was that "classic connectionism couldn't learn universals outside training space." Such failures of neural nets, he said, are an argument for "richer innate priors": capacities that seem to be built into organisms through evolution, not merely learned.
Marcus clarified that he has never said deep learning should be abandoned, only that it should be "re-contextualized as a tool among many." He said he and Bengio were at one time far apart in their thinking because Bengio "relied too heavily on black boxes," that is, neural nets. "Recently he has taken a sharp turn to positions I've long argued for," said Marcus, referring to the notion of "hybrid" AI systems that would combine machine learning with some form of symbol manipulation.
Bengio summarized his main interest these days as how neural nets can respond to data that is "out of distribution," or OOD: the problem of generalizing beyond the training data. He referred to a recently published paper, accepted at next year's ICLR conference, "A meta-transfer objective for learning to disentangle causal mechanisms."
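To make the OOD problem concrete, here is a minimal sketch, not drawn from the article or the paper: a tiny tanh network in NumPy fits the rule y = 2x well on the interval it was trained on, but its bounded activations saturate far outside that interval, so it fails to extrapolate, the kind of behavior Marcus flagged in 1990s connectionism and that Bengio's OOD work targets.

```python
import numpy as np

# Toy illustration (not from the article or the paper): a small tanh
# network learns y = 2x well on its training range, but its bounded
# activations saturate outside that range, so it extrapolates poorly.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1))  # training inputs, all in [-1, 1]
y = 2.0 * x                                # target: a simple universal rule

# one hidden layer of 16 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)               # forward pass
    err = (h @ W2 + b2) - y                # prediction error
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(v):
    return (np.tanh(np.array([[v]]) @ W1 + b1) @ W2 + b2).item()

print(predict(0.5))   # in-distribution: close to 1.0
print(predict(5.0))   # out-of-distribution: far from 10.0 (tanh saturates)
```

A plain linear model, whose inductive bias happens to match the rule, would extrapolate it exactly; that gap is the crux of both Marcus's "richer innate priors" argument and Bengio's interest in learning mechanisms that generalize out of distribution.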
Bengio then quickly articulated what might be a manifesto of how he views all AI problems. Deep learning, the field with which he is most associated, is "not a particular architecture or a particular training procedure," said Bengio. "It's something that's moving, it's more a philosophy as we add more principles to our toolbox."
See the full story here: https://www.zdnet.com/article/devils-in-the-details-in-bengio-marcus-ai-debate/