Anthropic’s chief scientist, Jared Kaplan, is making grave predictions about humanity’s future with AI. ...
Such a point is fast approaching, he says in a new interview with The Guardian. As soon as 2027, and by 2030 at the latest, Kaplan predicts, humanity will have to decide whether to take the “ultimate risk” of letting AI models train themselves. The ensuing “intelligence explosion” could elevate the technology to new heights, birthing a so-called artificial general intelligence (AGI) that equals or surpasses human intellect and benefits humankind with all sorts of scientific and medical advancements. Or it could allow AI’s power to snowball beyond our control, leaving us at the mercy of its whims. ...
Kaplan conceded that it’s possible AI’s capabilities could stagnate. “Maybe the best AI ever is the AI that we have right now,” he mused. “But we really don’t think that’s the case. We think it’s going to keep getting better.”
See the full story here: https://futurism.com/artificial-intelligence/anthropic-ai-scientist-doom