philip lelyveld The world of entertainment technology


A reflection on artificial intelligence singularity

Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But what about when our computers become as smart as, or smarter than, us?

Jalsenjak takes us through the philosophical-anthropological view of life and how it applies to AI systems that can evolve through their own manipulations. He argues that “thinking machines” will emerge when AI develops its own version of “life,” and leaves us with some food for thought about the more obscure and vague aspects of the future of artificial intelligence.

This is not a discussion about how and when we’ll reach AGI. ... Jalsenjak instead speculates about how the identity of AI (and humans) will be defined when we actually get there, whether that happens tomorrow or in a century.

According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today’s goal-oriented AI. The fact that the AI is not “aware” of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalsenjak says, because we consider plants and trees as being alive even though they too do not have that sense of awareness.

Another key factor for being considered alive is a being’s ability to repair and improve itself, to the degree that its organism allows.

An ideal self-improving AI, however, would be one that could create totally new algorithms that would bring fundamental improvements. This is called “recursive self-improvement” and would lead to an endless and accelerating cycle of ever-smarter AI.

Recursive self-improvement, however, will give AI the “possibility to replace the algorithm that is being used altogether,” Jalsenjak notes. “This last point is what is needed for the singularity to occur.”
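The accelerating dynamic described above can be illustrated with a toy numerical sketch. This is a hypothetical illustration with made-up numbers, not a model of any real system: each generation the AI improves its capability, and it also improves the rate at which it improves, so growth compounds on itself.

```python
# Toy sketch of recursive self-improvement (hypothetical values):
# each generation improves the capability score AND the improvement
# factor itself, so the growth between generations keeps accelerating.

def recursive_self_improvement(generations: int) -> list[float]:
    capability = 1.0    # arbitrary starting capability score
    improvement = 1.1   # initial per-generation improvement factor
    history = [capability]
    for _ in range(generations):
        capability *= improvement  # the system improves itself...
        improvement *= 1.05        # ...and improves how it improves
        history.append(capability)
    return history

scores = recursive_self_improvement(10)
# The ratio between consecutive scores grows every generation,
# i.e. the cycle of improvement accelerates rather than flattens.
print(scores)
```

The point of the sketch is only the shape of the curve: because the improvement factor is itself being improved, the generation-to-generation gains get larger every cycle, which is the "endless and accelerating" loop the singularity argument rests on.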

See the full story here:
