... Fish’s report ties into the idea of artificial general intelligence (AGI), a goal of AI reaching human-level abilities: thinking, learning and even potentially feeling. OpenAI’s search for an “AI Welfare Specialist” signals a growing concern: if AI ever becomes conscious, how should we treat it? ...
The discussion centers on a question that has long challenged philosophers and computer scientists alike: what defines consciousness? Smolinski points to philosophical perspectives that define consciousness as the ability to independently affect oneself and the surrounding world. Current AI systems, he notes, operate purely by reacting to external inputs rather than through independent agency. ...
Researchers exploring AI consciousness propose borrowing tools from animal cognition studies to hunt for signs of machine awareness, while conceding that definitively proving artificial consciousness remains a formidable challenge. ...
See the full story here: https://www.ibm.com/think/news/ai-welfare-debate