How to Avoid the Ethical Nightmares of Emerging Technology
... In this article, I will try to convince you of three things: First, that businesses need to explicitly identify the risks posed by these new technologies as ethical risks or, better still, as potential ethical nightmares. ...
Second, that by virtue of how these technologies work — what makes them tick — the likelihood of realizing ethical and reputational risks has massively increased.
Third, that business leaders are ultimately responsible for this work, not technologists, data scientists, engineers, coders, or mathematicians. Senior executives are the ones who determine what gets created, how it gets created, and how carefully or recklessly it is deployed and monitored.
These technologies introduce daunting possibilities, but the challenge of facing them isn’t that complicated: Leaders need to articulate their worst-case scenarios — their ethical nightmares — and explain how they will prevent them. The first step is to get comfortable talking about ethics....
Ethical challenges don’t disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively. ...
Third, the focus on identifying and pursuing “responsible AI” gives companies a vague goal with vague milestones. ...
In short, if you know what your ethical nightmares are, then you know what ethical failure looks like. ...
Quantum computers throw gasoline on a problem we see in machine learning: the problem of unexplainable, or black box, AI. ...
Right now, data scientists can offer explanations of an AI’s outputs that are simplified representations of what’s actually going on. But at some point, simplification becomes distortion. ...
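To make "simplified representation" concrete, here is a minimal sketch of one common approach: training a shallow, readable surrogate model to mimic a black-box model's predictions. The libraries, dataset, and parameters below are my own illustrative choices, not anything specified in the article; the point is only that the surrogate is a simplification, and the gap between its story and the black box's actual behavior is exactly where simplification can shade into distortion.

```python
# Minimal sketch (illustrative assumptions, not the article's method):
# fit a shallow decision tree as a readable surrogate for a black-box model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# A stand-in for the "black box": a model whose internals are hard to read.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The "explanation": a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simplified story agrees with the real model.
# Whatever this number misses is where simplification becomes distortion.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.1%}")
```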
That leads to a litany of ethical questions: Under what conditions can we trust the outputs of a (quantum) black box model? What are the appropriate benchmarks for performance? What do we do if the system appears to be broken or is acting very strangely? ...
These claims support a conclusion: Organizations that leverage digital technologies need to address ethical nightmares before they hurt people and brands. I call this the “ethical nightmare challenge.” To overcome it, companies need to create an enterprise-wide digital ethical risk program. The first part of the program — what I call the content side — asks: What are the ethical nightmares we’re trying to avoid, and what are their potential sources? The second part of the program — what I call the structure side — answers the question: How do we systematically and comprehensively ensure those nightmares don’t become a reality? ...
Notice that articulating nightmares means naming details and consequences. The more specific you can get —... — the easier it will be to build the appropriate structure to control for these things. ...
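One lightweight way to capture that specificity is to record each nightmare as a structured risk-register entry, so the "structure side" of the program has named details, consequences, sources, and owners to build controls around. The sketch below is my own illustration of what such an entry might look like, not a framework prescribed in the article.

```python
# Minimal sketch (my own illustration, not the article's framework) of a
# structured "ethical nightmare" entry for an enterprise risk register.
from dataclasses import dataclass, field

@dataclass
class EthicalNightmare:
    name: str                      # short label for the worst-case scenario
    technology: str                # the system or model that could cause it
    affected_parties: list[str]    # who gets hurt if it happens
    consequences: list[str]        # concrete harms to people and to the brand
    potential_sources: list[str]   # where in the pipeline it could originate
    owner: str                     # the executive accountable for preventing it
    controls: list[str] = field(default_factory=list)  # mitigations in place

# Example entry: the more specific the fields, the easier it is to build
# structure (reviews, monitoring, escalation paths) around them.
nightmare = EthicalNightmare(
    name="Biased credit-risk model",
    technology="ML credit-scoring system",
    affected_parties=["loan applicants in protected groups"],
    consequences=["unlawful discrimination", "regulatory fines", "brand damage"],
    potential_sources=["unrepresentative training data", "proxy features"],
    owner="Chief Risk Officer",
)
print(nightmare.name, "- owned by", nightmare.owner)
```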
See the full article here: https://hbr.org/2023/05/how-to-avoid-the-ethical-nightmares-of-emerging-technology