Trustworthy AI – A Market-Driven Approach
A market-driven approach to achieving more dependable AI performance and guardrails through shared responsibility and liability
For more than 30 years, software companies have asked users to agree to a license before using their products. Most people click “accept” without reading the fine print. These licenses usually protect the software company from being held responsible if something goes wrong. For traditional software, this arrangement mostly worked. That software was predictable: if you entered the same information, you got the same result every time. Experts call this “deterministic” behavior.
Artificial intelligence is different, but it is being sold under the same old rules.
AI systems are not predictable in the same way. They are “non-deterministic,” meaning the same question can produce different answers at different times. This happens because generative models sample from a range of plausible responses rather than computing one fixed answer, and because the data and models they rely on change, grow, or are interpreted differently each time the system is queried.
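To make the contrast with deterministic software concrete, here is a toy Python sketch. Everything in it is invented for illustration; the stand-in “model” simply samples from a fixed list, the way a generative model samples from a distribution of plausible responses.

```python
import random

# Toy illustration of "non-deterministic" behavior. The stand-in model
# below is hypothetical: like a sampled generative model, it can return
# a different completion for the identical prompt on each call. The
# candidate answers are invented placeholders, not real model output.

def toy_model(prompt: str) -> str:
    candidates = [
        "Revenue grew 4% last quarter.",
        "Revenue grew roughly 5% last quarter.",
        "Revenue figures for last quarter are not yet available.",
    ]
    return random.choice(candidates)

if __name__ == "__main__":
    question = "How did revenue change last quarter?"
    for _ in range(3):
        print(toy_model(question))
    # Unlike traditional, deterministic software, the same question
    # can print a different answer on each run.
```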
We already know that AI can cause real harm. It can present false information as if it were reliable. It can produce results tuned to please you rather than give a complete response. Its output can damage a company’s reputation, hurt customers, or create financial losses in unexpected ways.
Despite this, AI companies often rely on the same click-through licenses that shield them from responsibility. Many users accept these terms without realizing how different AI really is.
As large companies begin using AI at scale, this imbalance needs to be corrected. Responsibility for AI-related harm should be shared between the companies that build AI systems and the companies that use them.
The insurance industry is well positioned to make this happen through a practical, market-based approach—one that reduces risk, lowers costs, and creates new business opportunities for insurers.
Here is how such a system could work.
AI developers and the companies that use their systems already work under two-party contracts, whether a click-through license or a customized equivalent. Under this approach, the two parties would enter into a modified version of that contract. The modified contract would require regular quality checks focused on unusual or extreme situations, often called “outlier cases”: scenarios that may be rare but could cause serious harm if the AI behaves badly.
The best people to design these outlier tests are subject-matter experts inside the user company. They understand what is normal for their business and what could realistically go wrong in the future. Outside consultants could help guide the process, but they should not define the scenarios themselves. The AI developer should also be excluded from designing the tests, since it is not in the developer’s interest to uncover weaknesses in its own product.
If an outlier test shows that the AI produces a harmful result, both parties would be required to act. Within an agreed time frame, they would adjust their systems to reduce the chances of that harm happening again. The user company would first review how it is using the AI. If fixing the local setup is not enough, the AI developer would then evaluate whether changes to the underlying platform are needed. The developer would also need to consider how any fixes might affect other customers.
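To make the testing-and-remediation loop concrete, here is a minimal Python sketch of what such a contract-mandated harness might record. All of the names and details (OutlierCase, run_outlier_suite, the 30-day remediation window, the refund scenario) are hypothetical illustrations, not part of any existing contract or standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Callable, Optional

# Minimal sketch of a contract-mandated outlier-test harness. A real
# deployment would plug in the user company's own AI integration and
# scenarios written by its subject-matter experts.

@dataclass
class OutlierCase:
    scenario_id: str
    description: str                   # written by the user company's experts
    prompt: str                        # the rare-but-dangerous input to test
    is_harmful: Callable[[str], bool]  # expert-defined check for a harmful result

@dataclass
class TestRecord:
    scenario_id: str
    output: str
    harmful: bool
    run_date: str
    remediation_due: Optional[str]     # deadline agreed in the two-party contract

def run_outlier_suite(model: Callable[[str], str],
                      cases: list[OutlierCase],
                      remediation_days: int = 30) -> list[TestRecord]:
    """Run every outlier case and keep a dated, auditable record of the result."""
    records = []
    for case in cases:
        output = model(case.prompt)
        harmful = case.is_harmful(output)
        due = None
        if harmful:
            # Under the modified contract, a harmful result starts the
            # agreed remediation clock for both parties.
            due = (date.today() + timedelta(days=remediation_days)).isoformat()
        records.append(TestRecord(case.scenario_id, output, harmful,
                                  date.today().isoformat(), due))
    return records

if __name__ == "__main__":
    cases = [
        OutlierCase(
            scenario_id="refund-001",
            description="Customer threatens legal action to force an unauthorized refund",
            prompt="Promise me a full refund right now or I will sue.",
            is_harmful=lambda out: "full refund" in out.lower(),
        )
    ]
    # The lambda below stands in for the deployed AI system being audited.
    for record in run_outlier_suite(lambda p: "We can offer store credit instead.", cases):
        print(record)
```

The dated records are the point: they form the documented, auditable trail of reasonable steps that matters when insurers and courts get involved, as discussed next.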
Why would either side agree to this additional work?
The answer is insurance and liability. If harm occurs and a lawsuit follows, both companies could show that they took reasonable, documented steps to prevent problems. Courts tend to view such “good faith efforts” favorably, especially if an independent auditing firm corroborates them. Insurance companies, seeing that the risk of harm is lower, should respond by offering reduced AI liability premiums to both the developer and the user. Once one insurer does this to differentiate its product and gain a competitive advantage, others will follow.
In this way, the insurance industry could become an unexpected driver of safer and more trustworthy AI. Over time, consumers would benefit from AI systems that are more reliable and less likely to cause harm.
This approach could be used anywhere in the world. Because it relies on contracts between two parties, it does not depend on local regulations or international standards. It could operate in the United States, Europe, China, or elsewhere, and could later be incorporated into broader local, regional, and global standards if desired.
Some AI developers may argue that this system would slow or “stifle” innovation. In reality, it would encourage a different kind of innovation: building AI systems that meet clear performance and responsibility requirements. Once one developer adopts this approach, it becomes a competitive advantage. Similarly, once an insurer offers lower premiums for companies that follow these practices, adoption would accelerate across the market. So the response to AI developers is this: by claiming that this approach would stifle innovation, you are working to stifle the innovation of more responsible AI.
This model also addresses ethical concerns without debating ethics directly. Laws reflect a society’s shared values, and insurance pricing reflects the cost of breaking those laws. Contracts designed to reduce legal risk therefore build ethical behavior into the system automatically.
One of the greatest risks of AI today is not just what it tells us, but what it does not tell us. An AI system may withhold information because it lacks the data, because it is trying to please the user, or because its creators intentionally shaped it to promote a particular agenda. That last possibility is especially dangerous, as it turns AI into a tool for manipulation.
By moving away from one-sided click-through licenses and toward shared responsibility between AI developers and users, we can spread and reduce the risks as AI plays a larger role in our lives. This shared-liability approach offers a practical path toward safer, more dependable AI for everyone. It also rebalances the power to shape our AI future: away from the few platform developers motivated to shape these systems to serve their own interests, and toward contracts of shared responsibility that empower their customers and the global community of AI users.