The Right to Explanation
You're already getting shafted by unexplainable AI decisions, and it's only going to get worse as we race towards an AI-first future. It's hard to even know which systems are judging you, much less the reasoning behind those judgments. What makes these systems biased, and what happens if we don't intervene soon?
Urgency of AI Transparency
Training data, implementation details, and system prompts can all carry bias, leading to unfair decisions in job applications, loan approvals, and other areas increasingly delegated to AI. Even criminal sentencing is being influenced by it. These black-box systems leave people in the dark and without recourse, fostering mistrust as injustices are perpetuated.
We have yet to see the full impact of these systems, but it is clear that they are not working as intended (or working exactly as intended, depending on who you ask). How did we get here?
Efficiency Over Explainability
Technology has evolved far faster than the ethics needed to accompany it. Regulation lags behind or is out of touch with the actual needs of the public. Capitalism will always be about efficiency and eliminating anything deemed unnecessary, so it's no surprise that the large corporations controlling the AI industry have not prioritized explainability.
A Hurdle to Innovation
Developers prioritize innovation and performance, treating transparency as an afterthought rather than implementing explainability standards along the way. Demanding an explanation for every AI action will legitimately slow development, but like accessibility or privacy, it still needs to remain a priority. How can we recalibrate the AI industry to prioritize explainability?
- Incentivize research into explainable models
- Finalize draft standards for explainability
- Design modular components for easy implementation (a sketch of one such component follows this list)
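To make that last point concrete, here is a minimal sketch of what a modular explainability component might look like, written in Python. Everything here is an illustrative assumption, not an existing standard: the Explanation record, the Explainable interface, and the LinearScorer model are all hypothetical names.

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Explanation:
    """A standardized, machine-readable account of a single decision."""
    decision: str                           # e.g. "approved" or "denied"
    feature_attributions: dict[str, float]  # signed contribution per input feature
    model_version: str                      # which model artifact produced this
    caveats: list[str] = field(default_factory=list)  # known limitations

class Explainable(Protocol):
    """The interface every decision-making component in a pipeline must satisfy."""
    def decide(self, features: dict[str, float]) -> str: ...
    def explain(self, features: dict[str, float]) -> Explanation: ...

class LinearScorer:
    """A deliberately simple model whose reasoning is just its weights."""
    def __init__(self, weights: dict[str, float], threshold: float):
        self.weights, self.threshold = weights, threshold

    def decide(self, features: dict[str, float]) -> str:
        score = sum(self.weights.get(k, 0.0) * v for k, v in features.items())
        return "approved" if score >= self.threshold else "denied"

    def explain(self, features: dict[str, float]) -> Explanation:
        contributions = {k: self.weights.get(k, 0.0) * v for k, v in features.items()}
        return Explanation(
            decision=self.decide(features),
            feature_attributions=contributions,
            model_version="linear-scorer-0.1",
            caveats=["attributions assume feature independence"],
        )
```

The point of the design is structural: a hiring or lending pipeline that only accepts Explainable components makes explanation a requirement of participation rather than an optional extra.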
Into the Questionable Future
We are on our way to a society where decisions are made without human-understandable logic, leading to increasing inequality and disenfranchisement. Coupled with the fact that AI bias disproportionately affects marginalized people and minorities, the most vulnerable members of society will continue to be systematically disadvantaged. As AI becomes more pervasive and more powerful, the stakes will only get higher.
Transparency is the only way to ensure that AI is used for the betterment of society, but it will be a challenge to implement and maintain.
When Transparency Obscures
As AI systems grow more complex, beyond human comprehension, it will become increasingly difficult to distill them into simplified explanations without obscuring the nuances of the underlying logic. Human-friendly explanations of inhuman processes risk hiding the very biases we are trying to understand, distorting the technology's genuine complexity.
Solutions:
- Technical summaries for experts, with in-depth details and datasheets in standardized documentation formats (see the sketch after this list).
- Simplified explanations for the general public using analogies and visualizations without oversimplifying critical details.
- Regulatory guidelines and minimum requirements for explainability to ensure consistency and adherence to standards.
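As a concrete, purely illustrative example of the first point, a technical summary for experts could be a machine-readable model card in the spirit of the model card and datasheet proposals from the research literature. The fields below are assumptions for the sketch, not a published schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    """Illustrative fields only -- a real standard would be set by regulators."""
    model_name: str
    version: str
    intended_use: str                      # what the system is designed for
    out_of_scope_uses: list[str]           # uses the developers explicitly disclaim
    training_data_summary: str             # provenance and known gaps in the data
    evaluation_by_group: dict[str, float]  # performance broken out by subgroup
    known_limitations: list[str]

# Hypothetical values, for illustration only.
card = ModelCard(
    model_name="credit-screener",
    version="2.3.0",
    intended_use="Pre-screening of consumer credit applications",
    out_of_scope_uses=["criminal sentencing", "employment decisions"],
    training_data_summary="2015-2023 loan outcomes; rural applicants underrepresented",
    evaluation_by_group={"overall": 0.91, "age_under_25": 0.84},
    known_limitations=["performance degrades for applicants with thin credit files"],
)

# The expert-facing summary is the full record, serialized verbatim;
# a public-facing explanation would be a simplified rendering of the same data.
print(json.dumps(asdict(card), indent=2))
```

Deriving both audiences' views from one underlying record is what keeps the simplified version honest.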
A Fool's Confidence
While an "explained" system may satisfy a regulatory requirement, it could actually mislead people into overestimating their understanding of AI capabilities, thinking the system is explained when they are engaging with a flawed representation. In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O'Neil highlights how complex algorithms can create false confidence among users, leading to unwarranted mistrust when outcomes don't align with their expectations.
Solutions:
- Clearly state what the AI is capable of with an emphasis that explanations are a reflection of decision processes and not a guarantee of outcomes.
- Implement training for regulators and the public to interpret AI explanations correctly and encourage an ongoing dialogue to align understandings.
Evolving Reasoning
Beyond agreeing on and implementing standards for AI reasoning, we need to account for constant change in AI models. A decision made today may differ from one made yesterday: as models evolve, their biases and decision-making processes shift with them.
Solutions:
- Explanations should be dynamic and updated alongside the systems they describe, ensuring that users always receive current information.
- Maintaining records of model versions and their corresponding explanations in version control will be necessary to track a moving target (a minimal registry sketch follows).
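One way to track that moving target, sketched below under assumed names (this is not any particular MLOps tool's API), is to key every published explanation to a content hash of the exact model artifact that produced it.

```python
import hashlib
import json
from datetime import datetime, timezone

class ExplanationRegistry:
    """Maps model artifacts to the explanation documents that described them."""

    def __init__(self):
        self._records: dict[str, list[dict]] = {}

    @staticmethod
    def fingerprint(model_bytes: bytes) -> str:
        # A content hash means "version" refers to the exact artifact,
        # not a human-assigned label that may be reused or mislabeled.
        return hashlib.sha256(model_bytes).hexdigest()[:16]

    def record(self, model_bytes: bytes, explanation_doc: dict) -> str:
        version = self.fingerprint(model_bytes)
        entry = {
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "explanation": explanation_doc,
        }
        self._records.setdefault(version, []).append(entry)
        return version

    def history(self, version: str) -> list[dict]:
        """Every explanation ever published for this exact model artifact."""
        return self._records.get(version, [])

# Usage: record an explanation against the weights that produced a decision.
registry = ExplanationRegistry()
v = registry.record(b"model-weights-placeholder",
                    {"method": "feature attribution", "audience": "regulator"})
print(json.dumps(registry.history(v), indent=2))
```

Backing a registry like this with actual version control (git, or a database with an audit log) would let a regulator replay exactly what users were told at the time of any past decision.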
AI's Perspective
"Current discussions often miss the intersectionality of AI decisions—how they disproportionately affect marginalized communities. Additionally, there's a lack of emphasis on educating the public about AI literacy, empowering individuals to demand explanations and challenge unjust decisions."
ChatGPT o1-preview 2024-10-29
Steps Towards Accountability
Promote Ethical AI Standards: It is time to push for industry-wide ethical frameworks, with accountability guidelines and standards that prioritize explainability in AI development.
Demand Updated Legislation: We must advocate for laws that require companies to provide clear, understandable explanations for AI decisions, backed by open-source algorithms that allow public scrutiny and improvement.
Educate and Empower: Industry leaders must participate in and promote AI literacy programs to help individuals understand and challenge AI decisions.
As AI controls more decisions in our lives, we need to examine the algorithms and training data that power them. Legislation and consumer pressure are needed immediately to ensure that AI is used for the betterment of society.