For roughly 70 years, the black box has helped investigators unlock the complex dynamics of major transport-industry failures and crashes.
The device was designed to capture the operational and environmental data leading up to catastrophic aviation, maritime, and railway incidents. With access to that record, experts have been able to understand the contributing factors of each event and work to keep it from recurring.
In one of artificial intelligence's expanding poetic ironies, the AI black box is the polar opposite of the NTSB black box: it represents the complete absence of understanding of how these systems function, even among their creators.
AI systems acquire information and make determinations, but the logic and data used to reach those results are not accessible, making it impossible to audit or understand how these judgments are being made.
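To make that opacity concrete, here is a minimal sketch, assuming Python with scikit-learn and an invented toy loan dataset, of the gap between a model whose learned rules can be printed and audited and one that yields only an answer:

```python
# Hypothetical loan-approval toy data: [income_in_thousands, years_employed].
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X = [[30, 1], [80, 5], [45, 2], [95, 10], [25, 0], [60, 4]]
y = [0, 1, 0, 1, 0, 1]  # 1 = approve

# An interpretable model: the learned rules can be printed and audited.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "years_employed"]))

# An opaque model: the same kind of prediction emerges from thousands of
# weights with no human-readable rule to point to, a black box in miniature.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.predict([[50, 3]]))  # an answer, but no legible "why"
```

The tree can be cross-examined line by line; the network can only be probed from the outside, which is exactly the auditing problem at scale.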
This presents a serious problem, because these autonomous systems are already used in supply chain management, customer service, cybersecurity, health care, agriculture, civilian vehicles, and military conflicts, not to mention controlling swarms of tiny, insect-sized surveillance drones that can stay aloft indefinitely.
These systems are autonomous; they are generative. Their decision criteria are a complete mystery. They have no legal status and no legal guardrails.
With no way to control, or even audit, the decision-making of these systems, it is impossible to attribute responsibility, legal or moral.
There is some hope that blockchain, an immutable and transparent way of recording digital data, could make AI auditable and explainable, but, like regulatory oversight for AI, that hasn't happened yet.
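For a rough sense of what blockchain advocates have in mind, the sketch below shows only the tamper-evidence idea: an append-only, hash-chained log of model decisions, in which rewriting any past entry breaks every later hash. It is a minimal illustration using the Python standard library, with hypothetical field names, and it deliberately omits the consensus and distribution that a real blockchain adds:

```python
import hashlib, json, time

def append_decision(log, model_id, inputs, output):
    """Append one model decision, chained to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"timestamp": time.time(), "model_id": model_id,
             "inputs": inputs, "output": output, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every hash; any edit to past history is detected."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_decision(log, "credit-model-v2", {"income": 50}, "deny")
append_decision(log, "credit-model-v2", {"income": 90}, "approve")
print(verify(log))             # True
log[0]["output"] = "approve"   # attempt to rewrite history
print(verify(log))             # False: the chain exposes the tampering
```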
The European Union is taking the first steps with the EU AI Act, but, as usual, AI is moving faster than humanity. Considering the depth and scope of the impact this will have on economies and populations globally, it feels like there needs to be a Marshall Plan for AI.
AI’s Trust Problem
From Harvard Business Review: “As AI becomes more powerful, it faces a major trust problem. Consider 12 leading concerns: disinformation, safety and security, the black box problem, ethical concerns, bias, instability, hallucinations in LLMs, unknown unknowns, potential job losses and social inequalities, environmental impact, industry concentration, and state overreach. Each of these issues is complex — and not easy to solve. But there is one consistent approach to addressing the trust gap: training, empowering, and including humans to manage AI tools.”
Leading in a World Where AI Wields Power of Its Own
The authors write, “New systems can learn autonomously and make complex judgments. Leaders need to understand these ‘autosapient’ agents and how to work with them.”
The Future of Human Agency
From Pew Research Center: “Experts are split about how much control people will retain over essential decision-making as digital systems and AI spread. They agree that powerful corporate and government authorities will expand the role of AI in people’s daily lives in useful ways. But many worry these systems will diminish individuals’ ability to control their choices.”
The Many Black Boxes of AI
The author writes, “While AI has the potential to be an asset to evidence generation, it also may be its greatest threat. At the heart of this concern is that AI is programmed to learn — or refine and calibrate its algorithms and calculations based on feedback. What goes into each model is information that is gathered from many sources and then combined with feedback mechanisms, which allow the model to ‘learn’ over time. But like any model, what goes into it influences what comes out. As we know, many of the sources of information we use to propagate these models are based on datasets that have human beliefs (and values?) baked into them or ignored, but always with some kind of bias.”
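The author's "what goes into it influences what comes out" point can be shown in miniature. In this hypothetical Python sketch, a naive model fit on skewed historical decisions simply reproduces the skew:

```python
from collections import Counter

# Invented hiring history with a human bias baked in: group "A" was hired
# far more often than group "B" for identical qualifications.
history = ([("A", "hire")] * 80 + [("A", "reject")] * 20 +
           [("B", "hire")] * 20 + [("B", "reject")] * 80)

def predict(group):
    # The "model": predict whatever outcome was most common for the group.
    outcomes = Counter(outcome for g, outcome in history if g == group)
    return outcomes.most_common(1)[0][0]

print(predict("A"))  # "hire": the past becomes the rule
print(predict("B"))  # "reject": bias learned, not legislated
```

Nothing in the code mentions bias, yet the bias is there, which is the point: it arrives with the data.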
Navigating the Black Box AI Debate in Health Care
From TechTarget: “Black box software — in which an AI’s decision-making process remains hidden from users — is not new. In some cases, the application of these models may not be an issue, but in health care, where trust is paramount, black box tools could present a major hurdle for AI deployment.”
DARPA’s REMA Program to Add Mission Autonomy to Commercial Drones
From Defense Advanced Research Projects Agency: “Commercial drone technology is advancing rapidly, providing cost-effective and robust capabilities for a variety of civil and military missions. DARPA’s Rapid Experimental Missionized Autonomy (REMA) program aims to enable a drone to autonomously continue its predefined mission when connection to the operator is lost. The program is focused on constantly providing new agnostic drone autonomy capabilities for transition in one-month intervals to outpace adversarial countermeasures. REMA progressed from program announcement to contract awards in just 70 business days.”
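For a sense of what "autonomously continue its predefined mission when connection to the operator is lost" can look like in code, here is a minimal, hypothetical control-loop sketch in Python. It is illustrative only, not DARPA's design, and every name in it is invented:

```python
import time

LINK_TIMEOUT_S = 2.0  # assumed threshold for declaring the operator link lost

def execute(cmd):
    print(f"operator command: {cmd}")           # stand-in for a flight action

def fly_to(waypoint):
    print(f"autonomous: flying to {waypoint}")  # stand-in for a flight action

def control_loop(get_operator_command, mission_waypoints):
    """Obey the operator while the link is up; on link loss, fly the mission."""
    last_contact = time.monotonic()
    waypoints = iter(mission_waypoints)
    while True:
        cmd = get_operator_command()    # returns None if no packet arrived
        if cmd is not None:
            last_contact = time.monotonic()
            execute(cmd)                # operator remains in control
        elif time.monotonic() - last_contact > LINK_TIMEOUT_S:
            waypoint = next(waypoints, None)
            if waypoint is None:
                return "mission complete"
            fly_to(waypoint)            # link lost: fall back to the mission
        time.sleep(0.1)                 # 10 Hz control tick

# With a dead link from the start, the loop times out and flies the mission:
print(control_loop(lambda: None, ["alpha", "bravo", "charlie"]))
```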
Insect-Sized Drones: The Rise of Microbots in Surveillance and Exploration
The author writes, “The miniaturization of drones has reached new frontiers with the development of insect-sized drones, also known as microbots. These tiny marvels of engineering are poised to revolutionize industries ranging from surveillance and reconnaissance to environmental monitoring and exploration.”