Picture this: an AI model recommends an action that does not seem reasonable. The model cannot explain itself, so you do not know the reasoning behind its recommendation. You have two options: trust the model or not, with no context to guide the choice. Many people who use artificial intelligence (AI) find this situation frustratingly familiar.
These systems can function as “black boxes”: systems whose inner workings cannot be explained, even by their creators. In some cases a black-box AI system is acceptable; some organizations even prefer not to expose how their AI works. In other situations, however, the consequences of an incorrect AI decision could be severe, even fatal.
For example, AI mistakes in criminal justice or health care can cost people their lives or livelihoods, which in turn undermines the trust these systems have earned from users and the public. Explainable AI (XAI) is therefore crucial to the health care industry.
Patients and providers need to understand the reasoning behind AI-driven recommendations, such as hospitalization or a surgical procedure. XAI offers interpretable explanations, using natural language and other simple representations, that allow doctors and patients to understand the reasons behind a recommendation and even question it if needed.
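To make the idea concrete, here is a minimal sketch of what an interpretable recommendation with a natural-language explanation might look like. The feature names, weights, and threshold are invented for illustration; they do not come from any real clinical model.

```python
# Toy sketch: a transparent risk score whose recommendation can be
# explained in plain language. All weights and the threshold below are
# hypothetical, chosen only to illustrate the pattern.

FEATURE_WEIGHTS = {
    "age_over_65": 2.0,
    "elevated_blood_pressure": 1.5,
    "prior_hospitalization": 2.5,
}
THRESHOLD = 3.0  # hypothetical cutoff for recommending admission


def explain_recommendation(patient: dict) -> str:
    """Score the patient and explain which factors drove the result."""
    contributions = {
        name: weight
        for name, weight in FEATURE_WEIGHTS.items()
        if patient.get(name)
    }
    score = sum(contributions.values())
    decision = (
        "recommend admission" if score >= THRESHOLD
        else "recommend outpatient care"
    )
    # List contributing factors from largest to smallest weight.
    factors = sorted(contributions, key=contributions.get, reverse=True)
    reason = ", ".join(f.replace("_", " ") for f in factors) or "no risk factors present"
    return f"{decision} (score {score:.1f}): driven by {reason}"


print(explain_recommendation({"age_over_65": True, "prior_hospitalization": True}))
# -> recommend admission (score 4.5): driven by prior hospitalization, age over 65
```

Because every factor and its weight are visible, a clinician can see exactly what produced the recommendation and challenge any individual factor, which is the kind of questioning XAI is meant to enable.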
When is XAI needed?
Health care professionals can use AI to improve and speed up many tasks, from decision-making to risk management, and it has become an indispensable tool in many of their workflows. However, its outputs can be difficult to explain to patients and providers at precisely the moments when important decisions are being made.
Ahmad and colleagues state that XAI should be used in situations where fairness and transparency are paramount, where end users and customers require an explanation before they can make an informed decision, and where new hypotheses generated by AI systems must be validated by subject-matter experts.
Explainability is also necessary to comply with the EU’s General Data Protection Regulation. Experts believe the slow adoption of AI systems in health care stems from the difficulty of verifying the results of black-box systems.
Writing in Forbes, Erik Birkeneder, an expert in digital health and medical devices, says that doctors are trained to recognize outliers: cases that do not fit standard treatment.
“If an AI algorithm doesn’t have the right data and we can’t comprehend how it makes its decisions, then we don’t know whether it will correctly diagnose patients,” he writes. It is almost impossible for a doctor or nurse to confirm a diagnosis made by a complex deep-learning system analyzing MRIs, CT scans, and other imaging if they do not understand the context behind it. Birkeneder points out that this ambiguity is a frequent problem for the U.S. Food and Drug Administration (FDA), which is responsible for validating and appraising AI models in health care.
A draft FDA guidance from September 2019 states that doctors should be able to independently validate an AI system’s recommendations; otherwise, the software risks being reclassified as a medical device, which carries stricter compliance standards.
XAI in health care: it’s best to keep it simple for now
According to Cognilytica’s Ron Schmelzer, doctors must be able to verify an AI system’s recommendations on their own, or the software risks being reclassified as a medical device. As long as that verification is possible, the stricter compliance standards for medical devices do not apply to these algorithms. Thomas Lukasiewicz of the University of Oxford, AXA Chair for Explainable Artificial Intelligence in Healthcare, also points out that deep-learning algorithms are less understandable and can suffer from bias and reliability issues.
However, researchers Bellio et al. argue that the debate over performance versus explainability is somewhat misleading. They believe that sacrificing accuracy for explainability does not always pay off in the long term. In their view, health care XAI should be as fast and reliable as any high-performance product, such as a car or laptop: most users do not understand what goes on under the hood, but they can spot when something is wrong.
They write that they do not expect humans to be able to understand complex calculations. Explanations, they say, need to be tailored to individual needs rather than based on deep knowledge of the model’s internals; this can be achieved through an understanding of the results and of how the system behaves.
What makes my car trustworthy is that it is safe and efficient, not that I understand its engine. Bellio et al. go on to describe three ways advanced AI can improve health care.
Better accounting of generalization errors:
This helps users identify errors or deviations in an algorithm more quickly and judge whether it is trustworthy.
Role-dependent AI: the level of explanation an AI system provides should depend on its user. A doctor, for example, might need more context than an HR planner. Explanations tailored to the user’s needs and role could lead to more satisfying outcomes.
Interactive interfaces: intuitive graphical user interfaces let users interrogate the system’s outputs and gain a better understanding of its accuracy.
The “third wave” of AI capability
Pointing to the limits of existing XAI systems, Thomas Lukasiewicz of Oxford University says that a third wave of AI technology will be required.
First-wave AI systems are logic- or rule-based, while second-wave systems rely on machine learning and deep learning. He says the third wave of AI technology must combine the best of the two previous types.
The professor explained that first-wave systems are good at reasoning, while second-wave systems are best at prediction and statistical learning. “It is therefore very natural to combine them…and to create an AI system we also refer to as neural-symbolic AI.”
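As a rough illustration of the neural-symbolic idea, a hybrid system might pair a statistical scorer with explicit symbolic rules that check its output and supply a reason. The sketch below is a toy: the “neural” part is a stand-in linear function rather than a real network, and the rules and thresholds are invented for illustration.

```python
# Toy sketch of the neural-symbolic pattern: a statistical model proposes
# a score, and hand-written symbolic rules interpret it and supply an
# explanation. All features, weights, and rules here are hypothetical.

def neural_score(features: dict) -> float:
    """Stand-in for a learned model: returns a confidence score."""
    weights = {"fever": 0.4, "cough": 0.3, "fatigue": 0.2}
    return sum(w for f, w in weights.items() if features.get(f))


# Each rule: (condition over features and score, verdict, explanation).
# Rules fire in order; the first match wins.
SYMBOLIC_RULES = [
    (lambda f, s: s >= 0.6 and f.get("fever"),
     "flag for review", "high score and fever present"),
    (lambda f, s: s < 0.3,
     "no action", "score below the action threshold"),
]


def diagnose(features: dict) -> tuple[str, str]:
    """Combine the statistical score with rule-based reasoning."""
    score = neural_score(features)
    for condition, verdict, explanation in SYMBOLIC_RULES:
        if condition(features, score):
            return verdict, explanation
    return "defer to clinician", "no rule matched the model's output"


verdict, why = diagnose({"fever": True, "cough": True})
print(f"{verdict}: {why}")
# -> flag for review: high score and fever present
```

The statistical component supplies the prediction; the symbolic component supplies the reasoning and the human-readable justification, which is the combination the third-wave vision describes.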