In sectors where decisions carry serious consequences, such as medical diagnosis, professionals often need to understand how AI reaches its conclusions. A research team from the Massachusetts Institute of Technology has introduced a new technique designed to make these systems both more transparent and more accurate.
Their work focuses on a method known as concept bottleneck modeling, which routes a model's prediction through a small set of human-readable concepts so that people can see how the system arrives at a particular answer.
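For readers who think in code, the basic structure can be sketched in a few lines of PyTorch. The block below is an illustrative outline under generic assumptions (the layer names and sizes are ours), not the MIT team's implementation:

```python
# Minimal concept bottleneck sketch: an image is first mapped to a small
# vector of human-readable concept scores, and the final label is then
# predicted from those scores alone.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                       # any image encoder
        self.to_concepts = nn.Linear(feat_dim, n_concepts)
        self.to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, images):
        feats = self.backbone(images)
        # Each concept score answers a readable question,
        # e.g. "is there uneven pigmentation?"
        concepts = torch.sigmoid(self.to_concepts(feats))
        logits = self.to_label(concepts)               # label depends only on concepts
        return concepts, logits
```

Because the classifier sees only the concept scores, inspecting those scores shows which notions drove the prediction.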
Traditionally, specialists determine which concepts a model should use. For example, when examining images of skin lesions, a doctor might suggest indicators such as grouped brown spots or uneven pigmentation to help the model identify melanoma. While these expert-defined features can be useful, they may not always capture the details needed for every task. If the concepts do not match the data closely enough, the system’s performance can suffer.
The new research takes a different route. Instead of relying entirely on human input, the method analyzes the internal patterns a model develops during training. It then extracts these patterns and converts them into descriptions that people can read and understand.
To accomplish this, the scientists created a system that uses two specialized machine learning tools. The first component, known as a sparse autoencoder, scans the trained model and identifies the most important features it uses when analyzing images. These features are then condensed into a small set of core concepts.
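A sparse autoencoder of the kind described can be sketched as follows. The architecture and loss shown are a common formulation and an assumption on our part, not the paper's exact design:

```python
# Sparse autoencoder sketch: reconstruct the vision model's internal
# activations through an overcomplete hidden layer, with an L1 penalty so
# that only a handful of hidden units fire for any given image. Those
# units become the candidate concepts.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, act_dim: int, dict_size: int):
        super().__init__()
        self.encode = nn.Linear(act_dim, dict_size)
        self.decode = nn.Linear(dict_size, act_dim)

    def forward(self, activations):
        codes = torch.relu(self.encode(activations))   # sparse feature codes
        recon = self.decode(codes)
        return codes, recon

def sae_loss(codes, recon, activations, l1_weight=1e-3):
    # Reconstruction error plus an L1 term that pushes most codes to zero.
    return nn.functional.mse_loss(recon, activations) + l1_weight * codes.abs().mean()
```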
A second component, a multimodal large language model, translates those concepts into everyday language. This model also reviews images in the dataset and labels whether each concept appears in a particular picture. The labeled information helps train a module that forces the original vision model to base its predictions only on those extracted ideas.
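The annotation step might look roughly like the sketch below, where query_mllm is a hypothetical stand-in for whatever multimodal model the researchers used; the prompt wording and yes/no labeling scheme are assumptions for illustration:

```python
# Hedged sketch of the concept-labeling step: ask a multimodal LLM, per
# image and per concept description, whether the concept is visible.
def label_concepts(images, concept_descriptions, query_mllm):
    """Return one binary concept vector per image."""
    labels = []
    for img in images:
        row = []
        for desc in concept_descriptions:
            answer = query_mllm(
                image=img,
                prompt=f"Does this image show {desc}? Answer yes or no.",
            )
            row.append(1 if answer.strip().lower().startswith("yes") else 0)
        labels.append(row)
    return labels  # used to train the concept-prediction module
```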
The process effectively turns an existing computer vision system into one that can explain its reasoning step by step. Limiting the model to five concepts for each prediction helps prevent hidden information from influencing the result. It also encourages the system to focus on the most relevant clues, making the explanation easier to follow.
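One simple way to enforce such a cap, shown here as an assumed mechanism rather than the paper's verified method, is to keep only the k highest-scoring concepts per image and zero out the rest before the final classifier:

```python
# Top-k concept cap sketch: with k=5, at most five concepts can carry any
# signal into the label prediction, so nothing else can leak through.
import torch

def top_k_concepts(concept_scores: torch.Tensor, k: int = 5) -> torch.Tensor:
    topk = torch.topk(concept_scores, k, dim=-1)
    mask = torch.zeros_like(concept_scores)
    mask.scatter_(-1, topk.indices, 1.0)
    return concept_scores * mask
```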
The researchers tested the approach on tasks including identifying bird species and analyzing medical images of skin conditions. In these experiments, the method delivered stronger accuracy than several existing concept bottleneck techniques while also producing clearer explanations.
Still, the scientists acknowledge that a gap remains between transparency and raw performance. Traditional “black box” models that offer little insight into their reasoning can sometimes achieve higher accuracy. The team hopes future work will reduce that tradeoff.
Planned improvements include expanding the training dataset and using larger language models to produce better annotations. The researchers are also studying ways to block unwanted information from influencing predictions.
Experts say the work represents a step forward for interpretable AI. By drawing explanations directly from the model’s own learned knowledge, the method may help build systems that are both more reliable and easier for people to trust in sensitive applications.
It would be interesting to hear what firms such as Datavault AI Inc. (NASDAQ: DVLT), which leverage AI in their products and solutions, have to say about this study showing how AI systems can be trained to explain how they reach their conclusions.
About AINewsWire
AINewsWire (“AINW”) is a specialized communications platform with a focus on the latest advancements in artificial intelligence (“AI”), including the technologies, trends and trailblazers driving innovation forward. It is one of 75+ brands within the Dynamic Brand Portfolio @ IBN that delivers: (1) access to a vast network of wire solutions via InvestorWire to efficiently and effectively reach a myriad of target markets, demographics and diverse industries; (2) article and editorial syndication to 5,000+ outlets; (3) press release enhancement to ensure maximum impact; (4) social media distribution via IBN to millions of social media followers; and (5) a full array of tailored corporate communications solutions. With broad reach and a seasoned team of contributing journalists and writers, AINW is uniquely positioned to best serve private and public companies that want to reach a wide audience of investors, influencers, consumers, journalists, and the general public. By cutting through the overload of information in today’s market, AINW brings its clients unparalleled recognition and brand awareness.
AINW is where breaking news, insightful content and actionable information converge.
To receive SMS alerts from AINewsWire, text “AI” to 888-902-4192 (U.S. Mobile Phones Only)
For more information, please visit www.AINewsWire.com
Please see full terms of use and disclaimers on the AINewsWire website applicable to all content provided by AINW, wherever published or re-published: https://www.AINewsWire.com/Disclaimer
AINewsWire
Los Angeles, CA
www.AINewsWire.com
310.299.1717 Office
Editor@AINewsWire.com
AINewsWire is powered by IBN