
Composite AI: Uniting Generative AI and Neuro-symbolic AI for Transparent and Accurate AI Decision Making


Composite AI is the integration of different artificial intelligence methodologies to enhance the overall performance and applicability of AI systems. It’s like assembling a team whose members’ individual strengths make up for one another’s weaknesses. This approach is particularly relevant given the advancements in decision intelligence and neuro-symbolic AI, in contrast with the limitations of traditional decision tree systems.

AI can draw on thousands of data points across gigantic datasets to drive automated decisions; the challenge is ensuring those decisions are transparent and easily explained. The integration of generative AI large language models (LLMs) such as GPT-4 with neuro-symbolic AI is creating a new frontier in artificial intelligence, one that promises AI-driven decisions that are not only reliable but also transparent and fully auditable.

Single AI Solutions Have Limitations 

Traditional models, including decision trees, struggle with overfitting, which limits their applicability to new, unseen data. This happens because the model becomes too complex, capturing intricate details and anomalies that are not representative of the general trend. Essentially, it’s like memorizing the answers to a specific set of questions rather than understanding the underlying principles well enough to answer new ones. Overfitting reduces the model’s ability to generalize, making it less effective in real-world scenarios where the data often varies from the training set and compromising its practical predictive power.
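To make overfitting concrete, here is a minimal sketch using scikit-learn on synthetic data (both are our own illustrative assumptions, not from the article): an unconstrained decision tree scores near-perfectly on its training set while generalizing worse to held-out data than a depth-limited tree.

```python
# A minimal sketch of decision-tree overfitting using scikit-learn.
# The dataset and parameters are illustrative, not from the article.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training data, noise included.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("deep tree    - train:", deep_tree.score(X_train, y_train),
      "test:", deep_tree.score(X_test, y_test))

# A depth-limited tree captures the general trend and generalizes better.
shallow_tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("shallow tree - train:", shallow_tree.score(X_train, y_train),
      "test:", shallow_tree.score(X_test, y_test))
```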

Large Language Models (LLMs) like GPT-4 represent the forefront of AI’s language-processing capabilities. Trained on vast amounts of text data, these models can generate human-like text, understand context, and even exhibit creativity. They have revolutionized how machines understand and generate language, but they grapple with a notable challenge: producing plausible yet incorrect information, known as “hallucinations.”

This phenomenon typically arises from the AI’s training process, where the model learns to predict and generate text based on patterns it has observed in its training dataset, without the capability to verify the truthfulness or relevance of its outputs. As a result, the AI can confidently produce responses that seem coherent and logical but are false or misleading, a challenge particularly notable in complex tasks involving factual accuracy or logical consistency.  

A Multi-AI Solution Overcomes Important Limitations 

An interesting composite AI that could address these limitations combines symbolic AI, neural networks, knowledge graphs, and LLMs. 

Symbolic AI is not a new approach to decision-making: it refers to human-readable, rule-based forms of logic. By combining computational power with structured rules, and diverging from the linear, rigid paths of decision trees, this method fosters a dynamic “decision space” of interconnected rules, allowing for more nuanced and comprehensive analysis.
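A minimal sketch of what such interconnected, human-readable rules can look like (the rules and facts below are hypothetical illustrations, not a real lending policy): each rule’s conclusion becomes a new fact that later rules can build on, forming a decision space rather than one rigid tree path.

```python
# A minimal sketch of human-readable, rule-based symbolic logic.
# The rules and facts are hypothetical illustrations, not a real lending policy.
facts = {"income": 65000, "debt": 30000, "missed_payments": 0}

rules = {
    "affordable_debt": lambda f: f["debt"] / f["income"] < 0.5,
    "reliable_payer":  lambda f: f["missed_payments"] == 0,
    # Rules can reference earlier rules' conclusions, forming an
    # interconnected "decision space" rather than one rigid tree path.
    "low_risk":        lambda f: f["affordable_debt"] and f["reliable_payer"],
}

# Evaluate every rule in order and record each result as a new fact,
# so later rules can build on earlier conclusions.
for name, rule in rules.items():
    facts[name] = rule(facts)

print({k: v for k, v in facts.items() if isinstance(v, bool)})
# -> {'affordable_debt': True, 'reliable_payer': True, 'low_risk': True}
```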

Neuro-symbolic AI, a blend of symbolic AI and neural networks, elevates this approach, ensuring decisions are both data-informed and logically robust. Neural networks, with their pattern-recognition capabilities, can analyze and interpret complex, unstructured data. When neural networks are integrated with knowledge graphs, the combination leverages the strengths of both.

Knowledge graphs are a way of structuring and storing information that enables AI systems to understand and utilize complex relationships between data points. They provide a structured, semantic context to the data. These graphs map entities (such as objects, people, or concepts) and the relationships between them, forming a network of interconnected information. Knowledge graphs enhance AI’s ability to process and interpret large amounts of structured data, providing a foundation for more accurate and context-aware AI applications. 
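At its simplest, a knowledge graph can be represented as subject-predicate-object triples. The sketch below (with hypothetical entities and relationships of our own invention) shows how such a structure lets a system traverse the connections between data points.

```python
# A minimal sketch of a knowledge graph as subject-predicate-object triples.
# Entities and relationships are hypothetical examples.
triples = [
    ("Alice",        "has_account", "Checking-001"),
    ("Checking-001", "held_at",     "FirstBank"),
    ("Alice",        "employed_by", "AcmeCorp"),
    ("AcmeCorp",     "operates_in", "Retail"),
]

def related(entity, predicate=None):
    """Return the objects linked to an entity, optionally filtered by relationship."""
    return [o for s, p, o in triples
            if s == entity and (predicate is None or p == predicate)]

print(related("Alice"))                 # ['Checking-001', 'AcmeCorp']
print(related("Alice", "employed_by"))  # ['AcmeCorp']
```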

One of the most human-focused advancements in this combination of AI methodologies is the use of LLMs to provide plain-English explanations for decisions made with neuro-symbolic AI. LLMs can articulate the intricate logic of neuro-symbolic AI in a comprehensible manner, significantly enhancing the transparency of AI decisions and bridging the gap between complex AI operations and human understanding. LLMs can also automate the creation of the knowledge graphs that inform those decisions: they can ingest a business’s procedures and identify the entities and relationships within them to kickstart construction of the graph.
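A hedged sketch of the knowledge-graph bootstrapping step is shown below. It assumes the OpenAI Python client; the prompt, model choice, procedure text, and output format are all illustrative assumptions rather than the article’s implementation.

```python
# A hedged sketch of using an LLM to kickstart knowledge-graph creation.
# Assumes the OpenAI Python client; the prompt, model choice, and output
# format are illustrative assumptions, not the article's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

procedure_text = """
Applications above $50,000 require a senior underwriter's approval.
The underwriter reviews the applicant's debt-to-income ratio.
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Extract (subject, relationship, object) triples "
                    "from the procedure below. Return one triple per line."},
        {"role": "user", "content": procedure_text},
    ],
)

# The extracted triples become a draft knowledge graph for experts to validate.
print(response.choices[0].message.content)
```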

This combination of knowledge graphs, neuro-symbolic AI, and LLMs is adept at countering issues like hallucinations and overfitting. While neuro-symbolic AI may base decisions on validated rules, relationships, and complex logic, LLMs offer clear, concise explanations, enhancing trust and understanding. Additionally, neuro-symbolic AI’s blend of symbolic reasoning with data-driven neural networks reduces overfitting risks by balancing data patterns with logical structures. 

Composite AI for Fast, Accurate, Fair, and Fully Explained Decisions

A practical illustration of this integration is credit decisioning, where neuro-symbolic AI analyzes a wide range of factors to evaluate an applicant’s creditworthiness, going beyond traditional metrics by using a knowledge graph created by the organization’s experts. It considers standard elements such as credit scores, repayment history, and debt-to-income ratios, but it also delves into more nuanced aspects: employment stability, recent financial transactions, savings patterns, and even how economic trends might affect the applicant’s future financial stability.

This system can identify subtle patterns and risk factors that might be overlooked in a conventional credit check. For instance, it might notice that while an applicant has a high income, their spending habits and recent large purchases suggest potential future financial stress. Alternatively, it might recognize that an applicant with a moderate credit score has recently improved their financial habits, indicating lower risk than the score alone would suggest.
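A minimal sketch of how such a check might be structured (the model stub, thresholds, and factors below are hypothetical assumptions, not the article’s system): explicit, auditable rules from the expert knowledge graph sit alongside a neural score that captures the pattern-based risk rigid rules would miss.

```python
# A minimal sketch of a neuro-symbolic credit check: a neural model scores
# unstructured signals while symbolic rules enforce validated policy.
# The model stub, thresholds, and factors are hypothetical assumptions.

def neural_risk_score(applicant: dict) -> float:
    """Stand-in for a trained network over transactions, spending patterns, etc."""
    stress = 0.4 if applicant["recent_large_purchases"] else 0.1
    trend = -0.2 if applicant["improving_habits"] else 0.0
    return max(0.0, min(1.0, stress + trend))

def decide(applicant: dict) -> tuple[str, list[str]]:
    reasons = []
    # Symbolic layer: explicit, auditable rules from the expert knowledge graph.
    if applicant["debt_to_income"] > 0.45:
        reasons.append("debt-to-income ratio above policy limit")
    if applicant["missed_payments"] > 2:
        reasons.append("too many missed payments")
    # Neural layer: pattern-based risk that rigid rules would miss.
    if neural_risk_score(applicant) > 0.35:
        reasons.append("spending patterns suggest future financial stress")
    return ("denied" if reasons else "approved"), reasons

applicant = {"debt_to_income": 0.30, "missed_payments": 0,
             "recent_large_purchases": True, "improving_habits": False}
print(decide(applicant))
# -> ('denied', ['spending patterns suggest future financial stress'])
```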

After analyzing these factors, an LLM provides a comprehensive explanation of the decision, detailing why an applicant was approved or denied and pointing out the strengths and weaknesses in their financial profile. For a denied application, it could offer specific reasons, such as a high level of recent debt or an insufficient savings history. For an approved application, it might highlight factors like a consistent track record of timely repayments or stable employment. A bank may also decide to let the applicant respond to the AI’s decision, adding detail or context that could trigger an AI-guided manual review.
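A hedged sketch of the explanation step, assuming the OpenAI Python client (the prompt, model choice, and decision payload are illustrative assumptions): grounding the LLM in the validated reasons produced by the neuro-symbolic layer, and instructing it not to add reasons of its own, is what keeps the explanation faithful rather than hallucinated.

```python
# A hedged sketch of turning a structured decision into a plain-English
# explanation with an LLM. Assumes the OpenAI Python client; the prompt
# and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

decision = {
    "outcome": "denied",
    "reasons": ["debt-to-income ratio above policy limit",
                "insufficient savings history"],
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Explain this credit decision to the applicant in plain "
                    "English, citing each reason and one concrete step to "
                    "improve it. Do not add reasons beyond those given."},
        {"role": "user", "content": str(decision)},
    ],
)
print(response.choices[0].message.content)
```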

This level of detail not only makes the credit decision-making process more transparent but also helps applicants understand the financial behaviors that influence their creditworthiness. It could serve as a tool for financial education, empowering applicants to improve their financial habits and understand the lending process better.  

Conclusion 

The synergistic relationship between LLMs, knowledge graphs, and neuro-symbolic AI marks a significant stride toward transparent, reliable, and explainable AI decision-making. This collaborative approach ensures AI-driven decisions are not only data-informed and logically sound but also communicated clearly and understandably.

It heralds a future where the full potential of AI is harnessed responsibly and transparently, making complex AI processes accessible and trustworthy to all. This advancement is particularly crucial in areas like credit decisioning, where clear communication and understanding of AI-driven decisions are paramount to financial organizations’ bottom lines and their customers’ experience.

Author

  • Cass Bishop

    Cass Bishop is a Director in the Automation unit of global technology and advisory firm ISG. He is a dedicated leader who implements global client-focused solutions which drive dramatic improvements to business process and automation lifecycles. Cass brings over 20 years of Technical Implementation and Program Management experience in transformative technologies with a focus on System Development, IT Automation, Cloud, IT Service Management and Business Process Automation.
