diff --git a/index.html b/index.html
new file mode 100644
index 0000000..fbc3764
--- /dev/null
+++ b/index.html
@@ -0,0 +1,101 @@
+ + + + + + + + + + +
+

Introduction

+ +
+

+ The introduction to Knowledge-Infused Learning guides readers through the paradigm's evolution and its ties to neurosymbolic AI. This chapter explores foundational ideas, including Prof. Daniel Kahneman's "thinking fast and slow" paradigm, Prof. Gary Marcus's "rebooting AI" viewpoint, Prof. Leslie Valiant's approach to knowledge infusion through logic, and Prof. Douglas Hofstadter's thoughts on why AI still falls short of true intelligence. Additionally, the chapter emphasizes the crucial role of knowledge-infused learning for researchers facing challenges in maximizing model performance with crowd-sourced datasets. Brief chapter summaries are provided to set the stage for the rest of the book, offering a preview of what's to come. This book is particularly valuable for projects where data may be scarce, but expert-curated knowledge—lexical, graphical, or procedural—is abundant. +

+
+
+
+
+

Knowledge Graphs for Explainability and Interpretability

+ +
+

+ Explainable and interpretable AI systems are essential for ensuring transparency, enabling users, developers, and regulators to understand how these systems reach their decisions or outputs. This transparency is crucial for accountability, allowing stakeholders to trust and critically assess AI systems. Traditional methods like feature attribution and visualization provide statistical insights into AI decision-making. However, a more robust approach involves leveraging structured knowledge representation, such as knowledge graphs and ontologies. + + Structured knowledge representation enhances AI accountability, regulatory compliance, and the ability to make targeted improvements. This chapter introduces methods and use cases for applying structured knowledge representation to achieve explainable and interpretable AI, with a focus on critical applications in health and disaster management. These fields, where the implications of AI decisions are particularly significant, benefit greatly from the enhanced clarity and reliability that knowledge graphs and ontologies provide. +
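+ To make the role of the graph concrete, here is a minimal sketch (not taken from the chapter; the toy graph, relations, and concept names are invented for illustration) of how a prediction can be explained by tracing paths in a small knowledge graph from concepts detected in the input to the predicted label:

import networkx as nx

# Toy domain knowledge graph: edges carry explicit relation labels.
kg = nx.DiGraph()
kg.add_edge("insomnia", "sleep_disturbance", relation="is_a")
kg.add_edge("sleep_disturbance", "depression", relation="symptom_of")
kg.add_edge("loss_of_appetite", "depression", relation="symptom_of")

def explain_prediction(input_concepts, predicted_label):
    """Return human-readable paths from input concepts to the predicted label."""
    explanations = []
    for concept in input_concepts:
        if concept in kg and nx.has_path(kg, concept, predicted_label):
            path = nx.shortest_path(kg, concept, predicted_label)
            steps = [f"{u} --{kg.edges[u, v]['relation']}--> {v}"
                     for u, v in zip(path, path[1:])]
            explanations.append(" ; ".join(steps))
    return explanations

# Hypothetical usage: concepts extracted from a user's text, label from a classifier.
print(explain_prediction(["insomnia", "loss_of_appetite"], "depression"))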

+
+
+
+
+

Shallow Infusion of Knowledge

+ +
+

+ Neurosymbolic AI, which combines statistical and symbolic AI, is a promising field that is especially crucial for applications demanding explainability and interpretability. This chapter delves into the significance of Neurosymbolic AI, the adjustments needed in existing AI systems to implement it, and methods to verify its effectiveness. We will explore knowledge-infused learning, a potent Neurosymbolic technique, for complex tasks in natural language understanding, recommender systems, and computer vision. The chapter acts as a primer on different types of knowledge infusion techniques, with specific emphasis on a novel paradigm of process knowledge-infused learning and knowledge-infused learning for language modeling. +

+
+
+
+
+

Semi-Deep Infusion of Knowledge

+ +
+

+ In high-stakes environments, researchers compile datasets with annotations verified and validated by crowd-sourced workers, providing a benchmark foundation for enhancing AI models. However, the pursuit of significantly improved performance often leads to overly complex models. Through Shallow Knowledge-infused Learning, we introduce strategies to embed knowledge directly into datasets, serving as meta-information that AI models can utilize for tangible performance improvements. This chapter explores various shallow infusion methods and delves into their practical application, particularly in the context of mental health. This approach sparks curiosity about how integrating straightforward knowledge layers can reduce model complexity while boosting efficacy, opening new avenues for research and application in critical sectors. +
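+ As a rough sketch of what "knowledge as meta-information" can look like in practice (the lexicon, texts, and labels below are illustrative assumptions, not the book's data), expert-curated lexicon matches can be appended to each example as extra features alongside purely statistical ones:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np
import scipy.sparse as sp

texts = ["I can't sleep and feel hopeless", "Great workout and a good night's rest"]
labels = [1, 0]  # toy labels: 1 = at-risk, 0 = not at-risk

# Expert-curated lexicon (hypothetical): concept -> trigger terms.
lexicon = {"sleep_issue": ["can't sleep", "insomnia"], "hopelessness": ["hopeless"]}

def knowledge_features(text):
    """One binary feature per lexicon concept found in the text."""
    text = text.lower()
    return [int(any(t in text for t in terms)) for terms in lexicon.values()]

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts)                   # statistical features
X_know = np.array([knowledge_features(t) for t in texts])  # knowledge meta-features
X = sp.hstack([X_text, sp.csr_matrix(X_know)])             # shallow infusion: concatenate

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))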

+
+
+
+
+

Deep Knowledge Infusion

+ +
+

+ Semi-Deep Knowledge-infused Learning (KiL) fuses deep learning with expert insights to boost both the efficacy and clarity of AI models. Central techniques involve merging knowledge graphs with neural networks, crafting tailored attention mechanisms that zoom in on pertinent concepts, and applying knowledge-oriented regularization to keep predictions consistent with established facts. This chapter delves into leading knowledge-driven feature engineering techniques such as attention mechanisms and curriculum learning, explores knowledge distillation, and unveils innovative hybrid structures designed to enhance model interpretability. Within these hybrid frameworks, it introduces a novel optimization strategy inspired by autoencoders and control theory, illustrating the application of semi-deep KiL. Moreover, the chapter presents new auxiliary tasks that facilitate a comparative analysis of semi-deep KiL strategies and modern AI techniques, igniting curiosity about how these integrations can reshape AI development. +
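+ The following PyTorch sketch (an illustration of the general idea, not the chapter's architecture) shows one way a tailored attention mechanism can "zoom in" on pertinent concepts: per-token relevance scores derived from an external knowledge source bias the attention logits toward knowledge-salient positions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeGuidedAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.lam = nn.Parameter(torch.tensor(1.0))  # learned weight on the knowledge bias

    def forward(self, hidden, concept_scores):
        # hidden: (batch, seq, dim); concept_scores: (batch, seq) relevance from a KG/lexicon
        q, k = self.query(hidden), self.key(hidden)
        logits = q @ k.transpose(-2, -1) / hidden.size(-1) ** 0.5  # (batch, seq, seq)
        logits = logits + self.lam * concept_scores.unsqueeze(1)   # bias toward known concepts
        attn = F.softmax(logits, dim=-1)
        return attn @ hidden                                       # knowledge-weighted context

# Hypothetical usage with random embeddings and lexicon-derived relevance scores.
layer = KnowledgeGuidedAttention(dim=16)
scores = torch.tensor([[0, 1, 0, 1, 0], [1, 0, 0, 0, 1]]).float()
out = layer(torch.randn(2, 5, 16), scores)
print(out.shape)  # torch.Size([2, 5, 16])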

+
+
+
+
+

Process Knowledge-infused Learning

+ +
+

+ Beyond just knowledge-guided data transformation or attention mechanisms, it's crucial to focus on (a) the variety of knowledge forms, (b) the need for effective infusion, (c) determining which layers should undergo infusion, and (d) ensuring system stability when external knowledge is integrated. These considerations allow LLMs to swiftly adapt to domain-specific concepts, align closely with user requirements, and maintain a grounded approach to explainability. This chapter explores how optimization techniques from control theory can regulate knowledge infusion, impacting both representation and model attention. Additionally, it delves into the theoretical foundations of deep knowledge-infused learning, discussing two well-established neural architectures: (a) Sequential Networks, and (b) Densely Connected Neural Networks. +
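+ As a toy illustration of control-style regulation of infusion (the gain, error signal, and tensors below are stand-ins, not the chapter's formulation), a gate on the knowledge signal can be adjusted each step in proportion to how much an external error measure improves, so infusion strength grows only while it helps:

import torch

def infuse(hidden, knowledge, gate):
    """Blend a layer's hidden state with an aligned knowledge embedding."""
    return (1 - gate) * hidden + gate * knowledge

gate, k_p = 0.1, 0.05  # initial infusion strength and proportional gain
prev_error = None
for step in range(5):
    hidden = torch.randn(4, 16)     # stand-in for a layer's activations
    knowledge = torch.randn(4, 16)  # stand-in for KG-derived embeddings
    fused = infuse(hidden, knowledge, gate)

    error = float(torch.rand(1))    # stand-in for a validation loss
    if prev_error is not None:
        # Proportional control: grow the gate when error drops, shrink it otherwise.
        gate = min(1.0, max(0.0, gate + k_p * (prev_error - error)))
    prev_error = error
    print(f"step {step}: gate={gate:.3f}, fused norm={fused.norm():.2f}")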

+
+
+
+
+

Knowledge-infused Conversational NLP

+ +
+

+ Knowledge-infused learning (KiL) for conversational NLP focuses on three crucial challenges: enhancing the conceptual flow in question generation, implementing knowledge-guided question creation, and ensuring safe conversations in mental health. The initial section explores the complexity of question generation, which is central to language comprehension. It highlights how KiL leverages external knowledge sources to improve the personalization, coherence, and context relevance of questions. The next part examines the role of knowledge graphs and knowledge bases in enhancing semantics during both single- and multi-turn personalized conversations, ensuring logical coherence and semantic relatedness. Techniques like retrieval-augmented generation are described for generating contextually rich, information-seeking questions. The final section discusses KiL’s application in ensuring safe conversations within mental health contexts, providing a comprehensive overview of both open-source and proprietary conversational systems. It identifies gaps in how these systems serve general users compared to mental health professionals or crisis volunteers, suggesting areas for further development. +
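+ A minimal sketch of the retrieval-augmented pattern mentioned above (the knowledge-base entries and the downstream generator are assumptions, not the chapter's system): retrieve the facts most similar to the dialogue context and fold them into the prompt so the generated follow-up question stays grounded and on-topic:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "PHQ-9 item 3 asks about trouble falling or staying asleep.",
    "Sleep hygiene includes keeping a consistent bedtime.",
    "Caffeine late in the day can worsen insomnia.",
]

def build_prompt(context, k=2):
    """Retrieve the k most similar facts and build a grounded generation prompt."""
    vec = TfidfVectorizer().fit(knowledge_base + [context])
    sims = cosine_similarity(vec.transform([context]), vec.transform(knowledge_base))[0]
    facts = [knowledge_base[i] for i in sims.argsort()[::-1][:k]]
    return ("Known facts:\n- " + "\n- ".join(facts)
            + f"\nUser said: {context}\nAsk one safe, relevant follow-up question:")

prompt = build_prompt("I have been lying awake most nights this week.")
print(prompt)  # hand this prompt to any question-generation model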

+
+
+
+
+

NeuroSymbolic Large Language Models

+ +
+

+ Trust in AI is primarily rooted in explainability and safety, necessitating consistent and reliable model behavior. To ensure these qualities, it is vital to leverage and scrutinize data and knowledge through statistical and symbolic AI techniques tailored to specific applications. This chapter advocates for the NeuroSymbolic AI approach, presenting it as better suited for building AI systems that earn trust. It introduces the CREST framework, which underscores how NeuroSymbolic methods, integrating data and knowledge, foster consistency, reliability, explainability, and safety—crucial for sensitive fields like health and well-being. + + Language models like ChatGPT and Google's MedPaLM have significantly advanced the handling of diverse natural language understanding and reasoning tasks. These models, however, often operate as "black boxes," even with improvements like human feedback and instruction tuning. For example, despite safety measures, ChatGPT may still produce inappropriate responses. The CREST framework proposes a method that employs procedural and graph-based knowledge within a NeuroSymbolic setup to illuminate and tackle the inherent issues of such LLMs. Furthermore, this chapter suggests innovative metrics for assessing consistency and bias that rely on contextual knowledge, offering a more nuanced approach than data-driven evaluations alone. +
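+ To illustrate the flavor of such a knowledge-grounded consistency metric (the paraphrases, the expected answer, and the query_model() stub are hypothetical), the same fact can be probed through several paraphrases, with consistency measured as the fraction of answers that agree with the knowledge source:

paraphrases = [
    "Is sertraline an SSRI?",
    "Does sertraline belong to the SSRI class?",
    "Would you classify sertraline as an SSRI?",
]
expected = "yes"  # answer supported by the underlying knowledge source

def query_model(prompt):
    """Stub standing in for any LLM call; replace with a real client."""
    return "yes"

answers = [query_model(p).strip().lower() for p in paraphrases]
consistency = sum(a == expected for a in answers) / len(answers)
print(f"consistency: {consistency:.2f}")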

+
+
+ + \ No newline at end of file