Towards Complex Reasoning in LLMs

Start date: Sept. 1, 2024
End date: Aug. 31, 2028

About CLAIM

While Large Language Models (LLMs) are widely popular for their ability to generate coherent text, they still struggle with certain patterns of inference, particularly in complex reasoning tasks. A system capable of reasoning both for and against an argument (defeasible reasoning) is valuable for drawing conclusions in ambiguous contexts, and we believe such reasoning-aware LLMs are pivotal for advancing the explainability of artificial intelligence. LLMs still lag behind human ability on reasoning tasks, especially in generative settings: effective reasoning often requires detailed, context-specific knowledge of the source material, which goes beyond the next-word prediction paradigm on which LLMs are built. Our project aims to address this lack of grounded understanding by leveraging knowledge graphs.
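As a minimal sketch of what grounding an LLM prompt in a knowledge graph could look like, the example below stores facts as subject-relation-object triples, retrieves those relevant to a question, and places them in the prompt so that evidence both for and against a default conclusion is explicit. The toy triples, the `retrieve_facts` helper, and the prompt format are hypothetical illustrations, not the project's actual pipeline.

```python
# Minimal sketch: grounding an LLM prompt in knowledge-graph triples.
# All triples, helper names, and the prompt format are hypothetical
# placeholders for illustration only.

KG = [
    # (subject, relation, object) triples acting as a toy knowledge graph
    ("penguin", "is_a", "bird"),
    ("bird", "can", "fly"),        # default rule
    ("penguin", "cannot", "fly"),  # exception that defeats the default
]

def retrieve_facts(entity: str, hops: int = 2, kg=KG):
    """Collect triples reachable from the entity within a fixed number of hops."""
    frontier, facts = {entity}, []
    for _ in range(hops):
        new = [t for t in kg
               if (t[0] in frontier or t[2] in frontier) and t not in facts]
        facts.extend(new)
        frontier |= {x for t in new for x in (t[0], t[2])}
    return facts

def build_grounded_prompt(question: str, entity: str) -> str:
    """Prepend retrieved facts so the model reasons over explicit evidence,
    including evidence both for and against the default conclusion."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in retrieve_facts(entity))
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer using only the facts above."

if __name__ == "__main__":
    print(build_grounded_prompt("Can a penguin fly?", "penguin"))
```

In this toy case the retrieved facts contain both the default rule ("bird can fly") and the defeating exception ("penguin cannot fly"), which is the kind of conflicting evidence a defeasible-reasoning-aware model would need to weigh and explain.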