Conceptual Framework for Trustworthy Artificial Intelligence: Combining Large Language Models with Formal Logic Systems
Article's language: English
Abstract
The paper explores the problem of building trustworthy artificial intelligence based on large language models and p-computable checkers. For this purpose we present the concept of a framework for the reliable verification of answers produced by large language models (LLMs). We focus on applying this framework to digital twin systems, particularly for smart cities, where LLMs are not yet widely used due to their resource intensity and their potential for hallucination. Since, for a suitable class of tasks, verifying a solution is p-computable and in most cases less complex than solving the task itself, we present a methodology that uses checkers to assess the validity of LLM-generated solutions. These checkers are implemented within the methodology of polynomial-time programming in Turing-complete languages and are guaranteed to run in polynomial time. Our system was tested on the 2-SAT problem. This framework offers a scalable way to build trustworthy AI systems with guaranteed polynomial complexity, ensuring error detection and preventing system hang-ups.
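To make the verification idea concrete, here is a minimal sketch of a polynomial-time checker for the 2-SAT case mentioned in the abstract. It is an illustration only, not the authors' implementation (the paper builds its checkers within a polynomial-time programming methodology); the function name `check_2sat` and the DIMACS-style literal encoding are assumptions introduced for this example.

```python
# Hypothetical sketch: a checker that verifies a candidate 2-SAT solution
# in O(m) time for m clauses, independently of how the solution was found
# (e.g. proposed by an LLM).
#
# Encoding (an assumption of this sketch): a clause is a pair of nonzero
# integers; k stands for variable x_k, -k for its negation.

def check_2sat(clauses: list[tuple[int, int]],
               assignment: dict[int, bool]) -> bool:
    """Return True iff `assignment` satisfies every clause."""
    for a, b in clauses:
        clause_satisfied = False
        for lit in (a, b):
            var = abs(lit)
            if var not in assignment:
                return False  # incomplete assignment: reject
            value = assignment[var]
            if (lit > 0 and value) or (lit < 0 and not value):
                clause_satisfied = True
        if not clause_satisfied:
            return False  # a falsified clause invalidates the answer
    return True


# Usage: (x1 or not x2) and (x2 or x3)
clauses = [(1, -2), (2, 3)]
print(check_2sat(clauses, {1: True, 2: True, 3: False}))   # True
print(check_2sat(clauses, {1: False, 2: True, 3: False}))  # False
```

The checker only ever reads each clause once, so its complexity is linear in the formula size; this is the sense in which verification can be far cheaper than solving, and why a p-computable checker can gate an untrusted LLM answer.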
DOI: 10.31144/si.2307-6410.2025.n27.p93-118
Issue: #27
Pages: 93-118
File: nechesovkondratyev.pdf (568.99 KB)