Introduction:

The integration of artificial intelligence (AI) with blockchain systems has opened opportunities for automated, intelligent decentralized applications. Smart contracts execute deterministic logic transparently on the blockchain, while AI relies on probabilistic reasoning and computationally intensive inference, making on-chain execution impractical (Zhang et al., 2022). Typically, AI inference occurs off-chain and communicates results to smart contracts using middleware or oracles (Al Jasem et al., 2025).

However, off-chain AI introduces risks including unverifiable decision-making and the potential for manipulated outputs before reaching the blockchain (Cai et al., 2023). Applications such as decentralized finance, algorithmic governance, and automated moderation may behave unpredictably when relying on unverifiable AI outputs (Zhu et al., 2024). This proposal focuses on enabling verifiable off-chain reasoning so that smart contracts accept AI outputs only when supported by cryptographic or system-level proof, enhancing transparency and trust.

Incidents in algorithmic governance, automated trading systems, and content moderation across decentralized finance show that unverified AI logic can manipulate system behavior (Nadler et al., 2025). This is a clear indication that off-chain AI reasoning lacks structural guarantees within blockchain ecosystems (Ramos & Ellul, 2024). Without on-chain validation of AI reasoning, an adversary can corrupt the model, tamper with outputs, or manipulate input data without detection. This absence of a verification mechanism weakens the trust assumptions of smart contracts and exposes decentralized systems to significant operational risks (Taherdoost, 2022).

This research addresses the need for verifiable off-chain reasoning in AI-driven smart contract systems. The proposal presents a computational model that introduces a structured method for generating, transmitting, and verifying AI reasoning processes. By doing so, the approach increases transparency, builds trust, and maintains accountability in decentralized environments that rely on AI-driven decision making (Gu & Nawab, 2024).

Problem Statement:

Smart contracts are self-executing agreements whose conditions remain transparent and secure. They are deterministic: every node on the network observes the same execution path and result. This determinism weakens when smart contracts integrate AI-based automation. Key issues include limited reasoning transparency, vulnerability to data manipulation, reliance on standard oracles, and lack of end-to-end verification (Acar et al., 2023).

Anasuri (2023) observes that AI models operate off-chain because they rely on large datasets unsuitable for on-chain storage, require computational resources that exceed blockchain limits, and use nondeterministic inference processes. Current approaches transfer AI outputs via oracles without verifying the underlying reasoning (Cai et al., 2023). Although technologies such as trusted execution environments (TEEs) and zero-knowledge proofs hold promise, they are not yet applied cohesively for end-to-end verification (Al-Breiki et al., 2024; Zhang et al., 2022). There is a clear need for a structured framework that ensures secure and verifiable off-chain AI reasoning.

Research Aim:

The study aims to design a verification approach enabling smart contracts to accept AI outputs only when supported by verifiable evidence. This research proposes the following core objectives to achieve this aim.

·         Explore prevalent blockchain verification methods, with attention to their applicability to AI reasoning.

·         Investigate weaknesses in current oracle-based workflows.

·         Design a verifiable off-chain reasoning framework that generates secure, tamper-resistant proofs of AI reasoning.

·         Develop a prototype demonstrating on-chain validation of AI outputs.

·         Evaluate the framework's performance, security, and trust enhancements in comparison with traditional unverified AI-oracle architectures.

Research Questions:

1.      What approaches exist for validating off-chain AI reasoning in decentralized systems, and what are their limitations?

2.      What risks arise when smart contracts rely on unverifiable AI outputs?

3.      How can a framework be constructed that ensures verifiable off-chain reasoning for AI-driven smart contracts?

4.      How can the proposed framework enhance security, reliability, and trust in AI-driven decentralized systems?

Literature Review:

Off-chain AI verification has become a topic of growing interest in blockchain systems, since its integration promises both automation and security. Its practical implementation, however, faces new challenges, and the existing literature points to gaps in experimental validation that further studies must address.

Acar et al. (2023) highlight both the benefits and limitations of TEEs. TEEs provide an isolated hardware environment that ensures secure computation, protecting both data and logic from interference, and have been applied to confidential smart contract execution and private transactions in blockchain applications. However, they suffer from performance overhead, scalability constraints, and susceptibility to side-channel attacks. Moreover, while TEEs enhance security, they do not inherently provide verifiable proof to smart contracts, which limits their applicability for establishing trust in off-chain AI computations. This limitation motivates the exploration of techniques such as zero-knowledge proofs.
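To make the attestation idea concrete, the following sketch illustrates how a verifier might check that an off-chain output carries a valid attestation before trusting it. This is a simplified illustration, not taken from Acar et al.: Python's `hmac` stands in for a TEE's hardware-backed signature, and the key and function names are hypothetical.

```python
import hmac
import hashlib
import json

# Hypothetical stand-in for a TEE's hardware-backed attestation key.
# In a real TEE (e.g., Intel SGX), the quote would be signed inside the
# enclave and checked against the vendor's attestation service instead.
ENCLAVE_KEY = b"simulated-enclave-key"

def attest(output: dict) -> str:
    """Inside the 'enclave': sign the inference output."""
    payload = json.dumps(output, sort_keys=True).encode()
    return hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()

def verify_attestation(output: dict, tag: str) -> bool:
    """Verifier: accept the output only if the attestation matches."""
    return hmac.compare_digest(attest(output), tag)

result = {"model": "risk-scorer-v1", "score": 0.87}
tag = attest(result)
assert verify_attestation(result, tag)        # untampered output: accepted
tampered = {"model": "risk-scorer-v1", "score": 0.99}
assert not verify_attestation(tampered, tag)  # tampered output: rejected
```

Note that this symmetric-key sketch only conveys the shape of the check; real remote attestation uses asymmetric signatures so that the verifier never holds the enclave's secret.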

Sariboz et al. (2021) explore how computationally heavy tasks, such as AI reasoning, can be moved off-chain while still being verified on the blockchain. They note that some smart contracts require far more gas than blockchains such as Ethereum can supply, making complex logic infeasible on-chain. Their approach delegates the expensive work to off-chain workers that return proofs of correctness, which a broker contract checks so that the blockchain accepts only verified results. The work is relevant to AI-driven systems because it demonstrates a practical way to keep smart contracts secure without overloading the chain.

Huang et al. (2024) propose the SMART framework, which combines conventional on-chain rules with off-chain AI inference. They note that AI models cannot run directly on most blockchains because they are nondeterministic and resource-intensive. By splitting contracts into deterministic on-chain logic and off-chain AI inference, they show that smart contracts can act "smarter" without breaking consensus, using TEEs to ensure that the AI results are trustworthy. Their experiments report substantial speed improvements. The article matters because it demonstrates a path toward practical AI-enhanced blockchain applications.

Li, Palanisamy, and Xu (2019) discuss how splitting smart contract logic between on-chain and off-chain components benefits both privacy and performance. They argue that some tasks are too expensive or too sensitive to run on-chain, so those parts should be executed off-chain and signed by the participants. Execution proceeds smoothly while all parties are honest; if a participant cheats, the off-chain contract can be revealed and enforced on-chain. This early work lays a foundation for later AI-driven systems by showing that hybrid on/off-chain models can remain secure.

Although there is growing work on combining off-chain computation with on-chain smart contracts, a substantial gap remains with respect to verifiable off-chain AI reasoning. Most studies focus either on general off-chain computation or on using TEEs for privacy and security, but neither fully solves the trust problem: TEEs protect data yet provide no cryptographic proof that an AI model actually produced the claimed output, and they carry hardware-level risks of their own. In short, no existing framework combines AI reasoning, off-chain execution, and verifiable proof in a way that is practical, scalable, and trustworthy for smart contract systems. This gap motivates research into verifiable off-chain reasoning for AI-driven smart contract systems.

Proposed Methodology:

A design-oriented, mixed-method approach will be used, encompassing literature review, framework development, prototype implementation, and evaluation (Zhu et al., 2024). The first stage surveys prior academic research, technical reports, and industrial documentation on decentralized AI architectures, deterministic and reproducible computing, AI model interpretability and reasoning extraction, decentralized oracle mechanisms, trusted execution environments, zero-knowledge and succinct proofs, and verifiable computation. This mapping exposes the limitations of existing techniques and substantiates the need for verifiable off-chain AI reasoning (Kerzi et al., 2024).

The proposed framework allows off-chain AI models to generate outputs with verifiable evidence, enabling smart contracts to validate their integrity. The structure is as follows:

  1. Off-Chain AI Module: Executes AI inference off-chain and records minimal metadata for verification (Zhang et al., 2022).
  2. Proof Generation Layer: Creates verification evidence using attestations, reproducible hashes, or zero-knowledge proofs (Al-Breiki et al., 2024).
  3. Oracle Transmission: Sends both AI outputs and verification proofs without modification (Cai et al., 2023).
  4. On-Chain Verification Contract: Validates the submitted evidence and approves or rejects AI outputs (Zhu et al., 2024).
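The reproducible-hash option in the Proof Generation Layer can be sketched end to end. The following is a simplified illustration, not the actual prototype; the model, function names, and data are all hypothetical. An off-chain module runs inference, commits to its model version, inputs, and output with a hash, and a simulated verification contract accepts the output only when the commitment can be reproduced.

```python
import hashlib
import json

def commitment(model_id: str, inputs: dict, output: dict) -> str:
    # Reproducible hash over model version, inputs, and output.
    # Any change to the output in transit changes the digest.
    payload = json.dumps(
        {"model": model_id, "inputs": inputs, "output": output},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

# 1. Off-chain AI module: run inference (a toy threshold model here).
def infer(inputs: dict) -> dict:
    return {"approve": inputs["score"] >= 0.5}

# 2. Proof generation layer: attach the commitment as evidence.
def make_submission(model_id: str, inputs: dict) -> dict:
    output = infer(inputs)
    return {"model": model_id, "inputs": inputs, "output": output,
            "proof": commitment(model_id, inputs, output)}

# 3./4. Oracle transmits the submission unchanged; the verification
# contract (simulated) re-derives the hash and approves or rejects.
def verify_on_chain(sub: dict) -> bool:
    expected = commitment(sub["model"], sub["inputs"], sub["output"])
    return expected == sub["proof"]

sub = make_submission("credit-model-v1", {"score": 0.8})
assert verify_on_chain(sub)             # intact submission: approved
sub["output"]["approve"] = False        # oracle tampers with the result
assert not verify_on_chain(sub)         # tampering detected: rejected
```

Note that a bare hash commitment only binds the transmitted output to its evidence, so oracle-layer tampering becomes detectable; proving that the inference itself was performed correctly requires the stronger mechanisms named in the Proof Generation Layer, such as TEE attestations or zero-knowledge proofs.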

Expected Contributions:

This study will contribute to research in blockchain security and decentralized AI through several expected outcomes: a cryptographically grounded framework that enables verifiable off-chain reasoning for AI-driven smart contracts; a detailed analysis of existing verification mechanisms and their applicability to AI reasoning; a functional prototype demonstrating end-to-end verification of AI outputs; and performance benchmarks that establish the feasibility of reasoning verification.

Proposed Timeline:

 

References:

Anasuri, S. (2023). Confidential Computing Using Trusted Execution Environment. International Journal of AI, BigData, Computational and Management Studies, 4(2), 97–110. https://ijaibdcms.org/index.php/ijaibdcms/article/download/240/243

Alaa, R., et al. (2025). Verifiable Split Learning via zk-SNARKs. arXiv preprint.  https://arxiv.org/pdf/2511.01356

Al Jasem, M., De Clark, T., & Shrestha, A. K. (2025). Toward Decentralized Intelligence: A Systematic Literature Review of Blockchain-Enabled AI Systems. Information, 16(9), 765. https://doi.org/10.3390/info16090765

Chen, B., Stoica, I., Waiwitlikhit, S., & Kang, D. (2024). ZKML: An Optimizing System for ML Inference in Zero-Knowledge Proofs. EuroSys, 560–574. https://dl.acm.org/doi/pdf/10.1145/3627703.3650088

Chiarelli, A. (2023). Securing the Bridges Between Two Worlds: A Systematic Literature Review of Blockchain Oracles Security. Aalto University, School of Science.  https://aaltodoc.aalto.fi/server/api/core/bitstreams/59fd86b4-9c5c-416e-8f63-6aff02b10b74/content  

Coppolino, L., et al. (2025). An Experimental Evaluation of TEE Technologies for Confidential Computing. Computers & Security. Vol 154.  https://doi.org/10.1016/j.cose.2025.104457

Cai, W., Wang, Z., Anwar, A., & Wu, Q. (2023). A systematic review of blockchain oracles: Taxonomy, challenges, and opportunities. Future Generation Computer Systems, 142, 215–231. https://doi.org/10.1016/j.future.2023.05.019

Gu, B., & Nawab, F. (2024). zk‑Oracle: Trusted Off‑Chain Compute and Storage for Decentralized Applications. Distributed and Parallel Databases, 42(4), 525–548. https://doi.org/10.1007/s10619-024-07444-6

Huang, S., et al. (2024). Advancing Web 3.0: Making smart contracts smarter on blockchain. OpenReview. https://openreview.net/pdf?id=4qtxfjSyFE

Hankyung, K. et al. (2025). vCNN: Verifiable Convolutional Neural Network Based on zk-SNARKs. ResearchGate. https://www.researchgate.net/publication/377071773_vCNN_Verifiable_Convolutional_Neural_Network_Based_on_zk-SNARKs

Kerzi, V., et al. (2024). On-Chain Zero-Knowledge Machine Learning: An Overview and Comparison. Journal of Information Security and Applications, 36(4), 1–15. https://www.sciencedirect.com/science/article/pii/S1319157824002969

Li, W., Palanisamy, B., & Xu, M. (2019). Scalable and privacy-preserving design of on/off-chain smart contracts. arXiv. https://arxiv.org/abs/1902.06359

Nadler, M., Schuler, K. & Schar, F. (2025). Blockchain Price Oracles: Accuracy and Violation Recovery. Journal of Corporate Finance.  https://doi.org/10.1016/j.jcorpfin.2025.102908   

Pan, D., et al. (2025). ZkTaylor: Zero-Knowledge Proofs for Machine Learning via Taylor Series Transformation. AAIA ‘24: Proceedings of the 2024 2nd International Conference on Advances in Artificial Intelligence and Applications. https://doi.org/10.1145/3712623.3712646

Ramos, S., & Ellul, J. (2024). Blockchain for Artificial Intelligence (AI): enhancing compliance with the EU AI Act through distributed ledger technology — A cybersecurity perspective. International Cybersecurity Law Review, 5, 1–20. https://doi.org/10.1365/s43439-023-00107-9

South, T., et al. (2024). Verifiable Evaluations of Machine Learning Models using zkSNARKs. arXiv preprint. https://arxiv.org/pdf/2402.02675

Sariboz, E., Ismail, M., Shabtai, A., & Elovici, Y. (2021). Off-chain execution and verification of computationally intensive smart contracts. arXiv. https://arxiv.org/abs/2104.09569

Taherdoost, H. (2022). Blockchain Technology and Artificial Intelligence Together: A Critical Review on Applications. Applied Sciences, 12(24), 12948. https://doi.org/10.3390/app122412948

Wang, C., et al. (2025). Fidelius: A Novel Secure Data Analysis Framework Leveraging Intel SGX and Blockchain. ACM Digital Library.  https://dl.acm.org/doi/10.1145/3709016.3737801

Wang, Z., et al. (2024). Research on Oracle Technology Based on Multi-Threshold Aggregate Signature Algorithm and Enhanced Trustworthy Oracle Reputation Mechanism. Sensors (MDPI), 24(2), 502. https://doi.org/10.3390/s24020502

Yuan, J., et al. (2024). Elevating Security in Migration: An Enhanced Trusted Execution Environment-Based Generic Virtual Remote Attestation Scheme. Information (MDPI), 15(8), 470. https://doi.org/10.3390/info15080470

Zhang, F., et al. (2023). Chainlink 2.0: Next Steps in the Evolution of Decentralized Oracle Networks. Chainlink. https://research.chain.link/whitepaper-v2.pdf

Zhu, Y., Li, F., Yang, H., & Wang, J. (2024). Secure and transparent AI-enabled smart contracts through hybrid verification techniques. IEEE Transactions on Network and Service Management, 21(2), 1448–1462. https://doi.org/10.1109/TNSM.2024.3362210

Chainlink. (2022). The Ultimate Guide to Blockchain Oracle Security. Chainlink resources. https://20755222.fs1.hubspotusercontent-na1.net/hubfs/20755222/guides/the-ultimate-guide-to-blockchain-oracle-security.pdf