
provnai

The trust layer the agentic era is missing.

Open-source infrastructure to secure, verify, govern, and audit the next generation of autonomous AI agents.

Systematic Vulnerability

Solving the Black Box Problem

Autonomous agents are inherently opaque. ProvnAI replaces blind trust with cryptographically verifiable traces of every decision, action, and state transition.

Opaque Logic: RESOLVED
Mutable Logs: VERIFIED
Policy Drift: GOVERNED
Identity Gap: ATTESTED

Proof of Execution.

ProvnAI transforms ephemeral agent logs into permanent cryptographic evidence. Every decision becomes a verifiable artifact.

Traditional Log (Artifact_Audit_V1)
Status: Mutable
Format: .json / .log
ID: 0x2a13603de45883cd63f57e13a32b6d692a6ddfe9
  • Plaintext trace
  • Easily modified
  • No cryptographic link

Verifiable Receipt (Artifact_Audit_V1)
Status: Immutable
Format: .attest / .capsule
ID: 0x4e5a093e823118f85c057414ce60974fedf32b22
  • Merkleized proof
  • Hardware signed
  • RFC 6962 compliant
Signers: 2/2

Evidence Portability

Portable receipts allow agents to carry their own proof, eliminating the need for blind trust between untrusted parties.

Ecosystem Safety & Safeguards

The Safety Stack

From hardware-level attestation to active tool-layer proxies, we consolidate fragmented agent security into a unified defense architecture.

Sim_Environment :: attestation
Live Preview
TPM 2.0 INTERACTOR: FETCHING PCR STATE...
QUOTE_SIG: 0x0000...000000000000000000000000000000
Metric: TPM 2.0 / vTPM
Status: Optimal

PYTHON PROXY v1.6.0

McpVanguard

A security proxy for AI agents that use MCP (Model Context Protocol). It interposes between the agent and the host system, inspects every tool call, and blocks attacks before they reach your underlying servers.

L1

Rules Engine

< 2ms

50+ YAML signatures — block path traversal, reverse shells, prompt injection, and SSRF attacks instantly.

L2

Semantic Scorer

async

LLM-based intent scoring via OpenAI, DeepSeek, Groq, or Ollama to detect zero-day evasion attempts.

L3

Behavioural

stateful

Shannon entropy and sliding-window anomaly detection. Stateful monitoring of conversational context.
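As a rough sketch of how the L1 and L3 layers above might work (the rule names, patterns, and thresholds here are hypothetical illustrations, not McpVanguard's actual 50+ YAML signature set):

```python
import math
import re
from collections import Counter

# Hypothetical signatures for illustration only; McpVanguard ships its
# real rules as YAML, which is not reproduced here.
SIGNATURES = {
    "path_traversal": re.compile(r"\.\./"),
    "reverse_shell": re.compile(r"/dev/tcp/|\bbash -i\b|\bnc -e\b"),
    "ssrf": re.compile(r"https?://(127\.0\.0\.1|169\.254\.169\.254|localhost)"),
}

def l1_rules_check(tool_call: str) -> list[str]:
    """L1: fast signature matching over the raw tool-call text."""
    return [name for name, sig in SIGNATURES.items() if sig.search(tool_call)]

def l3_entropy(window: str) -> float:
    """L3: Shannon entropy of a sliding window; unusually high values
    can flag encoded or obfuscated payloads."""
    counts = Counter(window)
    total = len(window)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(l1_rules_check("read_file(path='../../etc/passwd')"))  # ['path_traversal']
print(l3_entropy("ab"))  # 1.0
```

A real deployment layers these checks: L1 rejects known signatures in microseconds, and only traffic that passes is escalated to the slower L2 semantic scorer and L3 stateful monitors.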

v0.3 SPEC

Evidence Capsule [VEP]

Self-describing cryptographic container for agent logs. Every event (call, response, syscall) is Merkle-hashed and signed by a hardware-rooted TPM.

ZK-READY

Adversarial Debate [RBD]

Cognitive verification protocol. Red/Blue agents debate the validity of a proposed action until a cryptographically verifiable consensus is reached.

TEMPORAL

Cognitive Routing [A2A]

Secure transport layer for agent-to-agent negotiation. Preserves temporal memory integrity and prevents context hijacking in multi-agent swarms.
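A minimal sketch of the Merkle step behind Evidence Capsules, assuming SHA-256 and omitting the RFC 6962 leaf/node domain separation a real implementation needs; the function names are illustrative, not the VEP API:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(events: list[bytes]) -> bytes:
    """Hash each logged event as a leaf, then combine pairwise until a
    single root remains. Changing any event changes the root."""
    level = [sha256(e) for e in events]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

events = [b"tool_call:read_file", b"response:ok", b"syscall:open"]
print(merkle_root(events).hex()[:16])
```

In the VEP design, a hardware-rooted TPM then signs this root, so tampering with any single event invalidates the whole capsule.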

RUST CORE v0.1.4

VEX Protocol

The cryptographic substrate for the agentic era. VEX is a 17-crate Rust kernel that enforces a zero-trust security posture and mandatory auditability without sacrificing execution performance.

capsule_verification.rs

let capsule = VexCapsule::from_bytes(raw_data)?;

// Verify Merkle root & TPM signature
capsule.verify_integrity(tpm_pubkey)?;

match capsule.debate_consensus() {
    Consensus::Allow => execute_action(capsule),
    Consensus::Halt => panic!("Policy violation detected"),
}

Core Principle

Governed Execution

In most agentic systems, the component that proposes an action also authorises and executes it. Governed Execution mathematically separates these functions into independent primitives.

CHORA Gate Collaboration

The CHORA Gate holds continuation authority. Before any action executes, the agent requests a signed token. The VEX Authorization Enforcement Module (AEM) intercepts the syscall, verifies the token against a hardware TPM, and permits execution.

Evidence Capsules

The result is an Evidence Capsule: a signed record of intent, authority, identity, and cryptographic witness, fully interoperable between CHORA (Python) and VEX (Rust).
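As an illustration only (the real field names and encoding belong to the VEP .capsule specification, which is not reproduced here), the four pillars of a capsule might be modeled as:

```python
from dataclasses import dataclass

# Hypothetical capsule shape; the actual .capsule format is defined by
# the VEP spec, not by this sketch.
@dataclass(frozen=True)
class EvidenceCapsule:
    intent: str     # what the agent proposed (VEX reasoning audit)
    authority: str  # signed gate verdict (CHORA policy lock)
    identity: str   # hardware quote (TPM silicon anchor)
    witness: str    # Merkle root over the event trace

cap = EvidenceCapsule(
    intent="read_file /tmp/report.txt",
    authority="ALLOW:gate_sig=0x...",
    identity="tpm_quote=0x...",
    witness="merkle_root=0x...",
)
print(cap.authority.split(":")[0])  # ALLOW
```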

Authorization Enforcement Module (AEM) Handshake

1. AI Agent → VEX Runtime: Request action (syscall / API call)
2. VEX Runtime → CHORA Gate: Request continuation authority
3. CHORA Gate → VEX Runtime: Return signed ALLOW / HALT / ESCALATE
4. VEX Runtime → VEX AEM: Verify header + Merkle pillar hashes
5. VEX AEM → Hardware TPM: Validate identity / PCR registers
6. TPM → VEX AEM: Identity confirmed
7. VEX AEM → VEX Runtime: Issue short-lived capability token
8. VEX Runtime → AI Agent: Permit execution → generate Evidence Capsule

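The AEM handshake can be modeled as a toy flow. An HMAC over a shared demo key stands in for the TPM-rooted signatures, and the names here (chora_gate, vex_aem) are illustrative, not the real CHORA or VEX APIs:

```python
import hashlib
import hmac
import time
from typing import Optional

# Shared demo key standing in for hardware-rooted identity.
GATE_KEY = b"chora-demo-key"

def chora_gate(action: str) -> tuple:
    """Steps 2-3: the gate returns a signed ALLOW / HALT verdict."""
    verdict = "HALT" if "rm -rf" in action else "ALLOW"
    sig = hmac.new(GATE_KEY, f"{verdict}:{action}".encode(), hashlib.sha256).digest()
    return verdict, sig

def vex_aem(action: str, verdict: str, sig: bytes) -> Optional[str]:
    """Steps 4-7: verify the gate's signature against the trusted key,
    then mint a short-lived capability token."""
    expected = hmac.new(GATE_KEY, f"{verdict}:{action}".encode(), hashlib.sha256).digest()
    if verdict != "ALLOW" or not hmac.compare_digest(sig, expected):
        return None
    return f"cap:{action}:{int(time.time()) + 30}"  # token expires in 30 s

verdict, sig = chora_gate("read_file /tmp/report.txt")
print(vex_aem("read_file /tmp/report.txt", verdict, sig) is not None)  # True
```

The design point the sketch preserves: the runtime never decides for itself. Without a verifiable gate signature, no capability token is issued and the action never executes.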
Defining the Standard

Inference Proposes.
Governance Decides.

ProvnAI is co-authoring the .capsule Verifiable Agent Receipt specification alongside CHORA. We are defining the shared protocol for how autonomous agents prove their intent, authority, and identity across distributed ecosystems.

Intent
VEX reasoning audit
Decision
CHORA policy lock
Identity
Attest silicon anchor
CAPSULE_PAYLOAD_V1.0
FIELD              PROVIDER
INTENT_MAPPING     VEX_PROTOCOL
GOVERNANCE_GATE    CHORA_AUTHORITY
HARDWARE_QUOTE     ATTEST_SILICON
Verified Capsule Finality
Performance Verified

See It Run

We ran a 10x scale-test pipeline using DeepSeek v3. The results show that VEX's concurrency model handles high-throughput agent swarms with minimal latency overhead.

1.6s
Single Agent Baseline
3.0s
Concurrent (5x)
7.7s
Sequential (5x)

Latency Comparison (Lower is Better)

Single Agent: 1,616 ms
VEX Concurrent (5x): 3,042 ms · 🚀 2.5x Faster than Sequential
Python Sequential (5x): 7,768 ms
DATA SOURCE: scale_test_results.json VERIFIED
Sovereign Proof

VEP Explorer.

Verify the cryptographic integrity of VEX Evidence Capsules locally. Zero server-side visibility. 100% cryptographic proof.

Live Logic Trail

Watch intent mapping as it happens.

Proof Validation

Every step is verified by the network.

EXPLORER.PROVNAI.COM
TECHNOLOGY PREVIEW

Where It Started

Before VEP. Before CHORA. Before Evidence Capsules.

VEXEvolve ran 29 autonomous agents for a full month — 480 articles researched, 158 published, 150 anchored to Solana. No human intervention.

That was VEX v0.1.4. A proof that verifiable autonomous agents work in the real world.

What we're building now is a different class entirely.

About the initiative

ProvnAI is an independent open research initiative. Everything published is open source under the MIT or Apache 2.0 licenses.

Initial Research Commit: 13 December 2025 · 90-Day Build to v0.1 Core Launch