
IronClaw: NEAR AI’s Answer to the Trust Problem in Always-On AI

Feb 23, 2026

AI agents are becoming persistent, autonomous, and deeply embedded in everyday workflows. But as they gain the ability to act on our behalf, a harder question emerges: who controls the data, the execution, and the trust layer?

Today, NEAR AI introduced its answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously — without exposing sensitive data, credentials, or user intent.

A Runtime Built for Autonomous AI — Without Blind Trust

IronClaw builds on the original OpenClaw vision, but strengthens it with cryptographic guarantees from the ground up. Written in Rust and deployed inside encrypted Trusted Execution Environments (TEEs) on NEAR AI Cloud, the runtime allows AI agents to access tools, maintain memory, and take actions on users’ behalf — all within a tightly controlled security boundary.

Rather than asking users to trust opaque platforms, IronClaw shifts the trust model toward verifiable execution. Data and inference stay protected at the hardware level, and agents operate under explicit, enforceable permissions.

Security by Architecture, Not Add-Ons

IronClaw is designed with defense-in-depth as a core principle.

Every untrusted or third-party tool runs in its own sandbox, limited to only the resources it is explicitly authorized to access. Network calls are restricted to approved destinations. Sensitive credentials are injected only at runtime and never exposed directly to tools or external services.
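The deny-by-default model described above can be sketched in a few lines. This is an illustrative sketch only, not IronClaw's actual API: the names (`ToolPolicy`, `check_network_call`, `inject_credentials`) are hypothetical, and the real runtime enforces these boundaries in Rust at the sandbox level rather than in application code.

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Hypothetical per-tool permission policy (illustrative names)."""
    allowed_hosts: set = field(default_factory=set)  # approved network destinations
    allowed_paths: set = field(default_factory=set)  # resources the tool may read

def check_network_call(policy: ToolPolicy, host: str) -> bool:
    # Deny by default: only explicitly approved destinations pass.
    return host in policy.allowed_hosts

def inject_credentials(headers: dict, secret_store: dict, service: str) -> dict:
    # The runtime resolves secrets at call time; the tool itself never
    # sees the raw credential in its own code or configuration.
    out = dict(headers)
    out["Authorization"] = f"Bearer {secret_store[service]}"
    return out

policy = ToolPolicy(allowed_hosts={"api.example.com"})
assert check_network_call(policy, "api.example.com")
assert not check_network_call(policy, "evil.example.net")
```

The key design point is that the policy is evaluated by the runtime, outside the tool's sandbox, so a compromised tool cannot widen its own permissions.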

Agent activity is continuously monitored to detect misuse, including protections against prompt-injection attacks and abusive resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Importantly, IronClaw collects no telemetry or analytics, ensuring execution remains fully private.
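The encrypt-before-store pattern for AES-256-GCM looks roughly like the sketch below. It assumes the third-party Python `cryptography` package and is a generic illustration of authenticated encryption at rest, not IronClaw's implementation (which is Rust code writing to PostgreSQL).

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, aad: bytes = b"") -> bytes:
    # AES-256-GCM with a fresh 96-bit nonce per record, prepended to the
    # ciphertext so decryption can recover it. GCM also authenticates the
    # data, so tampering with the stored blob is detected on decrypt.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_record(key: bytes, blob: bytes, aad: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, aad)

key = AESGCM.generate_key(bit_length=256)  # 32-byte key for AES-256
blob = encrypt_record(key, b"agent memory entry")
assert decrypt_record(key, blob) == b"agent memory entry"
```

Never reuse a nonce under the same key with GCM; generating a random nonce per record, as above, is the standard way to satisfy that constraint.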

A complete audit log gives users visibility into every tool interaction — transparency without surveillance.
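One common way to make such a log tamper-evident is hash chaining, where each entry commits to the one before it. The sketch below is a generic illustration of the idea, not IronClaw's log format:

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    # Each entry's digest covers the previous digest plus the event,
    # so any after-the-fact edit breaks the chain and is detectable.
    prev = log[-1]["digest"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "digest": digest})

def verify_log(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log = []
append_entry(log, {"tool": "web_search", "action": "fetch"})
append_entry(log, {"tool": "calendar", "action": "read"})
assert verify_log(log)
log[0]["event"]["action"] = "write"  # tampering
assert not verify_log(log)
```

This gives the "transparency without surveillance" property a concrete shape: the log stays local, yet its integrity can be checked by anyone holding it.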

Privacy-First AI, Ready to Deploy

IronClaw launches with a free Starter tier that includes one hosted agent instance running inside NEAR AI’s secure environment and powered by its inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.

The goal isn’t just safer agents — it’s practical deployment without forcing teams to choose between convenience and control.

Why This Matters

As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw represents a different direction: local control, verifiable execution, and privacy by default.

Illia Polosukhin, Co-Founder of NEAR Protocol and Founder of NEAR AI, described IronClaw as an “agentic harness designed for security,” extending NEAR’s full-stack trust model from blockchain infrastructure into the AI layer itself.

Rather than bolting security onto agentic AI after the fact, IronClaw embeds it into the runtime — combining confidential inference, cryptographic verification, and hardware-backed execution into a single system.
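The verification flow implied above, where a client accepts an agent's output only if it came from an attested runtime, can be sketched at a very high level. This is a heavily simplified illustration: the HMAC below stands in for a hardware-rooted attestation signature, and real TEE attestation involves vendor certificate chains and quote verification that this sketch omits entirely.

```python
import hashlib
import hmac

# Hypothetical known-good code measurement of the runtime build.
TRUSTED_MEASUREMENT = hashlib.sha256(b"ironclaw-runtime-v1").hexdigest()

def sign_report(key: bytes, measurement: str, output: bytes) -> str:
    # Stand-in for the enclave's hardware-backed report signature.
    return hmac.new(key, measurement.encode() + output, hashlib.sha256).hexdigest()

def verify_report(key: bytes, measurement: str, output: bytes, sig: str) -> bool:
    if measurement != TRUSTED_MEASUREMENT:
        return False  # the enclave is not running the expected build
    expected = sign_report(key, measurement, output)
    return hmac.compare_digest(expected, sig)

key = b"shared-verification-key"
sig = sign_report(key, TRUSTED_MEASUREMENT, b"inference result")
assert verify_report(key, TRUSTED_MEASUREMENT, b"inference result", sig)
assert not verify_report(key, "deadbeef", b"inference result", sig)
```

The point of the pattern is that trust attaches to a measured, signed execution environment rather than to the operator's promises.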

A Foundation for Responsible Agentic AI

George Zeng, Chief Product Officer and GM of NEAR AI, framed the launch more bluntly:

"AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw is meant to close that gap — giving developers and enterprises the confidence to deploy always-on agents without surrendering transparency or control."

IronClaw is available now, with code accessible via NEAR AI’s GitHub.

As AI moves from tools to actors, IronClaw signals a clear position: autonomy should not come at the cost of privacy, and intelligence should never require blind trust.