Arjun Sethi, co-CEO of one of the largest crypto exchanges on the planet, says he'd trust an AI agent with 100% of his crypto. Haseeb Qureshi — managing partner at Dragonfly, one of the sharpest venture minds in the space — essentially told him he's out of his mind.
This wasn't a Twitter spat. It was a live debate at NEARCON 2026, and it might be the most important conversation happening in crypto right now.
Not because of the personalities involved — though they're heavyweights — but because the question they're wrestling with sits at the exact intersection of two megatrends that are about to collide: autonomous AI agents and self-custodial digital assets.
Get this right, and you unlock permissionless portfolio management for billions of people who currently can't afford a financial advisor. Get it wrong, and you hand the keys to the kingdom to a system that hallucinates.
The Case for AI Custody
Sethi's position isn't as reckless as it sounds on first listen. The Kraken co-CEO is essentially arguing that AI agents are approaching — or have already reached — a competence threshold where they can execute portfolio management tasks more reliably than most humans.
Think about it: the average retail investor panic-sells bottoms, FOMOs into tops, and makes emotional decisions that consistently destroy value. An AI agent doesn't have emotions. It doesn't check Twitter at 3 AM and decide to ape into a memecoin because a cartoon frog told it to.
There's also a democratization argument here that's genuinely compelling. Right now, sophisticated portfolio management — rebalancing, tax-loss harvesting, cross-chain yield optimization — is either reserved for the wealthy who can afford human advisors, or locked behind centralized platforms that require you to hand over custody.
An AI agent operating onchain, managing assets in a non-custodial smart contract framework, could theoretically give every person on Earth access to institutional-grade financial management. No KYC gatekeepers. No minimum balance. No Goldman Sachs relationship manager required.
That vision is powerful, and Sethi clearly sees the trajectory. When you run an exchange processing billions in volume, you've seen firsthand how automated systems already outperform human decision-making in narrow domains. The leap to broader AI custody feels incremental from that vantage point.
Qureshi's Skepticism Is Earned
But Qureshi's pushback isn't FUD — it's engineering realism. As someone who's deployed capital across dozens of crypto-native projects at Dragonfly, he understands something crucial: the alignment problem isn't theoretical in crypto. It's financial. When an AI agent misinterprets a prompt, hallucinates a strategy, or gets exploited through an adversarial input, the consequence isn't a weird chatbot response. It's irreversible loss of funds on an immutable ledger.
This is where the crypto-specific risk profile diverges sharply from traditional AI deployment. If a customer service chatbot gives bad advice, you can issue a refund. If an AI agent sends your ETH to a malicious contract, there's no undo button. There's no FDIC. There's no support ticket. The entire value proposition of self-custody — you control your assets — becomes a liability when the "you" making decisions is a probabilistic language model.
Qureshi's concern about premature deployment resonates with anyone who's watched this industry move fast and break things — except the things that break are people's savings. We've seen this movie before with algorithmic stablecoins, with automated trading bots that get front-run, with smart contracts that looked bulletproof until they weren't. The pattern is always the same: the technology works perfectly until it encounters an edge case that nobody modeled.
The Real Question Is Architecture, Not Timeline
Here's where I think both debaters are slightly missing the forest for the trees. The question isn't when AI agents will be trustworthy enough — it's how we architect the trust model. And this is where crypto's existing infrastructure actually gives us a massive advantage over traditional finance.
Smart contracts already solve a version of this problem. You don't trust a DeFi protocol because you trust the developer — you trust it because the code is open-source, audited, and constrained by onchain logic.
The same principle can apply to AI agents. Imagine an AI agent that manages your portfolio but operates within a smart contract cage: it can rebalance between approved assets, but it literally cannot drain your wallet to an unknown address. It can execute strategies within predefined risk parameters, but it can't exceed your maximum drawdown threshold because the contract won't let it.
This is the composability superpower that crypto brings to the AI agent conversation. In traditional finance, you trust your wealth manager because of regulation, reputation, and legal recourse — all of which are slow, expensive, and biased toward incumbents. In crypto, you can encode trust constraints directly into execution logic. The AI agent doesn't need to be perfectly aligned. It needs to be bounded.
The breakthrough isn't building an AI agent you trust completely — it's building a system where you don't have to.
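To make the "smart contract cage" idea concrete, here is a minimal sketch in Python of the constraint logic such a system might enforce. Onchain, this would live in a smart contract rather than application code, and every name and threshold here (the vault class, the whitelist, the 20% drawdown floor) is an invented illustration, not a real protocol.

```python
class GuardrailViolation(Exception):
    """Raised when the agent proposes an action the constraints forbid."""

class BoundedAgentVault:
    """A toy model of a vault that bounds what an AI agent can do."""

    def __init__(self, approved_assets, max_drawdown_pct, initial_value):
        self.approved_assets = set(approved_assets)  # whitelist the agent may touch
        self.max_drawdown_pct = max_drawdown_pct     # e.g. 20 means a -20% floor
        self.high_water_mark = initial_value         # peak portfolio value seen
        self.value = initial_value

    def rebalance(self, sell_asset, buy_asset):
        # The agent can only swap between approved assets.
        if sell_asset not in self.approved_assets or buy_asset not in self.approved_assets:
            raise GuardrailViolation(f"asset not whitelisted: {sell_asset} -> {buy_asset}")
        return True  # in a real system: execute the swap via an approved venue

    def withdraw(self, destination, owner_addresses):
        # The agent literally cannot send funds to an unknown address.
        if destination not in owner_addresses:
            raise GuardrailViolation(f"{destination} is not owner-controlled")
        return True

    def mark_value(self, new_value):
        # Enforce the maximum drawdown threshold: halt the agent if breached.
        self.high_water_mark = max(self.high_water_mark, new_value)
        drawdown = 100 * (self.high_water_mark - new_value) / self.high_water_mark
        if drawdown > self.max_drawdown_pct:
            raise GuardrailViolation(f"drawdown {drawdown:.1f}% exceeds limit")
        self.value = new_value
```

The design point is that the model's output is never trusted directly: every proposed action passes through deterministic checks, so a hallucinated strategy fails loudly instead of draining the wallet.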
What This Means for the Rest of Us
The Sethi-Qureshi debate is a preview of a much larger conversation that's going to dominate crypto for the next two to three years. As AI agents become more capable, every protocol, every wallet, and every exchange is going to have to answer the question: how much autonomy do you give the machine?
A few things to watch:
Onchain agent frameworks — projects building smart contract architectures that constrain AI agent behavior are going to be critically important infrastructure. The winners won't be the flashiest AI models; they'll be the most robust guardrail systems.
Insurance and recovery layers — if AI agents are managing real capital, the market will demand onchain insurance protocols that can cover agent failures. This is a massive design space that's barely been explored.
Regulatory arbitrage — and here's the part that should make every decentralization advocate pay attention. If AI agents can manage portfolios autonomously, regulators will absolutely try to classify them as investment advisors, fiduciaries, or something worse. The jurisdictional battles are coming.
Incremental trust models — the most likely path forward isn't Sethi's 100% trust or Qureshi's caution. It's graduated autonomy: start with small allocations, limited action spaces, and human-in-the-loop approvals, then expand as the agent proves reliability over time.
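The graduated-autonomy idea in that last point can be sketched as a simple policy check. The tier thresholds, allocation caps, and approval rules below are invented for illustration; the point is only the shape of the mechanism, in which autonomy is earned from a track record rather than granted up front.

```python
# Each tier: (min successful actions, max allocation fraction, human approval required)
TIERS = [
    (0,   0.05, True),   # sandbox: 5% of portfolio, every action approved
    (50,  0.25, True),   # proven basics: larger allocation, still supervised
    (200, 0.50, False),  # trusted: routine actions auto-execute
]

def current_tier(successful_actions: int):
    """Return the highest tier the agent's track record qualifies for."""
    tier = TIERS[0]
    for t in TIERS:
        if successful_actions >= t[0]:
            tier = t
    return tier

def may_execute(successful_actions: int, allocation_fraction: float,
                human_approved: bool) -> bool:
    """Check a proposed action against the agent's earned autonomy."""
    _, max_alloc, needs_approval = current_tier(successful_actions)
    if allocation_fraction > max_alloc:
        return False
    if needs_approval and not human_approved:
        return False
    return True
```

A new agent in this sketch can touch only a sliver of the portfolio and needs sign-off on everything; only after hundreds of verified successes does the human drop out of the loop for routine actions.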
The Optimistic Read
What excites me most about this debate is that it's happening inside crypto, between people who actually build things. This isn't a congressional hearing where senators ask if Bitcoin can be banned. It's two practitioners arguing about implementation timelines and risk management — which means the underlying premise is already settled. AI agents will manage onchain assets. The only question is the architecture of trust.
And honestly? Crypto is better positioned to solve this than any other industry. We've spent a decade building trustless systems, verifiable computation, and programmable money.
The entire ethos of this space is: don't trust, verify. That's exactly the framework you need when your portfolio manager is a neural network.
Sethi's confidence might be premature. Qureshi's caution might be too conservative. But the fact that both are taking the question seriously means the future of AI-managed, self-custodial finance is closer than most people think — and it's going to be built onchain.