In the future, AI agents won’t choose crypto rails because they’re trendy. They’ll use them because they’re the only rails that match how agents operate: always-on, global and programmable.
Traditional financial rails are built for human operation: accounts, approvals, business hours, fragmented jurisdictions, slow settlement and closed APIs. AI agents are the opposite: always-on, global by default, operating at internet speed and coordinating across dozens of services at once.
As AI agents move from “recommendations” to “execution,” they become a new class of economic actor. They will find opportunities, run workflows, pay for services, route orders and manage risk continuously. The limiting factor won’t be model quality alone; it will be trust. If, for example, a human delegates booking an overseas trip, the agent must be trusted to make decisions that serve the user’s interests. Payments are simply the first domain where this trust problem becomes visible. The deeper issue is coordination and verifiable execution between autonomous actors.
A recent proof point is OpenClaw, an open agent that hit 100k GitHub stars in under a week by automating everyday tasks like email, appointment scheduling and travel planning inside messaging apps that people already use.
Whilst it showed how fast agents that do real work can gain traction, it also exposed critical security vulnerabilities. Cisco’s security team recently documented how OpenClaw ran malicious add-ons that secretly sent users’ data to external servers and performed actions without permission.
Therefore, the core issue isn’t the agent itself, but the trust model. When you grant an agent access to your email, calendar and messaging apps, you’re extending blanket trust with no way to verify, audit, or constrain what it does with those credentials.
Once agents can act on your behalf across all software, trust becomes the bottleneck and the problem only gets worse as the stakes get higher.
Today, agents like OpenClaw handle low-stakes tasks like scheduling meetings, summarising emails and drafting messages. But as agents move toward high-value actions like payments, legal work and business operations, giving them access to all of your credentials and private information becomes riskier. You can’t audit what the agent has done, verify it acted within your instructions, or prove to a counterparty that it was authorised to act on your behalf. Agents can also act against users’ interests, even inadvertently.
Existing technology companies like OpenAI, Anthropic and, soon, Stripe for payments are building trust through brand reputation and closed ecosystems. But their agents are constrained by siloed integrations, gated partnerships and centralised control over what can and cannot be automated. On traditional rails, APIs can be revoked, access throttled, or automation blocked when it threatens incumbents.
On the other hand, crypto infrastructure is permissionless and peer-to-peer. An agent can discover a service, pay for it and settle directly without asking for platform approval. That makes crypto not just cheaper rails, but neutral rails for autonomous commerce.
Crypto turns value transfer into a developer primitive. A wallet is a programmable entity with the ability to hold, send and receive value. Crypto enables always-on settlement, global interoperability, composability across services and atomic execution (i.e. “do + pay” in the same step). It also offers a crucial ingredient for AI agents – verifiability.
At the base layer, blockchains provide strong post-hoc verifiability and auditability: you can prove what happened. But in an AI agent economy, the bigger benefit will be preventative verifiability (i.e. transactions that cannot finalise unless they satisfy user-defined rules and constraints).
Preventative, policy-bound execution will make it possible to trust agents with high-stakes economic activity.
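A minimal sketch of what policy-bound, atomic “do + pay” execution could look like, assuming a hypothetical wallet API (all names here are illustrative, not a real protocol): the transfer finalises only if every user-defined policy passes, the payment settles only if the action succeeds, and each settled transaction leaves an audit trail.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    recipient: str
    amount: float
    memo: str

class PolicyBoundWallet:
    """Hypothetical wallet: value moves only within user-defined policy."""

    def __init__(self, balance: float, policies: list[Callable[[Transaction], bool]]):
        self.balance = balance
        self.policies = policies
        self.audit_log: list[Transaction] = []  # post-hoc verifiability

    def execute_and_pay(self, tx: Transaction, action: Callable[[], bool]) -> bool:
        # Preventative check: refuse to finalise unless every policy holds.
        if not all(policy(tx) for policy in self.policies):
            return False
        if tx.amount > self.balance:
            return False
        # Atomic "do + pay": payment settles only if the action succeeds.
        if not action():
            return False
        self.balance -= tx.amount
        self.audit_log.append(tx)
        return True

# Policies the user delegates under (illustrative)
policies = [
    lambda tx: tx.amount <= 500,           # per-transaction cap
    lambda tx: tx.recipient != "unknown",  # known counterparties only
]

wallet = PolicyBoundWallet(balance=1000.0, policies=policies)
ok = wallet.execute_and_pay(
    Transaction(recipient="airline", amount=420.0, memo="SFO-JFK"),
    action=lambda: True,  # stands in for the real booking call
)
```

On-chain, the policy check and the transfer would live in the same transaction, so a violating action can never settle in the first place.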
When autonomous systems act, users and businesses need more than audit trails. They need constraints that bind agent behaviour to policy.
Basic tools like spending limits minimise risk, but they don’t capture context-specific intent. “Book a refundable SFO to JFK under $500 on these dates” is not a simple rule; it requires external context such as information about the user, access to wallets, flight availability, passport details and special deals. The intent also contains sensitive data that must stay confidential and not be misused.
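To make the contrast concrete, the flight example above can be encoded as a structured intent rather than a bare spending limit. This is an illustrative sketch (the types and fields are assumptions, not an existing standard): the agent may only finalise an offer that satisfies every constraint the user set.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FlightOffer:
    origin: str
    destination: str
    price: float
    refundable: bool
    depart: date

@dataclass
class TravelIntent:
    """Structured form of 'book a refundable SFO to JFK under $500 on these dates'."""
    origin: str = "SFO"
    destination: str = "JFK"
    max_price: float = 500.0
    must_be_refundable: bool = True
    window: tuple[date, date] = (date(2025, 6, 1), date(2025, 6, 7))

    def permits(self, offer: FlightOffer) -> bool:
        start, end = self.window
        return (
            offer.origin == self.origin
            and offer.destination == self.destination
            and offer.price <= self.max_price
            and (offer.refundable or not self.must_be_refundable)
            and start <= offer.depart <= end
        )

intent = TravelIntent()
offer_ok = FlightOffer("SFO", "JFK", 450.0, True, date(2025, 6, 3))
offer_bad = FlightOffer("SFO", "JFK", 620.0, True, date(2025, 6, 3))
```

A spending cap would reject `offer_bad` too, but only a structured intent can also enforce refundability, route and dates, and it can do so without revealing the user’s full context to the counterparty.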
The hard problem, and the real opportunity, is combining contextual data and policy into settlement at scale without reintroducing a third-party intermediary.
In many cases, what matters most is verifying outcomes, not every intermediate step. Models and tools will evolve quickly, but users will care that the result respects their rules, constraints and capital.
Long-term, AI models will converge and infrastructure will commoditise. The chat interface will become table stakes. Value will accrue to the control planes agents rely on: identity, permissions, routing, settlement, compliance abstraction and reputation. The durable winners will not be “an agent” but the systems that make agents reliable in the real world across interoperable rails.
The “Uber moment” for agents won’t come from intelligence alone. It will come from flipping trust: from “I’m not sure if I’m comfortable trusting this” to “I can delegate this because it executes within my rules, with guarantees.”
The biggest agent companies won’t just be “better models.” They will be the systems that make delegation safe.
The startup opportunity
This is where the startup opportunity lives. Incumbents will own major distribution surfaces (e.g. OpenAI and Anthropic in the chat interface, Apple and Google at the OS layer and Stripe across payments) but they are structurally incentivised to build walled gardens. They bias integrations toward their own networks, move slowly on high-risk primitives and avoid neutrality across competing models, wallets and rails.
Startups can win by becoming the trusted execution layer between user intent and real-world outcomes:
- policy and permissioning control plane for delegation
- neutral router for best execution across tools and venues
- trust layer that makes autonomous workflows safe through escrow, guarantees, dispute resolution and auditable state
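As an illustration of the third point, the trust layer’s escrow primitive might work along these lines (a sketch under assumed names, not a real protocol): funds are locked when the task starts, released only when the outcome passes the user’s check, and refunded otherwise, leaving an auditable settlement state either way.

```python
from enum import Enum
from typing import Callable

class EscrowState(Enum):
    LOCKED = "locked"
    RELEASED = "released"
    REFUNDED = "refunded"

class Escrow:
    """Hypothetical escrow: settlement is gated on a verifiable outcome."""

    def __init__(self, amount: float, verify_outcome: Callable[[dict], bool]):
        self.amount = amount
        self.verify_outcome = verify_outcome  # user-defined check on the result
        self.state = EscrowState.LOCKED

    def settle(self, result: dict) -> EscrowState:
        if self.state is not EscrowState.LOCKED:
            return self.state  # terminal states are final and auditable
        # Release to the agent only if the outcome satisfies the user's check;
        # otherwise refund, which doubles as the dispute record.
        self.state = (
            EscrowState.RELEASED
            if self.verify_outcome(result)
            else EscrowState.REFUNDED
        )
        return self.state

escrow = Escrow(
    amount=450.0,
    verify_outcome=lambda r: r.get("refundable", False) and r["price"] <= 500,
)
state = escrow.settle({"refundable": True, "price": 450})
```

The key property is that neither party has to trust the other: the counterparty knows locked funds exist, and the user knows they only move on a verified outcome.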
This mirrors how Stripe succeeded not by inventing money, but by abstracting complexity, improving developer experience and reliably routing outcomes.
The largest markets won’t be driven by novelty. They will come from relieving users of systems they already find cumbersome: AI agents will remove friction from high-frequency, high-cost workflows where coordination is still shockingly manual and inefficient because trust is expensive, such as:
- payments and treasury
- cross-border commerce
- invoicing and reconciliation
- procurement and approvals
- disputes and claims
- personal logistics like travel, email and calendar management
As AI agents become default operators of the economy, crypto becomes the settlement substrate that lets them transact, coordinate and prove what they did across an open ecosystem.
AI will get cheaper and more common. What will matter is which systems people feel safe letting AI act for them. That’s why rails that make actions secure and reliable matter, and why the most durable startup opportunities sit in the trust, execution and interoperability layers that make delegation safe.
We at @frachtisvc are investing in agent-native crypto applications and infrastructure that abstract complexity, collapse workflows, execute reliably, personalise deeply, interoperate openly and deliver trusted outcomes.
Reach out if you are building in the space.
Special thank you to Felix Lutsch (Symbiotic), Seref Yarar (Index Network), Erwin Dassen (Chorus One), John Shutt (Oya Protocol) and Myles O’Neill (Delta) for their thoughtful feedback and review. Also, a big thank you to Adina Fischer and Chiin Gandia for editing.