Deep Dive: Agentic AI in Financial Crime Fighting
The industry currently operates in a state of high-cost inefficiency. Banks commonly allocate 10% to 15% of their total headcount to Know Your Customer (KYC) and Anti-Money Laundering (AML) activities, yet they detect only about 2% of global financial crime flows. This delta between operational spend and effectiveness is the “compliance trap.” I believe agentic AI is the only credible exit strategy for this trap.
Agentic AI represents a shift from “assistive” technology to “autonomous” execution. While generative AI (GenAI) summarizes data and analytical AI identifies patterns, agentic AI has the capacity to plan, execute, and adapt sequences of actions to reach a specific objective. It is the difference between a chatbot that writes a summary and a digital worker that investigates a case.
The obsolescence of no-code and rule-based frameworks
For a decade, “no-code” was the benchmark for risk operations. It allowed compliance teams to build rules without engineering support. However, as crime volumes escalated, the analyst became the bottleneck. In traditional AML, up to 95% of alerts are false positives. Building a single Suspicious Activity Report (SAR) can take four or more days.
No-code tooling is no longer sufficient. The requirement is now for AI Risk Infrastructure. This infrastructure executes the full financial crime lifecycle: detecting risk in real time, investigating alerts end-to-end, and producing regulator-ready filings. Unit21’s 2026 relaunch signals this transition. Their platform moved from being a no-code rules engine to an agentic system where AI agents tune detection logic and conduct investigations without human analysts driving every step.
Defining agentic AI in risk operations
Agentic AI refers to systems that act with a degree of autonomy toward defined goals. In financial crime fighting, this means the AI can decide which data sources to query, how to interpret inconsistent information, and when to escalate a case.
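The autonomy described above can be sketched as two small decisions: which sources to query for a given alert, and when to escalate. This is a minimal illustration, not a real product API; the source names, alert fields, and threshold are all hypothetical.

```python
# Toy sketch of an autonomous investigation step: the agent chooses
# which data sources to query and decides when to escalate.
# All source names, fields, and thresholds are hypothetical.

RISK_ESCALATION_THRESHOLD = 0.8  # illustrative cutoff

def choose_sources(alert):
    """Pick data sources based on the alert's features, not a fixed list."""
    sources = ["transaction_history", "entity_profile"]
    if alert.get("watchlist_hit"):
        sources.append("sanctions_lists")
    if alert.get("cross_border"):
        sources.append("counterparty_registry")
    return sources

def should_escalate(risk_score, evidence):
    """Escalate on high risk or when retrieved evidence is contradictory."""
    return (risk_score >= RISK_ESCALATION_THRESHOLD
            or evidence.get("contradictory", False))

alert = {"id": "A-1042", "watchlist_hit": True, "cross_border": False}
sources = choose_sources(alert)
escalate = should_escalate(0.85, {"contradictory": False})
```

The point of the sketch is that routing and escalation are decided per case, which is what separates an agent from a static rules table.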
Comparison of AI Generations in Compliance
Analytical AI identifies patterns in historical data. Generative AI summarizes and drafts from that data. Agentic AI plans, executes, and adapts sequences of actions toward a defined objective, with minimal human direction at each step.
The productivity potential of agentic AI is a 20-fold increase over manual practitioners. I categorize these agents into squads that mirror human roles along the value chain. Retrieval-Augmented Generation (RAG) agents handle data extraction from profit-and-loss statements and beneficial ownership documents. Data pipeline agents orchestrate ETL processes and perform entity resolution across fragmented datasets. Research agents monitor market trends and counterparty patterns, while validation agents review the other agents' outputs to ensure quality.
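The squad structure above can be written down as a simple role registry. The role names and one-line responsibilities mirror the text; the data structure itself is a sketch, not a vendor schema.

```python
# Illustrative registry of the "agent squads" described in the text.
# Names and responsibilities mirror the article; the schema is a sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRole:
    name: str
    responsibility: str

SQUAD = [
    AgentRole("rag_agent", "extract data from P&L and ownership documents"),
    AgentRole("pipeline_agent", "orchestrate ETL and entity resolution"),
    AgentRole("research_agent", "monitor market and counterparty patterns"),
    AgentRole("validation_agent", "review other agents' outputs for quality"),
]

def find_role(name):
    """Look up a squad member by name."""
    return next(role for role in SQUAD if role.name == name)
```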
The AI investigation workflow
When an alert enters the queue, the AI Investigation Agent follows a structured workflow rather than starting from a blank page.
Signal gathering: The agent retrieves the transaction history, entity profile, risk scores, and watchlist matches. It navigates across disparate screens to assemble the context a senior analyst would require.
Workflow orchestration: The agent follows modular steps configured to the institution’s standard operating procedures (SOPs). This includes checking prior alert history, running OSINT searches, and cross-referencing sanctions lists.
Findings assembly: The agent produces a structured package containing a written narrative, evidence logs, and a recommended disposition. The reasoning is explicit and traceable.
The “human-in-the-loop” model remains the default for final dispositions. Analysts approve, modify, or override the agent’s package, ensuring human accountability.
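The three stages above, ending in a human disposition, can be sketched as a small pipeline. Every function body here is a stub with invented field names; it only shows the shape of the workflow, not real detection logic.

```python
# Sketch of the three-stage investigation workflow, ending in a package
# that a human analyst must approve, modify, or override.
# All field names and stub values are illustrative.

def gather_signals(alert_id):
    """Stage 1: assemble transaction history, profile, and matches."""
    return {"alert_id": alert_id,
            "transactions": ["txn-1", "txn-2"],
            "watchlist_matches": []}

def orchestrate(signals):
    """Stage 2: run modular SOP steps (prior alerts, OSINT, sanctions)."""
    checks = {"prior_alerts": 0,
              "osint": "no adverse media",
              "sanctions": "no match"}
    return {**signals, "checks": checks}

def assemble_findings(case):
    """Stage 3: narrative, evidence log, and recommended disposition."""
    return {"narrative": (f"Alert {case['alert_id']}: sanctions "
                          f"{case['checks']['sanctions']}, "
                          f"{case['checks']['osint']}."),
            "evidence": case["checks"],
            "recommended_disposition": "close_no_sar"}

def human_disposition(package, decision):
    """Human-in-the-loop: the analyst's decision is final and recorded."""
    assert decision in {"approve", "modify", "override"}
    return {**package, "final_decision": decision}

package = assemble_findings(orchestrate(gather_signals("A-1042")))
final = human_disposition(package, "approve")
```

Keeping each stage as a separate function is what makes the workflow auditable: every intermediate artifact can be logged and replayed.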
Context engineering vs prompt engineering
The hardest engineering challenge in agentic AI is not writing better prompts; it is context engineering. To produce an auditable investigation narrative, the model must receive exactly the right evidence without overloading its context window. LLMs are built on the transformer architecture, in which every token attends to every other token, producing n² pairwise relationships. This leads to attention scarcity as context length increases.
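The n² cost is worth making concrete: because attention forms one relationship per ordered token pair, doubling the context length quadruples the attention work.

```python
# The quadratic cost behind "attention scarcity": with n tokens,
# self-attention forms n * n token-pair relationships, so doubling
# the context length quadruples the work.

def attention_pairs(n_tokens):
    """Number of pairwise token relationships in full self-attention."""
    return n_tokens * n_tokens

# Doubling context (2048 -> 4096 tokens) quadruples the pair count.
assert attention_pairs(4096) == 4 * attention_pairs(2048)
```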
Effective context engineering is the science of curating high-signal tokens to maximize the likelihood of a desired outcome. For example, Unit21 leverages its dataset from seven years of human reviews to determine the optimal context required to complete a given task. Agent outputs are then evaluated against historical investigations completed by high-performing analysts to ensure correctness, consistency, and effectiveness.
Evaluation is performed using “LLM-as-a-judge” architectures. A secondary, more capable model evaluates the quality of the primary agent’s output, creating a self-checking layer that flags inconsistencies before they reach a human reviewer. This is supplemented by citation validation, where the system verifies that agent claims are grounded in retrieved data rather than model inference.
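The two checks described here can be stubbed out in a few lines. The judge below is a rule-based placeholder standing in for the secondary model, and the citation check simply verifies that every cited source was actually retrieved; all names and rules are hypothetical.

```python
# Sketch of the two quality layers described in the text: a judge pass
# (stubbed as rules here; in production a secondary, more capable model)
# and citation validation against the set of actually retrieved documents.
# All names, rules, and IDs are hypothetical.

def judge(narrative, evidence):
    """Stub judge: flag narratives that are unsupported or speculative."""
    issues = []
    if not evidence:
        issues.append("no supporting evidence retrieved")
    if "probably" in narrative.lower():
        issues.append("speculative language in regulator-facing text")
    return issues

def validate_citations(claims, retrieved_ids):
    """Return claims that cite a source the agent never retrieved."""
    return [c for c in claims if c["source_id"] not in retrieved_ids]

claims = [{"text": "Counterparty is sanctioned", "source_id": "doc-7"},
          {"text": "Funds moved offshore", "source_id": "doc-99"}]
ungrounded = validate_citations(claims, {"doc-7", "doc-12"})
```

The key design choice is that citation validation is deterministic set membership, so a grounded claim can never be flagged and an ungrounded one can never slip through.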
The three failure modes of AI agents
Most early deployments of AI agents fail because of poor guardrails rather than weak models.
The hallucinating investigator: This happens when teams provide too much context and open-ended prompts. In adversarial environments, the model fills data gaps with plausible but incorrect narratives. The solution is to use “atomic agents” with narrow decision boundaries.
The over-suspicious agent: Pattern-driven training without contextual grounding leads to over-escalation. For example, flagging high-value payments between related internal accounts as “layering”. Grounding questions must be injected into the agent’s logic to prevent default conclusions of fraud.
The black box agent: Producing conclusions that are not defensible to regulators. Accurate outputs without a chain of evidence are a liability. Agents must pull data deterministically and focus on structured documentation.
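Each failure mode maps to a concrete guardrail, which can be sketched as follows: a narrow "atomic" decision boundary, a grounding question asked before concluding fraud, and deterministic evidence retrieval for auditability. Every name and rule here is hypothetical.

```python
# Guardrail sketches for the three failure modes above.
# All dispositions, fields, and thresholds are hypothetical.

# 1. Atomic agent: a narrow decision boundary rejects out-of-scope output.
ALLOWED_DISPOSITIONS = {"close", "escalate", "request_info"}

def atomic_decide(disposition):
    if disposition not in ALLOWED_DISPOSITIONS:
        raise ValueError(f"out-of-scope disposition: {disposition}")
    return disposition

# 2. Grounding question: check context before defaulting to "fraud".
def grounded_conclusion(payment):
    """Are the parties related internal accounts? If so, high value
    alone is not layering."""
    if payment.get("related_internal_accounts"):
        return "close"
    return "escalate" if payment["amount"] > 100_000 else "close"

# 3. Deterministic retrieval: same keys, same evidence, every run.
def fetch_evidence(store, keys):
    return {k: store[k] for k in sorted(keys)}
```

The common thread is that each guardrail is enforced in code around the model, not requested of the model in a prompt.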
Predictive defense and digital workers
As we move through 2026, the fusion of fraud and AML operations is not just operational convergence; it is a deeper integration of the technology stack.
Agentic AI systems are moving from the pilot phase to the core of AML defense. We are seeing a shift from simple pattern recognition to predictive systems that anticipate criminal activity before a transaction is even flagged. I’ll be blunt: legacy rule-based systems cannot keep up with the speed of instant payments.
The path to impact is driven by speed of adoption and a tailored operating model. Leading institutions are starting with pilot perimeters to prove impact before preparing for a full-scale rollout. Agentic AI is the next major innovation lever for KYC/AML, offering stronger compliance and a more streamlined customer experience.
I treat the adoption of agentic AI as a necessity for survival in the modern financial landscape. The $4.4 trillion in illicit activity is a reminder that the cost of inaction is too high. We must move from a workforce of manual executors to one of AI supervisors, managing a digital factory of agents that detect and investigate at machine speed.