Apr 2, 2026

A Real-Time AI Fraud Decision Engine Under 50ms

A deep dive into how GeekyAnts built a real-time AI fraud detection system that evaluates transactions in milliseconds using a hybrid multi-agent approach.

Author: Syed Yaseen Abbas Rizvi, Software Engineer III

Inside a high-performance Real-Time AI Fraud Decision Engine, a system that reviews a financial transaction and returns a decision in under 50 milliseconds.

Every time you tap your phone to pay for something, a quiet competition is taking place. On one side are fraudsters looking to steal money. On the other side are detection systems trying to stop them in the time it takes to blink.

At GeekyAnts, a team of engineers set out to build a fraud detection engine that could make that split-second call. The result is an Autonomous Multi-Agent Pipeline, a system capable of analyzing a financial transaction and deciding whether to approve, challenge, or block it in under 50 milliseconds.

The Problem With Fraud Today

Digital payments have grown at a pace that has outrun traditional fraud prevention. The scale alone is staggering: large financial platforms process tens of thousands of transactions every minute. No team of human analysts can review that volume in real time.

Four problems sit at the heart of the challenge:

  • Volume. Thousands of transactions arrive every minute, far beyond human review capacity.
  • False alarms. Many older systems block payments from real customers. These false declines push people away from digital banking.
  • Speed of attack. Once a fraudster gains access to an account, funds can be moved within minutes.
  • No clear explanations. Legacy systems often return error codes with no reasoning behind them, making it hard to communicate decisions to customers or regulators.

Three Layers of Intelligence

Rather than rely on a single tool, the team built a system that combines three distinct layers of decision-making:

  • A machine learning model that scores the risk of each transaction based on behavioral patterns.
  • A rules engine that checks transactions against known fraud patterns.
  • AI reasoning agents that generate written explanations of why a transaction was flagged.

Together, these layers handle what none could do alone: catch fraud at speed, explain decisions in plain language, and remain functional even when one component is unavailable.
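The layered fallback behavior described above can be sketched in a few lines. Everything here is illustrative: the rule predicates, the stand-in scoring function, and the thresholds are assumptions, not the production logic.

```python
# Hypothetical sketch of the layered decision flow: deterministic rules
# run first, an ML score refines the call, and the system degrades
# safely when the model layer is unavailable.

KNOWN_BAD_RULES = [
    lambda t: t["amount"] > 10_000 and t["account_age_days"] < 1,
    lambda t: t["country"] != t["card_country"] and t["vpn"],
]

def ml_score(txn):
    """Stand-in for the behavioral model; returns a fraud probability."""
    score = 0.1
    if txn["vpn"]:
        score += 0.3
    if txn["amount"] > 5_000:
        score += 0.2
    return min(score, 1.0)

def decide(txn, ml_available=True):
    if any(rule(txn) for rule in KNOWN_BAD_RULES):
        return "decline"        # rules catch known patterns with certainty
    if not ml_available:
        return "challenge"      # degrade to step-up verification, not a guess
    score = ml_score(txn)
    if score >= 0.8:
        return "decline"
    if score >= 0.4:
        return "challenge"
    return "approve"
```

Note how the rules layer acts as a floor: even with the model offline, the system never silently approves, it falls back to challenging the transaction.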

How a Transaction Gets Reviewed

When a transaction arrives, it passes through a sequence of specialized processes, each one focused on a specific task.

Step 1: Signal Collection 

The system gathers and organizes the raw data attached to the transaction: device information, location, transaction amount, and account history. These are converted into a standard format the system can work with.
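A minimal sketch of this normalization step, assuming a gateway-style JSON payload. The field names and the canonical record shape are invented for illustration; the real schema is not described in the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransactionSignals:
    """Canonical record every downstream stage consumes."""
    txn_id: str
    amount_minor_units: int   # money as integer minor units, never floats
    currency: str
    device_id: str
    country: str
    account_age_days: int

def normalize(raw: dict) -> TransactionSignals:
    """Coerce a heterogeneous raw payload into the canonical shape."""
    return TransactionSignals(
        txn_id=str(raw["id"]),
        amount_minor_units=int(round(float(raw["amount"]) * 100)),
        currency=raw.get("currency", "USD").upper(),
        device_id=raw.get("device", {}).get("id", "unknown"),
        country=raw.get("geo", {}).get("country", "unknown"),
        account_age_days=int(raw.get("account_age_days", 0)),
    )
```

Freezing the dataclass keeps later stages from mutating the input mid-pipeline, which matters when several processes read the same record.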

Step 2: Fraud Category Identification 

Not all fraud looks the same. The system checks which of nine fraud categories the transaction might belong to, such as account takeover, card misuse, or wire transfer fraud. Identifying the category helps apply the right detection logic.
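Category identification can be a cheap first pass that tags the transaction with candidate categories so the right detection logic runs next. The category names below follow the article's list; the trigger conditions are purely illustrative.

```python
# Hypothetical sketch of Step 2: map cheap signals to candidate
# fraud categories before the heavier scoring runs.

def candidate_categories(txn: dict) -> list[str]:
    cats = []
    if txn.get("new_device") and txn.get("password_changed_recently"):
        cats.append("account_takeover")
    if txn.get("channel") == "card_not_present":
        cats.append("card_not_present")
    if txn.get("channel") == "wire" and txn.get("payee_is_new"):
        cats.append("wire_transfer_bec")
    return cats or ["transaction_fraud"]   # default bucket if nothing matches
```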

Step 3: Risk Scoring 

A machine learning model evaluates fifteen risk signals to produce a fraud probability score. These signals include device risk, transaction speed, geographic location, and whether a VPN or proxy is in use, among others.
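As a stand-in for that model, here is a logistic combination of a few of the fifteen signals. The weights, bias, and feature names are invented; the point is only the shape of the computation, signals in, probability out.

```python
import math

# Illustrative weights over a subset of the risk signals.
WEIGHTS = {
    "device_risk": 1.8,
    "velocity": 1.2,          # transaction rate vs. the user's baseline
    "geo_distance_km": 0.002,
    "vpn_or_proxy": 1.5,
}
BIAS = -3.0

def fraud_probability(features: dict) -> float:
    """Logistic score: missing features default to zero risk."""
    z = BIAS + sum(WEIGHTS[k] * float(features.get(k, 0.0)) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

A pure-function scorer like this is easy to keep inside a millisecond budget, since it does no I/O at decision time.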

Step 4: The Decision 

Using the risk score and pattern matching against 27 known fraud scenarios, the system decides one of three outcomes: approve the transaction, challenge it (for example, by requesting additional verification), or decline it.
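The scenario check can be modeled as a registry of named predicates evaluated alongside the score. The two scenarios and the thresholds below are assumptions made for the sketch, not the engine's actual 27 patterns.

```python
# Hypothetical registry of named fraud scenarios: each entry is a
# predicate over the transaction. Any match declines outright;
# otherwise the ML score picks among the three outcomes.

SCENARIOS = {
    "rapid_drain_new_payee": lambda t: t["payee_is_new"] and t["amount_ratio"] > 0.9,
    "impossible_travel":     lambda t: t["km_since_last_txn"] / max(t["hours_since_last_txn"], 0.01) > 900,
}

def outcome(txn: dict, score: float) -> str:
    if any(match(txn) for match in SCENARIOS.values()):
        return "decline"
    if score >= 0.85:
        return "decline"
    if score >= 0.5:
        return "challenge"    # e.g. request additional verification
    return "approve"
```

Keeping scenarios named (rather than anonymous conditions) also feeds the explanation step: the decision can report exactly which pattern fired.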

Step 5: The Explanation 

In the background, an AI reasoning process generates a written summary of why the decision was made. This explanation is stored for anyone who needs to understand the reasoning later: compliance teams, auditors, and customer support.
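The key property is that the explanation is assembled after the decision has already returned. A simple template stands in here for the AI reasoning agent; the function name and inputs are illustrative.

```python
# Sketch of Step 5: build a human-readable audit record from the
# decision, the score, and any scenarios that matched.

def explain(decision: str, score: float, matched_scenarios: list[str]) -> str:
    parts = [f"Decision: {decision} (risk score {score:.2f})."]
    if matched_scenarios:
        parts.append("Matched known patterns: " + ", ".join(matched_scenarios) + ".")
    else:
        parts.append("No known fraud pattern matched; the score drove the outcome.")
    return " ".join(parts)
```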

Two Paths, One Decision

The architecture separates speed from depth. The fast path handles the core decision in 5 to 15 milliseconds using the machine learning model and the rules engine. This is what keeps the payment experience smooth for the end user.

The enrichment path runs in the background and completes within 200 milliseconds. It produces a fuller picture: threat severity, attack patterns, and recommended actions, all written in plain language rather than code.

Splitting the two paths means the payment does not have to wait for deep analysis. Both can happen without slowing each other down.
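One common way to realize this split is to return the fast-path decision synchronously and hand enrichment to a background executor so the caller never waits for it. The functions and timings below are invented for the sketch.

```python
import concurrent.futures
import time

_executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def fast_decision(txn: dict) -> str:
    """Stand-in for the 5-15 ms ML + rules path."""
    return "approve" if txn.get("score", 0) < 0.5 else "challenge"

def enrich(txn: dict) -> dict:
    """Stand-in for the ~200 ms enrichment path."""
    time.sleep(0.05)   # simulated deep analysis
    return {"severity": "low", "narrative": "routine purchase"}

def review(txn: dict):
    decision = fast_decision(txn)                # blocks only for the fast path
    enrichment = _executor.submit(enrich, txn)   # payment does not wait for this
    return decision, enrichment                  # enrichment is a Future
```

The caller gets the decision immediately and can collect the enrichment whenever it completes, which is exactly the decoupling the two-path design is after.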

Why A Single Method Is Not Enough

Machine learning is good at identifying unusual behavior—transactions that deviate from a user's normal patterns in ways that are hard to put into words. Rules, on the other hand, are good at catching specific, well-documented attack patterns with high confidence.

This decision engine uses both. The rules catch what is known. The machine learning model catches what is unusual. The AI reasoning layer explains what was found. Each method covers the gaps of the others.

The Nine Fraud Types the System Covers

The system is built to recognize a wide range of fraud types common in digital finance:

  • Account Takeover. When a fraudster gains access to someone else's account, often through stolen credentials.
  • Transaction Fraud. Unauthorized payments made from a legitimate account.
  • Card-Not-Present Fraud. Fraud carried out when the physical card is not required, common in online purchases.
  • Mobile Banking Fraud. Attacks that target users through mobile apps or devices.
  • Onboarding and Identity Fraud. False identities used to open new accounts or pass verification checks.
  • Digital Wallet Fraud. Unauthorized use of payment apps and wallet services.
  • Loan and Credit Fraud. Applications for credit or loans using false information.
  • Wire Transfer and Business Email Fraud. Attackers impersonating executives or vendors to redirect payments.
  • Internal Employee Fraud. Misuse of system access by people within an organization.

Across these nine categories, the system models 27 distinct fraud scenarios: specific attack patterns that the decision engine checks for during each review.

The performance targets were set with real payment flows in mind. A decision that takes several seconds is too slow; customers expect near-instant responses.

  • Core decision time: 5 to 15 milliseconds
  • Full analysis with explanation: under 200 milliseconds
  • Fraud categories covered: 9
  • Fraud scenarios modeled: 27

What Building This System Taught Us

The project produced five practical conclusions about fraud detection systems:

  • Speed is not optional. In payment flows, a slow decision is as disruptive as a wrong one.
  • Explainability matters as much as accuracy. A system that cannot explain its decisions is a liability for compliance and customer communication.
  • Hybrid systems outperform single-method systems. Rules and machine learning cover different failure modes.
  • Observability is essential. Being able to trace each decision through the pipeline makes debugging complex systems possible.
  • Rules remain necessary for critical decisions. AI reasoning is a valuable layer, but deterministic logic still provides the reliability that high-stakes decisions require.

The Bigger Picture

This multi-agent architecture reflects the direction that real-world fraud prevention is taking. Financial institutions increasingly depend on layered systems that combine structured rules, statistical models, and AI-generated reasoning to keep pace with attackers.

As digital payments accelerate, systems built for the intersection of speed, accuracy, and transparency are no longer just a technical aspiration—they are a necessity. This project is a prime example of what that looks like in practice.
