Apr 7, 2026

How We Built an AI Agent That Fixes CI/CD Pipeline Failures Automatically

A deep dive into how we built an autonomous AI agent that detects and fixes CI/CD pipeline failures without human intervention.

Author: Deepanshu Goyal, Senior Software Engineer - III


Engineering teams spend between 15% and 25% of their development time responding to CI/CD pipeline failures. Those are hours that do not go toward product work, architecture, or anything a team ships. The cost compounds once context switching is factored in: Microsoft's Developer Productivity research found that each interruption to debug a build failure costs an average of 23 minutes of recovery time. Multiply that across a team and a sprint, and the number becomes an operational liability.

The pattern that makes this problem solvable is its predictability. Seventy-three percent of pipeline failures fall into automatable categories: type errors, broken imports, dependency conflicts, and test regressions. Google's SRE handbook advocates automating any repetitive operational task that scales linearly with growth. To solve this, we built a Stateful Agentic Remediation System—an autonomous agent designed to watch your pipelines and act the moment something breaks.

What the AI Agent Does

When a CI/CD pipeline fails, the system detects the failure, diagnoses the root cause using AI, generates a targeted code fix, and opens a pull request, all without requiring a developer to act. The fix is then validated against the same CI pipeline, running on GitHub runners, that surfaced the original failure.

If the fix does not pass after three attempts, the system escalates to the engineering team via Slack with full context: the original error, every attempted fix, and the agent's reasoning at each step. It is not a chatbot; it is an always-on agent that watches your pipelines and acts the moment something breaks.

Architecture Overview

The system runs as a distributed, event-driven architecture with three separated layers: Detection, Reasoning, and Orchestration. The entire codebase lives in an Nx monorepo containing:

  • A NestJS backend API that handles webhook intake and orchestration.
  • A BullMQ worker process that processes jobs asynchronously.
  • A Next.js frontend dashboard that provides visibility into every repair cycle.

The Tech Stack

The backend runs on NestJS with TypeScript at maximum strictness. Data persistence uses Drizzle ORM against PostgreSQL, extended with pgvector for embedding-based semantic search. Redis powers both the caching layer and the job queue. The AI layer routes through OpenRouter to Claude 3.5 Sonnet, using LangChain.js for structured prompting and LangGraph for stateful agent execution.

How it Works: End-to-End

1. Detection

When a pipeline fails, GitHub sends a webhook event to the NestJS controller, which acknowledges it immediately and enqueues a job. All processing happens asynchronously via BullMQ.
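A minimal sketch of the intake logic. The payload fields follow GitHub's `workflow_run` webhook event; the `RepairJob` shape and `toRepairJob` function are hypothetical names for illustration, not the system's actual API.

```typescript
// Hypothetical descriptor handed to the BullMQ queue.
interface RepairJob {
  repo: string;
  headSha: string;
  runId: number;
}

// The subset of GitHub's "workflow_run" event payload we read.
interface WorkflowRunEvent {
  action: string;
  workflow_run: { id: number; conclusion: string | null; head_sha: string };
  repository: { full_name: string };
}

// Returns a job to enqueue when the run completed with a failure,
// or null for any event the worker should ignore.
function toRepairJob(event: WorkflowRunEvent): RepairJob | null {
  if (event.action !== "completed") return null;
  if (event.workflow_run.conclusion !== "failure") return null;
  return {
    repo: event.repository.full_name,
    headSha: event.workflow_run.head_sha,
    runId: event.workflow_run.id,
  };
}
```

Keeping the controller this thin is what lets it acknowledge the webhook within GitHub's timeout; everything expensive happens in the worker.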

2. Log Parsing

The agent strips noise (ANSI codes/timestamps) and isolates the specific TypeScript or build errors. It enriches these with source code snippets fetched directly from the GitHub commit. 
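The noise-stripping step can be sketched as below, assuming GitHub Actions' per-line ISO-8601 timestamps and standard ANSI color codes; the real parser handles more log formats than this.

```typescript
// ANSI color/style escape sequences embedded in CI logs.
const ANSI_RE = /\u001b\[[0-9;]*m/g;
// Leading ISO-8601 timestamp that GitHub Actions prepends to each line.
const TIMESTAMP_RE = /^\d{4}-\d{2}-\d{2}T[\d:.]+Z\s*/;
// TypeScript compiler diagnostics, e.g. "src/app.ts(12,5): error TS2322: ..."
const TS_ERROR_RE = /error TS\d+:/;

// Strips timestamps and ANSI codes, then keeps only compiler error lines.
function extractErrors(rawLog: string): string[] {
  return rawLog
    .split("\n")
    .map((line) => line.replace(ANSI_RE, "").replace(TIMESTAMP_RE, ""))
    .filter((line) => TS_ERROR_RE.test(line));
}
```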

3. Semantic Search

Every past fix is stored in PostgreSQL with vector embeddings. The system performs a similarity search to see if a similar problem was solved before, improving accuracy and reducing token usage. 
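In production the similarity search runs inside PostgreSQL via pgvector's distance operator. The sketch below shows the same cosine-similarity ranking in plain TypeScript; the `PastFix` shape and the 0.85 threshold are illustrative assumptions.

```typescript
interface PastFix {
  errorSignature: string;
  patchSummary: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Returns the most similar past fix above the threshold, or null
// if nothing in the history is close enough to reuse.
function findSimilarFix(
  queryEmbedding: number[],
  history: PastFix[],
  threshold = 0.85,
): PastFix | null {
  let best: PastFix | null = null;
  let bestScore = threshold;
  for (const fix of history) {
    const score = cosineSimilarity(queryEmbedding, fix.embedding);
    if (score >= bestScore) {
      best = fix;
      bestScore = score;
    }
  }
  return best;
}
```

With pgvector, the same ranking collapses to a single query (`ORDER BY embedding <=> $1 LIMIT 1`), so the database does the heavy lifting rather than application code.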

4. AI Diagnosis

An error classifier categorizes the failure (e.g., syntax, dependency). The agent generates a structured JSON fix with a confidence score. 
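A rule-based first pass along these lines can narrow the category before the LLM is asked for the full structured fix; the category names, the `ProposedFix` shape, and the specific patterns below are illustrative assumptions, not the production classifier.

```typescript
type FailureCategory = "syntax" | "dependency" | "import" | "test" | "unknown";

// Hypothetical shape of the structured JSON fix the model is asked to emit.
interface ProposedFix {
  category: FailureCategory;
  file: string;
  patch: string;
  confidence: number; // 0..1
}

// Cheap pattern matching narrows the prompt before any tokens are spent.
function classifyError(errorLine: string): FailureCategory {
  if (/error TS2307|Cannot find module/.test(errorLine)) return "import";
  if (/ERESOLVE|peer dep/i.test(errorLine)) return "dependency";
  if (/error TS1\d{3}/.test(errorLine)) return "syntax";
  if (/\d+ failing|tests? failed/i.test(errorLine)) return "test";
  return "unknown";
}
```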

5. Fix & Validate

The agent commits changes and opens a PR. If the pipeline passes, it’s ready for review. If it fails, the agent captures the new logs and retries with an adjusted strategy (capped at three attempts).
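The retry policy can be sketched as below. `attemptFix` stands in for the full generate-commit-wait-for-pipeline cycle; the real implementation awaits CI results asynchronously, while this sketch keeps it synchronous for clarity.

```typescript
interface AttemptResult {
  passed: boolean;
  logs: string;
}

interface RepairOutcome {
  status: "fixed" | "escalated";
  attempts: AttemptResult[];
}

// attemptFix receives the logs of the previous failed attempt so the
// agent can adjust its strategy; the loop is capped at maxAttempts.
function runRepairLoop(
  attemptFix: (previousLogs: string) => AttemptResult,
  maxAttempts = 3,
): RepairOutcome {
  const attempts: AttemptResult[] = [];
  let previousLogs = "";
  for (let i = 0; i < maxAttempts; i++) {
    const result = attemptFix(previousLogs);
    attempts.push(result);
    if (result.passed) return { status: "fixed", attempts };
    previousLogs = result.logs;
  }
  // All attempts exhausted: escalate to Slack with the full history.
  return { status: "escalated", attempts };
}
```

Returning every attempt alongside the status is what makes the Slack escalation useful: the engineer sees the original error, each attempted fix, and why it failed.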

Safety and Security

The system operates on the principle of least privilege:

  • Write access is restricted to temporary branches; no direct access to main.
  • It never auto-merges; a human reviewer must approve every PR.
  • Loop prevention ensures the agent never attempts to fix its own generated branches.
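Loop prevention can be as simple as a branch-naming convention; the `ai-fix/` prefix below is a hypothetical choice standing in for whatever convention the system actually uses.

```typescript
// Branches the agent creates carry a reserved prefix (hypothetical: "ai-fix/").
const AGENT_BRANCH_PREFIX = "ai-fix/";

// The worker drops failure events on the agent's own branches, so a bad
// generated fix can never trigger an endless repair loop.
function shouldProcessFailure(branch: string): boolean {
  return !branch.startsWith(AGENT_BRANCH_PREFIX);
}
```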

The Dashboard

The Next.js frontend provides a single visibility layer for the entire system. On landing, it displays all connected repositories. Drilling into a repository reveals its branches; drilling into a branch shows individual commits with their pipeline statuses: passed, failed, in progress, or under repair. For each pipeline run, the dashboard shows the exact changes the agent made. Engineering teams gain full transparency without switching between tools or parsing logs.

Results

Metric | Without an AI Agent | With an AI Agent

What Comes Next

The roadmap addresses several key areas: converting the system into a platform any team can adopt with one click, real-time pipeline status surfacing, cross-repository learning, and multi-language support (Python, Go, Java, Rust).

The goal of this project was to give engineers back the hours they spend on routine build failures so they can concentrate on what matters: building software that ships.
