Apr 28, 2026

Keynote: Build It Right or Rebuild It Twice | Suresh Konakanchi

Learn why AI-first architecture, observability, cost control, security, and evals matter more than model choice when building scalable AI products.

Author: Sathavalli Yamini, Content Writer


Editor's Note: This blog is adapted from a session by Konakanchi Venkata Suresh Babu at GeekConf Mini 2026. As a Senior Technical Consultant at GeekyAnts with a background spanning Flutter, Firebase, NodeJS, ReactJS, and Web3, Suresh brings a full-stack lens to AI architecture. His talk cuts through the noise around AI adoption, drawing a sharp line between teams that bolt AI onto existing systems and teams that build with AI at the core, and makes the case for why architecture, not model choice, will determine who wins the next five years.

Architecture Wins Over Models

The energy is high, so I'll keep this short and crisp. The topic is clear: are we building it right, or will we end up rebuilding it twice, or even three times?

Here's a question: In the next five years, who wins? The team with the greatest model, or the team with good architecture and a clean approach?

Good architecture. Right.

In the next five years, we strongly believe that whoever has a great architecture and the cleanest approach will win over whoever has a very good model. Models keep changing. You see the trend, GPT-4 comes up, then Gemini, then all the big players follow. Models change every three to six months, but the architecture stands tall.

AI First vs. AI Enablement

So what do we mean by "AI first"? That is the agenda for this entire meetup.

AI enablement is writing prompts and adding API calls from LLM providers — that's it. AI first means going deep into your existing architecture, embedding systems inside your entire product cycle, and dealing with context, memory, cost, token control, and the full core of AI.

That is the key difference. We want everyone to build AI-first rather than settle for AI enablement, so that we build the future together.

The Three Silent Forces

Three silent forces shape whether a production-grade AI product succeeds: the context window, cost and token control, and security and orchestration.

Context Window

The context window is your working memory. A lot of people treat it as a clipboard: they go to GPT or any AI tool, dump whatever error they're getting, and let it handle everything. The context window is not a clipboard; use it as working memory.

There are multiple strategies here: long-term memory, short-term memory, and session-level memory. We will not go into the solutions today — we're here to name the problems. The speakers lined up for the rest of the day will cover the solutions.
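To make the three tiers concrete, here is a minimal sketch in Python. All class and field names are illustrative, not from the talk: session memory holds the full conversation, short-term memory is the sliding window that actually enters the context window, and long-term memory holds durable facts pinned across sessions.

```python
from collections import deque

class AgentMemory:
    def __init__(self, short_term_limit=10):
        # Session-level memory: every turn of the current session.
        self.session = []
        # Short-term memory: a sliding window of recent turns that
        # actually goes into the context window.
        self.short_term = deque(maxlen=short_term_limit)
        # Long-term memory: durable facts persisted across sessions.
        self.long_term = {}

    def add_turn(self, role, text):
        turn = {"role": role, "text": text}
        self.session.append(turn)
        self.short_term.append(turn)

    def remember(self, key, fact):
        self.long_term[key] = fact

    def build_context(self):
        # Only the sliding window plus pinned long-term facts reach the
        # model, instead of dumping the whole session as a "clipboard".
        facts = [f"{k}: {v}" for k, v in self.long_term.items()]
        recent = [f"{t['role']}: {t['text']}" for t in self.short_term]
        return "\n".join(facts + recent)
```

The point of the sketch is the separation itself: what the product stores and what the model sees are two different things, and conflating them is the "clipboard" mistake.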

Cost and Token Control

Your AI bill is your P&L. If you don't control your tokens, the cost cuts into your profit. This is not a utility bill where you pay a fixed 500 rupees to your network provider. This is your P&L — use it wisely.

The data from the market backs this. Surveys across multiple companies in 2025 show a 72% spike in LLM spend compared to 2024. API spend spiked 2.4x. That is real money burning. No one today — not leaders, not CXOs — has clear control over how many tokens their team burns or what the bill at the end of the month will look like. That control is critical for production-grade, AI-first products.
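Treating the AI bill as a P&L line starts with per-request accounting. A minimal sketch, assuming illustrative per-token prices (real rates vary by provider and change often):

```python
# Hypothetical USD prices per 1,000 tokens; check your provider's
# current rate card before relying on numbers like these.
PRICE_PER_1K = {"input": 0.005, "output": 0.015}

class TokenLedger:
    def __init__(self, monthly_budget_usd):
        self.budget = monthly_budget_usd
        self.spend = 0.0

    def record(self, input_tokens, output_tokens):
        # Cost of one request, attributed at the moment it happens.
        cost = (input_tokens / 1000) * PRICE_PER_1K["input"] \
             + (output_tokens / 1000) * PRICE_PER_1K["output"]
        self.spend += cost
        return cost

    def remaining(self):
        return self.budget - self.spend

    def over_budget(self):
        return self.spend > self.budget
```

Even a ledger this small answers the question most teams cannot: how many tokens did we burn, and what will the bill look like at the end of the month?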

Security and Orchestration

Few people talk about security in the context of AI, but it is critical. Without a bird's-eye view, without observability, without telemetry, you are building a blind system.

From the research we did, three security issues stand out: prompt injection, data leakage, and missing guardrails. More teams are using guardrails now — that is good. But prompt injection and data leakage still get left behind. Keep security at the top of your priorities so your production applications stand strong for years.
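A guardrail does not have to be heavyweight to be better than nothing. The sketch below shows the shape of an input check for injection attempts and an output redaction pass for leakage; the patterns are illustrative and nowhere near a complete defense.

```python
import re

# Illustrative patterns only; real guardrails need far broader coverage.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal your system prompt",
]
PII_PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "[CARD_REDACTED]"),          # card-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL_REDACTED]"),
]

def check_input(prompt: str):
    """Reject prompts that look like injection attempts."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return False, "possible prompt injection"
    return True, "ok"

def redact_output(text: str) -> str:
    """Scrub data that should never leave the system."""
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

The design point is placement, not the regexes: one checkpoint before the model and one after it give you the observability hooks the section above asks for.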

Three More Problems in AI-First Products

Beyond the three forces, three more problems show up when teams build AI-first products.

Prompt Brittleness

Prompts are load-bearing walls. Teams write prompts but do not test or validate them. Today you build with GPT-4o. Tomorrow, GPT-4o updates silently — and your prompts behave in unexpected ways. Every model update can break the prompts running in your production systems.

In February 2025, there was a major update from a provider that rolled out silently to the APIs. No prior notification. Production prompts broke. You need clear observability and monitoring for prompts so you get notified before your systems break.
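One way to get that notification is a prompt regression suite: a set of golden cases run against a pinned model snapshot on a schedule. A minimal sketch, assuming `call_model` is your own provider wrapper and the cases are illustrative:

```python
# Golden cases: inputs whose answers must always contain a known token.
GOLDEN_CASES = [
    {"input": "2 + 2", "must_contain": "4"},
    {"input": "Capital of France?", "must_contain": "Paris"},
]

def run_prompt_regression(call_model, model="gpt-4o-2024-08-06"):
    # Pin an exact model snapshot so silent upstream updates are caught
    # by this suite instead of by your users.
    failures = []
    for case in GOLDEN_CASES:
        answer = call_model(model=model, prompt=case["input"])
        if case["must_contain"] not in answer:
            failures.append(case["input"])
    return failures
```

Run it nightly and on every deploy; a non-empty failure list is your early warning that a prompt, or the model underneath it, has shifted.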

Evals

Evals means evaluating your systems on a continuous basis. Run evaluations on your AI agents and workflows every day, capture the results, and keep a clear eye on system behavior. Autonomy without observability is chaos. You need that control and visibility to build confident, powerful systems.

Data from the market: memory recall errors spiked 13% because of overly complex systems. Teams built agents to monitor agents, then built a third layer to monitor those, creating complex chains with no real observability. That is the wrong direction.
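A daily eval does not need an agent monitoring an agent; it needs a scorer, a case set, and a number you track over time. A minimal sketch, with an exact-match scorer and an illustrative threshold:

```python
def exact_match(expected, actual):
    """Score 1.0 when the answer matches, ignoring case and whitespace."""
    return 1.0 if expected.strip().lower() == actual.strip().lower() else 0.0

def run_evals(agent, cases, scorer=exact_match, threshold=0.9):
    scores = [scorer(c["expected"], agent(c["input"])) for c in cases]
    pass_rate = sum(scores) / len(scores)
    # Persist or alert on this number every day, so drift is visible
    # before it reaches users.
    return {"pass_rate": pass_rate, "passed": pass_rate >= threshold}
```

One flat loop with a recorded pass rate gives more real observability than a tower of agents watching each other.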

Over-Agentic Systems

Stacking agents on top of agents without a clear observability strategy compounds every other problem. Build with intentional structure, not with layers of automation that no one can see into.

Speakers and Solutions

All of these problems have solutions. The speakers lined up for today cover spec-driven development, autonomous eval pipelines, and agents with guardrails. Stay tuned for those talks.

The Core Question

Push your AI teams to ship less and scale more. Ask yourself: am I completing a product, or am I building a scalable and secure product? That is an architectural decision.

Here is an open question for everyone today: what if your LLM provider doubles or triples their costs? Will your unit economics survive? You might be spending $20 today, but what if that jumps 2x or 3x overnight?
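That question is answerable with back-of-the-envelope arithmetic. A sketch with illustrative numbers, not anyone's real unit economics:

```python
def margin_after_shock(revenue_per_user, llm_cost_per_user,
                       other_cost_per_user, shock=1.0):
    """Per-user margin if LLM costs are multiplied by `shock`."""
    cost = llm_cost_per_user * shock + other_cost_per_user
    return revenue_per_user - cost

# $30 revenue, $20 LLM spend, $5 other costs per user (hypothetical).
base = margin_after_shock(30.0, 20.0, 5.0, shock=1.0)     # $5 margin today
doubled = margin_after_shock(30.0, 20.0, 5.0, shock=2.0)  # -$15: underwater
```

If a one-line multiplication flips your margin negative, that is an architecture problem (caching, routing to cheaper models, token budgets), not a procurement problem.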

We have built a solution to this problem.
