
The Enterprise AI Reality Check: Notes from the Front Lines

Enterprise leaders reveal the real blockers to AI adoption, from skill gaps to legacy systems, and what it takes to move beyond the first 20% of implementation.

Author

Boudhayan Ghosh, Technical Content Writer

Subject Matter Expert

Saurabh Sahu, Chief Technology Officer (CTO)

Date

Feb 12, 2026

Editor’s Note: This article is adapted from a panel discussion at thegeekconf 2025. As we look toward 2026, the conversation has shifted from AI experimentation to the gritty realities of implementation. Industry experts here dissect the architectural and human bottlenecks preventing organizations from moving past the initial 20% of AI adoption, exploring the vital balance between rapid velocity and deep technical foundations.

Sanket Sahu: The Case for Creation and "Vibe Coding"

I look at this through the lens of creation, or what I call the "Brahma" phase. I don’t share the pessimism that often fills these rooms. In the last two months, I’ve built products where 95% of the code was AI-generated, and they are already making money. I’ve seen firsthand that you can build a secure database engine with a full UI and API orchestration in just two hours. When you are in that creation mode, the speed is unlike anything we have ever seen in software development.

The most tangible shift for me is how we handle requirements. The era of the 30-page PRD is essentially over. No one has the patience to read those documents, and honestly, they don't describe the final product well. Instead of writing, we should be "vibe coding"—using AI to create a mock setup or a visual prototype that describes the functionality better than words ever could. It allows us to move from an idea to a visual, working model in a single session, which completely changes the feedback loop with stakeholders.

We shouldn't just be followers in this space. Even with legacy systems, there is a massive opportunity to use AI to migrate data and modernize stacks. The bottleneck isn't the technology; it’s our willingness to experiment. While the "Vishnu" phase of sustaining a giant company is undeniably harder, the tools are now powerful enough to handle high-level architectural patterns if you know how to prompt them. We have to stop looking at AI as a toy and start seeing it as a way to build at a scale that was impossible just a few years ago.

Deepkumar Varma: The Leadership and Logic of Scale

When I look at the bottlenecks in a massive ecosystem like PayPal, I see two distinct cohorts that are moving at very different speeds. For the employee cohort—the developers and designers—the tools are already there. But the real bottleneck is initiative. You cannot wait for a manager to tell you that a tool is enabled. If you have the tools, the ownership and accountability are on you to start seeing where you can realize value. It’s about being aware of the transformations in your specific domain and not just waiting for top-down instructions that might come too late.

Then there is the leadership cohort, where many of us sit. Our role today is to be grounded. You have to understand what a foundational model actually is and how to ensure hallucinations don’t break your business logic. Leaders often run everywhere without direction when a new technology drops. We have to be disciplined enough to look at ten goals, identify the twenty projects tied to them, and then pick the four or five where AI transformation actually makes sense. You channel your effort there so you don't waste energy across the board.

In financial services, we have a unique challenge: there is no room for error. You cannot be "mostly right" on a transaction. We use AI to slash cycle times in testing—generating unit, integration, and functional tests—because that is usually where the most time is lost. By automating the grunt work of test case generation and failure analysis, we can ship code much faster. It turns a massive, time-consuming phase of the lifecycle into something much more manageable.
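To make that point concrete, here is a minimal sketch of what an automated test-drafting step can look like. It assumes the OpenAI Python client and a hypothetical payments module as stand-ins; it illustrates the pattern, not PayPal's actual pipeline, and the output is a draft that a developer still has to review and own.

```python
# Illustrative sketch: ask a model to draft pytest tests for one module.
# Assumptions: the OpenAI Python client (openai>=1.0) and a hypothetical
# module path; swap in whichever provider and files your team actually uses.
import pathlib
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_unit_tests(source_path: str) -> str:
    """Return model-drafted pytest tests for the given source file."""
    source = pathlib.Path(source_path).read_text()
    prompt = (
        "Write pytest unit tests for the module below. "
        "Cover the public functions, edge cases, and failure paths. "
        "Return only Python code.\n\n" + source
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # The output is a starting point, not a finished test suite:
    # a developer still reviews, fixes, and owns whatever gets merged.
    draft = draft_unit_tests("payments/fee_calculator.py")  # hypothetical module
    pathlib.Path("tests/test_fee_calculator_draft.py").write_text(draft)
```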

However, predictability remains a major question mark. In a traditional project, you have a timeline, and you hit your milestones. With AI, you might get 90% of the way there instantly, but then you hit a wall during iterations. Sometimes, doing multiple iterations is actually detrimental to the quality of the output. We also deal with legacy "black boxes" where no one knows the original logic. You need a human in the loop to close those documentation gaps and ensure that the migration actually works in the real world.

Apoorva Sahu: The Human Bottleneck and Hidden Costs

We talk a lot about technology, but I believe the human side is the most significant bottleneck we face. At GeekyAnts, we work with a lot of people, and what I see is a massive skills gap. Out of a team of 300 developers, maybe only 30 to 50 can take full advantage of AI. These are the people who understand the architecture and the patterns. Most people can chat with a model, but very few have the patience and dedication to read through and audit what the AI has actually generated to ensure it’s production-ready.

To the younger developers in the room: life is not going to be easy. The basic tasks—the stuff you thought would be your "bread and butter" for the first few years of your career—can now be done by a prompt. It is going to be more competitive than it was for the generations before you. If you don't go through the basics and learn the deep foundations of how this technology works, you won't be able to solve the bottlenecks of the future. You have to be the one who understands how bits convert to pixels if you want to remain indispensable.

There is also a humongous hidden cost that people are missing: fine-tuning. Everyone talks about the initial implementation, but if you want to go beyond basic chatting and actually use your custom enterprise data, the cost of inferencing and fine-tuning is massive. It requires a huge amount of trial and error and a lot of time. You also have to make sure you are measuring the right success metrics. It’s not just about shipping code faster; it’s about whether that code maps back to customer empathy and actual business results.
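As a rough illustration of that cost point, here is a back-of-the-envelope calculation. Every number in it is a placeholder rather than a quote from any provider; the takeaway is simply that repeated fine-tuning runs plus ongoing inference add up quickly once you move beyond basic chatting, and the totals depend entirely on your own rates and traffic.

```python
# Back-of-the-envelope model of fine-tuning plus inference spend.
# All prices and volumes are illustrative placeholders; substitute your
# vendor's actual per-token rates and your own traffic estimates.

TRAIN_TOKENS = 50_000_000        # tokens in the fine-tuning dataset
EPOCHS = 3                       # passes over the data per run
TRIAL_RUNS = 4                   # trial-and-error runs before one sticks
MONTHLY_REQUESTS = 2_000_000     # production inference volume
TOKENS_PER_REQUEST = 1_500       # prompt + completion, averaged

FT_PRICE_PER_1K = 0.008          # $ per 1K training tokens (placeholder)
INFER_PRICE_PER_1K = 0.004       # $ per 1K inference tokens (placeholder)

one_run = TRAIN_TOKENS * EPOCHS / 1_000 * FT_PRICE_PER_1K
fine_tuning_total = one_run * TRIAL_RUNS
monthly_inference = MONTHLY_REQUESTS * TOKENS_PER_REQUEST / 1_000 * INFER_PRICE_PER_1K

print(f"Fine-tuning (incl. failed runs): ${fine_tuning_total:,.0f}")
print(f"Monthly inference:               ${monthly_inference:,.0f}")
print(f"Year one total:                  ${fine_tuning_total + 12 * monthly_inference:,.0f}")
```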

Avnish Malik: The "Hello World" of Enterprise Engineering

In my view, we are currently in the "Hello World" phase of AI. It’s easy to get a "wow" moment from a simple prompt, but moving from there to a product that actually meets business needs is where the real work begins. We have to move past the hype of just generating code snippets and start looking at how we build great technology. The technology itself still needs to evolve significantly before it can handle the high-stakes accountability that enterprise engineering requires.

As leaders, we have to find a new kind of patience. I cannot just hand an engineer a tool and then expect the project to be done in six weeks. That developer still owns the code. They are the ones who have to ensure it is correct, secure, and performant. We are in a period of sweat and perseverance where we have to support our engineers as they figure out how to bridge the gap between AI-generated drafts and production-grade software.
