From Labor to Intelligent Execution
Editor’s Note: This blog post is adapted from a keynote delivered at thegeekconf 2025 by Shweta Shandilya, Executive Director of Data and AI at IBM. With over twenty-four years of experience in data evolution, Shweta explores the industry's pivot from the labor-heavy scaling models of the past two decades toward a new era of intelligent execution. Her session unpacks how the shift from traditional "pyramid" teams to an AI-augmented "trapeze" model is redefining the distance between a product vision and its final impact.
Last week I was in a workshop with a large group of people, and someone made a very sharp observation about our current relationship with AI. They noted that while AI makes our work faster and simpler, we are spending a massive amount of time just discussing what to do with it.
Our execution of individual tasks is becoming highly optimized, but the math often feels incomplete when you factor in the hours spent in deliberation. I think of this as a price-performance problem: we are seeing a significant amount of waste in these long discussion cycles, and we need to find a way to bridge that gap.
The Trapeze Model
For the last 20 years, the industry has grown by prioritizing scale. If a project was falling behind, the standard response was to add ten more people to the work, on the belief that more hands would naturally reduce a 20-day task to 5 days. That pyramid of labor helped build industry giants, but its era is concluding. Organizations must now evolve to understand what this new intelligence brings to the table.
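To see the arithmetic that era ran on, here is a minimal sketch of the assumption behind "add ten more people": effort treated as a fixed pool of person-days that divides cleanly across headcount. The function name and the figures are illustrative, not drawn from any real project.

```python
# The scaling-era assumption: effort is a fixed pool of person-days,
# so duration = total effort / headcount, and more hands always help.
def naive_schedule(person_days: float, team_size: int) -> float:
    """Naive linear model of delivery time; illustrative only."""
    return person_days / team_size

# A 5-person team facing 100 person-days of work: 20 days.
print(naive_schedule(100, 5))   # 20.0
# Quadruple the headcount and the model promises 5 days.
print(naive_schedule(100, 20))  # 5.0
```

The appeal of that model was its simplicity. What changes now is that intelligent execution shrinks the person-days themselves instead of just growing the divisor.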
Redefining Development Speed
The effort of the human worker is moving up the value chain. While the muscle of these processes is being replaced by automation, the human intelligence required to direct them remains the essential differentiator. I have seen this transition happen in real time within our own teams at IBM.
We recently had a project where developers spent five months writing a specific piece of code. As an experiment, we asked Claude to write a better version of that same logic. The AI produced a version in just 15 days. It was significantly longer, roughly 25,000 lines compared to our original 10,000, but it was well-documented and ready for deployment.
We took another few weeks to iterate on that version, and the final product we released was actually the third iteration of that logic. This represents a massive shift in how we quantify effort. The work of writing code is becoming so much simpler that the vision of the product becomes the primary focus.
Trust and the Data Foundation
In an enterprise environment, the reliability of AI is the most critical factor. At IBM, we prioritize governance and maintain a 'human-in-the-loop' approach. This is highly use-case-driven: reliability in the financial and banking sector requires a different level of rigor than content for a marketing campaign, so we have to get the guardrails right for each specific industry.
We build specific guardrails for every use case because the definition of reliability changes with the stakes. Without these protections, enterprise-level AI cannot reach its full potential, and the human element ensures that the thinking part of the process is never fully automated.
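To make the human-in-the-loop idea concrete, here is a minimal sketch of a use-case-driven guardrail. This is an illustration under assumptions, not IBM's implementation: the names `generate_draft`, `Draft`, and the risk table are hypothetical.

```python
# Minimal human-in-the-loop guardrail sketch: AI output is only a draft,
# and high-stakes use cases require explicit human sign-off to ship.
from dataclasses import dataclass

@dataclass
class Draft:
    use_case: str    # e.g. "banking" or "marketing"
    content: str
    risk_level: str  # "high" for regulated sectors, "low" otherwise

# Hypothetical mapping: reliability requirements differ with industry stakes.
RISK_BY_USE_CASE = {
    "banking": "high",    # financial logic: strict rigor
    "marketing": "low",   # campaign copy: lighter review
}

def generate_draft(use_case: str, prompt: str) -> Draft:
    """Stand-in for an AI generation call; returns a draft, never a release."""
    content = f"[model output for: {prompt}]"
    return Draft(use_case, content, RISK_BY_USE_CASE.get(use_case, "high"))

def release(draft: Draft, human_approved: bool) -> bool:
    """Guardrail: high-risk drafts ship only with explicit human approval."""
    if draft.risk_level == "high" and not human_approved:
        return False  # the thinking part of the process stays human
    return True

draft = generate_draft("banking", "summarize the loan eligibility rules")
assert release(draft, human_approved=False) is False  # blocked without review
assert release(draft, human_approved=True) is True    # ships after sign-off
```

The point of the sketch is the default: unknown use cases fall back to high risk, so the system fails toward human review rather than away from it.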
Leading the Change
We are entering an era where the old metric of headcount is being replaced by the metric of intelligence orchestration. Our success will be defined by how effectively we stop solving problems through the sheer volume of people and start solving them through refined, intelligent systems. At IBM, we see this as a significant opportunity to move away from the friction of labor and into the flow of pure execution.