Feb 5, 2026
From Labor to Intelligent Execution
From labor-heavy scaling to intelligent execution—learn how AI-driven teams, the trapeze model, and governance are reshaping product delivery and impact.
Editor’s Note: This blog post is adapted from a keynote delivered at thegeekconf 2025 by Shweta Shandilya, Executive Director of Data and AI at IBM. With over twenty-four years of experience in data evolution, Shweta explores the industry's pivot from the labor-heavy scaling models of the past two decades toward a new era of intelligent execution. Her session unpacks how the shift from traditional "pyramid" teams to an AI-augmented "trapeze" model is redefining the distance between a product vision and its final impact.
Last week I was in a workshop with a large group of people, and someone made a very sharp observation about our current relationship with AI. They noted that while AI makes our work faster and simpler, we are spending a massive amount of time just discussing what to do with it.
We are becoming very optimized in how we execute tasks, but the math often feels incomplete when you factor in the hours spent in deliberation. I think of this as a price-performance problem. We are seeing a significant amount of waste in these long discussion cycles, and we need to find a way to bridge that gap.
The Trapeze Model
For the last 20 years, the industry has grown by prioritizing scale. If a project was falling behind, the standard response was to add ten more people to the work. We believed that more hands would naturally reduce a 20-day task to a 5-day one. This pyramid model, with its broad base of hands, helped build industry giants, but that era is concluding. The trapeze replaces it: a narrower team whose leverage comes from directing intelligent systems rather than from headcount. Organizations must now evolve to understand what this new intelligence brings to the table.
Redefining Development Speed
The effort of the human worker is moving up the value chain. While the muscle of these processes is being replaced by automation, the human intelligence required to direct them remains the essential differentiator. I have seen this transition happen in real time within our own teams at IBM.
We recently had a project where developers spent five months writing a specific piece of code. As an experiment, we asked Claude to write a better version of that same logic. The AI produced a version in just 15 days. It was significantly longer (roughly 25,000 lines compared to our original 10,000), but it was well-documented and ready for deployment.
We took another few weeks to iterate on that version, and the final product we released was actually the third iteration of that logic. This represents a massive shift in how we quantify effort. The work of writing code is becoming so much simpler that the vision of the product becomes the primary focus.
Trust and the Data Foundation
In an enterprise environment, the reliability of AI is the most critical factor. At IBM, we prioritize governance and maintain a "human-in-the-loop" approach. This is highly use-case-driven: reliability in the financial or banking sector requires a different level of rigor than generating content for a marketing campaign. We have to get the guardrails right for each specific industry.
Without these guardrails, enterprise-level AI cannot reach its full potential. The human element ensures that the thinking part of the process is never fully automated.
Leading the Change
We are entering an era where the old metric of headcount is being replaced by the metric of intelligence orchestration. Our success will be defined by how effectively we stop solving problems through the sheer volume of people and start solving them through refined, intelligent systems. At IBM, we see this as a significant opportunity to move away from the friction of labor and into the flow of pure execution.