AI in Product Development
One study says AI makes developers 55% faster. Another says it slows them down by 19%. Both are right — and that contradiction holds the key to using AI effectively as a product manager.
ProductCafe #37: AI in Product Development
Date: 02.10.2025
Location: StoneX, Cracow
Details: ProductCafe #37
AI in Product: A Turbocharger for the Engine, Not an Autopilot
As product managers, we are constantly bombarded with narratives about an AI revolution and mythical "10x productivity gains." The reality is far more pragmatic.
My thesis is simple: Generative AI is a powerful accelerator for boring, repetitive tasks across the product lifecycle. The real value appears only when we stop treating AI as magic and start treating it as a tool. Not an autopilot. A turbocharger.
The 10x Myth Meets Reality
Let's start with the data.
In GitHub's well-known Copilot experiment, developers using AI completed a coding task 55% faster than those without it. Impressive — but that was a controlled environment with an isolated task.
In 2025, METR published research conducted on experienced open-source developers working on mature, complex codebases. The result? Using state-of-the-art AI tools slowed developers down by an average of 19%. Even more telling: the developers themselves estimated they were 20% faster with AI. They weren't.
Context is everything. AI can help a junior engineer draft a script. But for a senior engineer navigating a complex system, the cost of verification, refactoring, and integration of AI-generated code can outweigh the speed benefit. AI is not a universal productivity switch. Expecting it to be one leads straight to frustration.
Where the Gold Is: Audit the Boring Work
The biggest trap is asking "How can AI help across our entire SDLC?" A better question is "Which repetitive, time-consuming steps can we accelerate at each stage?"
Stop thinking about fully autonomous processes. Start with small friction points.
Discovery: Synthesizing dozens of user interview notes, initial desk research, drafting early user stories, clustering qualitative feedback.
Delivery: Boilerplate code, writing basic tests, refactoring repetitive structures, generating API specs, creating documentation drafts.
Post-launch: Release notes, post-mortem first drafts, updating documentation, support ticket summarization.
Here's a practical exercise. List 10 tasks from the last two weeks that were repetitive and took more than 30 minutes. Pick two. In the next sprint, deliberately test AI on them. That's your experimentation sandbox.
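The audit above works fine as a checklist, but purely for illustration, here is a minimal Python sketch of the same filter. Every task name and duration below is a made-up placeholder, not data from any real team.

```python
# Hypothetical task audit: list recent tasks, keep the repetitive ones
# that took more than 30 minutes, and pick two candidates to test AI on.
tasks = [
    {"name": "Summarize support tickets", "minutes": 45, "repetitive": True},
    {"name": "Sprint planning", "minutes": 60, "repetitive": False},
    {"name": "Draft release notes", "minutes": 40, "repetitive": True},
    {"name": "Cluster interview feedback", "minutes": 90, "repetitive": True},
    {"name": "One-on-one with designer", "minutes": 30, "repetitive": False},
]

candidates = [t for t in tasks if t["repetitive"] and t["minutes"] > 30]

# Start with the most time-consuming candidates: biggest potential payoff.
sandbox = sorted(candidates, key=lambda t: t["minutes"], reverse=True)[:2]
for t in sandbox:
    print(f"{t['name']} ({t['minutes']} min)")
```

Sorting by time spent is one reasonable heuristic; you could just as well rank by how annoying or error-prone the task is.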
Before Giving the Team a New Toy: Create an AI Pact
Unstructured experimentation is exciting — until it creates chaos. Before your team fully adopts AI tools, establish simple guardrails. Think of it as a Team AI Usage Pact. One page in Confluence is enough.
Answer five questions:
What's allowed? AI for drafts, but not final production code without review.
How do we label AI-assisted work? For example, an [AI-assisted] tag in commits or documents.
What does "done" mean? AI-generated code must have full test coverage and senior review.
Who is accountable? Always a human. AI is an assistant, not an autonomous employee.
What about data? No customer data or NDA-protected material sent to external models.
This document becomes your shield against confusion, security risks, and responsibility gaps.
Measure What Matters: Outcomes Over Outputs
The biggest risk for PMs in the AI era is that we build the wrong things faster than ever. AI is an output machine — it generates artifacts like code, designs, and documents. But our job is to deliver outcomes: measurable changes in user behavior and business metrics.
Every AI experiment should have a metric. Instead of saying "Let's use AI to speed up research," frame it as a hypothesis:
Hypothesis: If AI automatically clusters interview transcripts and proposes initial insights, researchers will save approximately 20% of drafting time and move faster to high-quality synthesis.
Experiment: In a real UX project, use AI summaries for half the interviews and manual analysis for the other half. Compare time to first draft and revision rates.
Metrics: Time to first report draft. Number of validated insights. Number of revisions after team feedback.
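The comparison itself is simple arithmetic. Here is a minimal sketch of how the two arms of such an experiment could be tallied; all figures are made-up placeholders, not results from a real project, and the hypothetical numbers happen to land near the 20% target.

```python
from statistics import mean

# Hypothetical split UX project: half the interviews analyzed with
# AI-assisted clustering, half manually. Values are hours to first draft
# and number of revisions after team feedback, per interview batch.
ai_assisted = {"hours_to_draft": [3.0, 2.5, 3.5, 2.8], "revisions": [2, 3, 2, 2]}
manual = {"hours_to_draft": [4.0, 3.8, 4.5, 4.2], "revisions": [1, 2, 1, 2]}

def summarize(arm):
    """Average time to first draft and average revision count for one arm."""
    return mean(arm["hours_to_draft"]), mean(arm["revisions"])

ai_time, ai_rev = summarize(ai_assisted)
manual_time, manual_rev = summarize(manual)

# Relative time saving; the hypothesis predicted roughly 20%.
saving = (manual_time - ai_time) / manual_time
print(f"AI arm: {ai_time:.2f}h to draft, {ai_rev:.2f} revisions on average")
print(f"Manual arm: {manual_time:.2f}h to draft, {manual_rev:.2f} revisions on average")
print(f"Time saving: {saving:.0%}")
```

Note that the revision count matters as much as the speed: if the AI arm drafts faster but needs noticeably more rework, the outcome may be a wash.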
Focusing on outcomes protects you from becoming a fast feature factory that delivers volume without value.
The Role of the PM in the AI Era
The product manager role is not disappearing. If anything, it's becoming more critical. AI tools won't decide which problem is worth solving, align stakeholders, navigate trade-offs, or take responsibility for failure.
GenAI is a powerful accelerator. If you're heading in the right direction, it will help you get there faster. If you're on the wrong path, you'll arrive at the wrong destination sooner.
Our job is to hold the compass — and to press the gas pedal at the right moment.