
I’m accepting new sponsorships for The Modern Leader for 2026. If you’re interested (or you think your company may be interested), you can learn more here.
Let's start with the obvious: AI, when used intentionally, makes engineers more efficient. More code can be written in less time, and as models continue to improve, fewer mistakes are being made. We've moved beyond "I use AI for scaffolding and then write the rest myself" to "I'm going to send a Jira link to my agent to research and build a plan, then run with the plan."
Engineering today is not the same as it was even a few years ago. With that, expectations of engineers have changed. Read: expectations around output have increased, and I don't see us backtracking anytime soon.
The problem is that most engineering leaders haven't actually said that out loud to their teams. They've noticed the shift, they've watched what's possible, but they haven't reset the bar explicitly. That gap between what's now achievable and what's officially expected is exactly where confusion and resentment grow. We're living in an age where it's easy to forget to level-set on new expectations, which often leads to surprises on performance reviews.
A thank you from this week’s sponsor:
Ship Docs Your Team Is Actually Proud Of
Mintlify helps you create fast, beautiful docs that developers actually enjoy using. Write in markdown, sync with your repo, and deploy in minutes. Built-in components handle search, navigation, API references, and interactive examples out of the box, so you can focus on clear content instead of custom infrastructure.
Automatic versioning, analytics, and AI-powered search make it easy to scale as your product grows. AI-powered workflows keep your docs accurate with every pull request.
Whether you're a dev, a technical writer, in devrel, or beyond, Mintlify fits into the way you already work and helps your documentation keep pace with your product.
There's a version of this conversation that feels uncomfortable to have. You don't want to seem like you're punishing engineers who are more skeptical of AI, or rewarding people who vibe-code their way to shipping features that fall apart in production. That discomfort is valid. It's easy to think of this age we're in as binary: you use AI or you don't. You use AI to write quality code or you use AI to generate AI slop. I encourage you to think instead about where your engineers fall on the spectrum of AI adoption, ranging from total skeptic ("I will never use AI") to total sycophant ("AI can do no wrong").
At this stage, incorporating AI into your workflow is table stakes. Not using it at all, on principle or out of habit, is increasingly a choice with professional consequences, the same way refusing to use version control would've been. Engineers who opt out entirely are making their job harder in a way that's hard to justify, and they're likely falling behind peers who aren't. This actually isn't limited to the field of engineering, either: non-technical fields are seeing an uptick in expectations around day-to-day AI adoption.
That said, "use AI" is not a useful expectation. What you're really asking for is intentional adoption: engineers who are actively building judgment about when AI helps, when it doesn't, and how to verify what it produces.
Output expectations have gone up. That's true. The amount of work a skilled engineer can move through in a sprint has expanded meaningfully, and anyone who manages engineers is aware of this whether they've said it explicitly or not. We've eliminated an important bottleneck around output. (This has also brought to the surface new bottlenecks, as I previously wrote about here.)
But here's the failure mode leaders fall into: they absorb the velocity gains silently and just keep adding to the backlog. The team ships more, the backlog grows to match, and nothing ever feels done. Engineers who were using AI to get ahead find themselves just running faster on the same treadmill. That's how you burn out people who are actually performing well.
I've been having conversations with my own team about this. Some of my engineers feel they can multi-task on tickets now, starting several tasks, each in its own worktree. Others still insist on seeing one task through to completion before starting the next. I'm calling this out because both paths can lead to burnout in different ways: multi-taskers can feel cognitive overload from trying to do too much at once, and single-threaders can feel they're not moving quickly enough and work longer hours trying to close more tickets in the same amount of time.
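For the multi-taskers, the mechanics usually look something like this: each ticket gets its own working directory and branch, so in-progress changes never collide. A minimal sketch (the repo and branch names here are hypothetical, not from my team's setup):

```shell
set -e

# A throwaway repo standing in for a real project.
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per ticket: a separate directory with its own
# checked-out branch, sharing the same underlying repository.
git worktree add -q ../ticket-101 -b ticket-101
git worktree add -q ../ticket-102 -b ticket-102

# Each path now holds an independent checkout an agent (or the
# engineer) can work in without touching the others.
git worktree list
```

The upside is that an agent can churn in one worktree while the engineer reviews another; the cost is exactly the context-switching overhead described above.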
The reset on velocity expectations has to come with an honest conversation about what those gains are for. Is it to ship more features? Clear technical debt that was always too expensive to touch? Get ahead of the roadmap for the first time in two years? The team needs to know, because otherwise the message they receive is just: more, faster, indefinitely.
When your output increases, you're also widening the gap between what an engineer can produce and what they actually understand. There are real benefits to this: if you're tasked with fixing a bug in a repo that's not in your primary language, AI can help you understand and resolve the bug without a full understanding of the language.
But it's also important to remind your team that they're still responsible for outputs, even if they didn't write them. AI-assisted code is still your code. AI-written tests are still your tests. If something ships broken, the author is accountable, not the model that helped write it. An increase in regressions in production could be a natural result of increasing output in a given period, but it could also mean your team isn't paying close enough attention to what they're shipping.
Accountability hasn't changed. What's changed is that engineers now have more surface area to be accountable for, which raises, not lowers, the stakes on judgment and review. AI has made observability substantially more important, but it doesn't relieve engineers of their own responsibility. Early detection of issues and automatic rollbacks help in the moment, but they shouldn't be treated as a replacement for writing high-quality, trustworthy code.
If you're setting expectations for your team, this is where the nuance lives. Output is up, but quality standards aren't negotiable. And the engineers who just ship AI output without developing the judgment to review it are a liability, even if they look productive in the short term.
What good looks like now
The performance bar has shifted, and it's worth naming what you're actually looking for:
Engineers who know which tasks benefit from AI assistance and which ones require focused thinking without it
Engineers who review AI output with real skepticism, not just a quick skim before committing
Engineers who are faster than they were, but not at the cost of correctness or maintainability
Engineers who are honest when AI-generated code is creating technical debt they'd want to address
What you're not looking for is engineers who use AI as a crutch and ship things they don't understand, or engineers who refuse to use it at all out of stubbornness. Both are problems.
How to communicate this to your team
Don't roll this out as a policy. Roll it out as a conversation.
Start by naming the shift directly: AI has changed what's achievable, and you want to talk about what that means for expectations. Be specific about what you're asking for: intentional adoption, maintained ownership of output, quality that hasn't dropped even as velocity has gone up. Ask what's getting in the way. Some engineers will be further along than you expect. Others will have real concerns about burnout or about quality eroding under pressure to ship more.
Give people a way to raise those concerns without it feeling like they're pushing back on AI adoption broadly. The goal isn't more output at any cost. The goal is more output without burning out the team or accumulating debt you'll pay back in the next incident.
Set a clear standard, state it plainly, and then actually hold to it. If engineers are using AI well and moving faster, that shouldn't just become the new floor that gets immediately piled on. The gains need to go somewhere visible, or you'll lose the people who were actually doing the work right.

