Judgment Is the Job

AI Made Building Easy. Knowing What to Build Just Got Expensive.

The cost of building software is collapsing. The cost of not knowing what to build is compounding. Every knowledge worker now faces the same challenge engineers have struggled with for decades.


I've argued before that AI isn't here to replace your team - it's here to give them time. I still believe that. But there's a harder question underneath: time to do what, exactly? Because if your people get six extra hours in their day and don't know what to do with them, you haven't gained anything. You've just gotten to the real problem faster.

And in large organizations, even getting to that point is a fight. The processes, the approval chains, the rigid ways of working - they don't just bend because a new tool showed up. Companies that have spent decades optimizing for predictability don't suddenly become fast because someone bought an AI subscription. The inertia is real, and it's often the biggest obstacle before anyone even gets to the question of what to focus on next.

I built this entire website - design system, CMS, content pipeline - with AI as my primary engineering partner. Work that would have taken a small agency months took weeks. And that's not even close to the most dramatic example out there. Startups are shipping products with three-person teams that would have needed fifteen two years ago. AI-first companies are hitting revenue-per-employee numbers that make traditional software firms look like they're standing still.

This isn't a trend. It's what happens when the cost of building software starts approaching zero. And the implications go way beyond engineering.

When Production Gets Cheap, Demand Doesn't Shrink

There's a reliable pattern in economic history. Every time the cost of producing something collapsed, the world didn't need less of it. It needed vastly more. Think about cars. Before Ford's assembly line, they were a luxury for the wealthy. Mass production didn't shrink the market. It created an entire civilization built around personal transportation - suburbs, highways, supply chains, an economy that literally runs on wheels.

Cheap video equipment didn't eliminate filmmakers. It created YouTube, TikTok, and a content economy worth hundreds of billions that couldn't have existed when production required a crew and a studio budget. Cameras in every pocket didn't replace professional photography. They made visual content the default language of the internet.

Software is about to go through the same thing, and we're barely scratching the surface of how big it gets.

I've spent years watching organizations run mission-critical processes on spreadsheets because building proper software was too expensive to justify. I've seen operations teams manually reconciling inventory across systems because the integration project kept getting deprioritized. I've watched mid-size companies buy enterprise platforms ten times bigger than what they need because there was nothing built for their scale.

The demand for software has always been there. What's been missing is affordable supply. When AI compresses the cost of building from hundreds of thousands to hundreds of dollars, every process currently held together by email and manual workarounds becomes a candidate for real tooling. Not a contraction. An explosion.

But more software in the world doesn't automatically mean your specific role is safe. What matters is where the bottleneck moved.

Building Got Cheap. Thinking Didn't.

I've watched enough product launches to know this: the ones that fail rarely fail because someone wrote bad code. They fail because nobody figured out what to build in the first place.

"Improve the onboarding flow" isn't a specification. "We need something like what our competitor has" isn't one either. These are starting points disguised as directions. The whole apparatus of product management - user stories, acceptance criteria, sprint reviews - exists because turning vague intent into buildable instructions is hard.

Something is shifting. When a product took the better part of a year and a serious budget, the price tag itself forced discipline. Stakeholders had to align. Requirements had to be specific. Getting it wrong meant wasting real money, so people thought carefully.

AI is dissolving that natural brake.

What nobody wants to admit: the old way of working often cost more in alignment than in execution. I've sat in rooms where the combined hourly rate of everyone at the table was higher than the actual work being discussed. With AI pushing execution costs even lower, that ratio gets absurd. The question nobody's asking out loud: are we ready to accept the risk that comes with speed? To skip the third round of stakeholder alignment and just try something, knowing the cost of being wrong is now a fraction of the cost of the meeting to decide?

You can now prototype something in a day for the cost of lunch. If the thinking behind it was sloppy, you didn't save a year. You wasted a day and possibly shipped something that creates more problems than it solves. The barrier to building dropped, but the cost of building the wrong thing didn't. It actually went up, because now you can make more mistakes faster.

The bottleneck used to be "can we build it?" Now it's "should we build it, and can we describe it well enough that the result actually works?"

The ability to define what needs to exist - clearly, testably - is becoming the scarcest and most valuable skill in any organization. And it doesn't belong to any one job title.

A Split Is Forming

I see this playing out across teams I work with, and it's not just an engineering story.

One group is pulling ahead fast. They think in outcomes, not tasks. They can describe what a system should do clearly enough that AI handles the execution - including overnight, while they sleep. They're not just using AI to speed up their existing workflow. They're rethinking what's possible when execution costs almost nothing. One person with the right mindset and tooling is producing what used to take a whole department.

I used a legal firm analogy in a previous post that fits here. For decades, a big firm with twenty average lawyers beat a small firm with two brilliant ones, simply because they had more people to cover more ground. AI flipped that. The small firm with sharp minds and good tooling can now compete with the big one. But if that big firm doesn't adapt, they don't just lose their advantage. They get overrun by the smaller, faster players who did.

The other group is using AI like a faster version of the tools they already had. Better autocomplete. Quicker first drafts. The same work, slightly accelerated. And the gap between these two groups is widening every month.

There's also a messy middle phase that nobody talks about. Early AI adoption often doesn't look like success. People get access to these tools and go wild - over-creating, over-ideating, prototyping things nobody asked for. Vibe coding gives you that instant hit of "I made this" and the gratification of self-producing for the first time. But output isn't value. The initial sugar rush of generating everything you can think of has to mature into the discipline of generating what actually matters. Most teams haven't made that leap yet.

The uncomfortable truth is that this split doesn't follow seniority lines the way you'd expect. I've seen senior people coast on their existing patterns while more adaptable colleagues run circles around them. The differentiator isn't years of experience. It's whether you're willing to rethink how you work from the ground up.

If the value you provide is primarily in executing known tasks, you're in a race against tools that get cheaper and more capable every quarter. If your value is in figuring out what tasks should exist and why, you're in a completely different position.

The Wall Between "Tech" and "Business" Is Disappearing

Here's what I find most interesting about this moment.

For decades, there was a clear division. Engineers built things. Business people decided what to build. The two groups spoke different languages and needed translators - usually the people we now call product managers - to bridge the gap.

That wall is crumbling, and from both sides.

I've watched financial planning evolve from quarterly decks and gut-feel conversations into structured models with defined inputs and testable assumptions. Strategy is starting to look like specification. In legal departments, contract review is shifting from pure judgment calls to pattern-based analysis against documented standards. In marketing, "let's try this campaign" is becoming structured experimentation with clear success metrics.

The common thread: work that used to be evaluated by instinct is being restructured into something verifiable. And once it's verifiable, it starts to follow the same logic as software - you can test it, iterate on it, and eventually automate parts of it.

A product manager writing a feature spec and a strategy consultant defining a market entry plan are doing the same cognitive work. They're translating fuzzy intent into precise enough instructions that someone - or something - can act on them. As AI gets better at handling the execution side, that translation skill becomes the whole game.

If you've been thinking "I'm not in tech, so this AI stuff affects me less" - that assumption has an expiration date. And it's sooner than most people think.

The Hidden Tax on Large Organizations

One more pattern I keep seeing, especially in the enterprise world I come from.

Quick check: how many meetings did you have this week? How many had an agenda sent beforehand? Minutes or action items afterward? How many actually required your active involvement beyond sitting there and nodding? And here's the real question - how many of you are using AI to run those meetings, capture decisions, and track follow-ups?

I have a personal fantasy about this. I want meetings to have a budget. Actual money. You need thirty minutes of a senior architect's time? That's $250 from your department's meeting allocation. Want the VP of Engineering in the room? $500. Watch how fast meeting culture changes when you have to rent people's attention instead of taking it for free.

A huge portion of knowledge work in large organizations doesn't create direct value. It exists to manage the organization itself. The alignment meetings, the status reports, the deck-building, the "can you loop me in on that thread" chains. I've sat through hundreds of these. It's the connective tissue that complex organizations need to function.

But most of that work is only necessary because the organization is large enough to require it. The reporting exists because there are too many teams to coordinate informally. The alignment meetings exist because information doesn't flow naturally through a 500-person org.

When organizations get leaner - and they are getting leaner - that coordination work doesn't get transformed by AI. It gets removed entirely. Fewer people means fewer handoffs, fewer status updates, fewer "syncing" meetings. The communication overhead drops faster than you'd expect.

Be honest with yourself about your role. If most of your day is spent making a large organization function smoothly, rather than producing something customers or the business directly benefit from, that's worth thinking about. Not panicking about. Thinking about. Because as organizations restructure around smaller, more capable teams, the roles that exist to manage complexity are the first to go when that complexity disappears.

What Actually Helps

I'm not going to pretend there's a clean framework for this. But having watched it play out across different organizations, a few things consistently separate the people who land well from the ones who get squeezed.

Get specific about outcomes. Engineers learned decades ago that "make it better" isn't actionable. They developed disciplines - acceptance criteria, test cases, definition of done - to force vague intent into precise language. Every knowledge worker needs that skill now. If you can't describe the success criteria for your work in terms someone could verify, you're operating on vibes. Vibes don't scale.

Build in ways to prove you're right (or wrong). Engineers write tests. The knowledge-work equivalent is building checkpoints into your outputs. Your analysis should cite its sources and assumptions. Your project plan should have milestones that are measurable, not just "complete phase 2." Your strategy should include the conditions under which you'd call it a failure. Work that can be verified can be trusted. Work that can't is just opinion with a nice font.
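To make this concrete, here's a minimal sketch of what "verifiable work" can look like in practice: success criteria written as data you can check, not vibes. The field names, metrics, and thresholds below are invented for illustration, not taken from any real project.

```python
# A hypothetical spec for a piece of knowledge work ("improve onboarding"),
# with success criteria expressed as checkable fields rather than vague intent.
# All names and numbers here are invented examples.

spec = {
    "goal": "Improve the onboarding flow",
    "success_criteria": [
        # Each criterion names a metric, a numeric target, and a data source.
        {"metric": "signup_completion_rate", "target": 0.60, "source": "analytics funnel"},
        {"metric": "median_time_to_first_action_sec", "target": 120, "source": "event logs"},
    ],
    # The conditions under which you'd call the work a failure.
    "failure_condition": "No criterion improves within 30 days",
}

def is_verifiable(spec: dict) -> bool:
    """A spec is verifiable if every criterion has a metric, a numeric
    target, and a data source, and a failure condition is stated."""
    return all(
        c.get("metric")
        and isinstance(c.get("target"), (int, float))
        and c.get("source")
        for c in spec.get("success_criteria", [])
    ) and bool(spec.get("failure_condition"))

print(is_verifiable(spec))
```

The point isn't the code itself - it's that once success criteria are structured like this, anyone (or anything) can check whether the work landed, which is exactly the discipline engineers built with acceptance criteria.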

Think about systems, not deliverables. There's a difference between producing a quarterly report and building a system that generates the insights that report was supposed to surface. One requires your time every three months. The other requires your thinking once and your attention occasionally. People who frame their work as systems build things that compound.

Be honest about where your value sits. Ask yourself what would happen to your role if your company suddenly operated with half the people and twice the clarity. If the answer is "my role probably disappears," that's a signal to start migrating toward work that directly moves the needle - things customers see, revenue you can point to, decisions that change direction.

The Good News

I want to be honest first: this transition is not easy. It brings fear. Disruption always does. A lot of energy in your organization will be spent not on the actual work, but on managing the friction. People who've never had access to powerful tools suddenly feel like they can do everyone else's job better. "Who are you to tell me how this should work when I can prototype it myself now?" That tension is real, and it's everywhere. Teams push back. Stakeholders feel threatened. The whole org chart feels like it's vibrating.

That's normal. Every major shift in how work gets done creates this kind of friction. The question isn't whether it'll be uncomfortable. It's whether you're willing to go through it and come out the other side with a different way of working entirely.

Because the gap between people who can direct AI effectively and those who can't is real and growing. But there's nothing fixed about which side you end up on. Specifying clearly, thinking in systems, understanding what AI does well and where it falls apart - these aren't talents. They're skills. They can be learned by anyone willing to rethink how they work.

The companies that figure out how to develop these skills across their teams - not just in the top performers - will have a structural edge that compounds fast. We're not talking about incremental productivity gains. The output differences between AI-native teams and traditional ones aren't measured in percentages.

The window to build these skills is now. The reason isn't that AI is going to plateau. It's that learning is cheaper when the field is still forming than when everyone else has already figured it out.

Start by specifying your work more exactly than you think you need to. Test your judgment against what AI produces. Ask yourself whether your outputs could be structured, measured, and verified.

The future of work isn't about who produces fastest. It's about who knows what's worth producing.