Memo: Automation Before Authority

January 10, 2026 · 6 min read

To: CEOs navigating AI product decisions
From: Matt Kantor
Re: Why the pressure to "put AI on it" produces inferior products


The pattern is recognizable now. And unfortunately very common.

Board meeting. Analyst call. Competitive review. Someone says: "We need AI in the product."

Not because there's a clear problem AI solves.
Because there's a clear fear: everyone else is doing it, and if we don't, we're screwed.

So the mandate goes out: "Put AI on it."

Product identifies existing features that could be "AI-powered."
Engineering builds technically sophisticated implementations.
Marketing adds "AI-driven" to the copy.

Six months later, adoption is disappointing. Customers ignore the new features or actively complain they made the product harder to use.

This isn't a technology problem.
It's an authority problem disguised as a feature request.

The Wrong Problem Gets Optimized

Here's what actually happens under the surface:

The pressure to implement AI comes from fear of competitive displacement.
That pressure bypasses the authority to ask: "What problem are we solving, and for whom?"

Instead, teams optimize for a different problem entirely:
"How do we ship AI features fast enough to satisfy the board/market/analysts?"

That's the attractive problem. It's tangible. It has a deadline. It can be measured in features shipped.

It's also the wrong problem.

Because the issue isn't whether you have AI in your product.

It's whether AI creates capability your customers couldn't get before, or whether it just makes existing features more complex.

Most companies skip that distinction entirely.

When "AI-Powered" Makes Things Worse

Example: Natural language querying for data warehouses.

The pitch sounds transformative: "Ask your data questions in plain English instead of writing SQL."

In demos, it's impressive.
In production, it creates a specific kind of failure.

Why?

Because the people who actually use data warehouses - marketing analysts - don't want natural language. They want precision.

They speak SQL fluently. They know the schema cold. They want to nest queries N levels deep.

Natural language feels imprecise, unreliable, and slow. It introduces ambiguity where they had clarity.
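To make the precision point concrete, here's a minimal sketch. The orders table, the data, and the question are all hypothetical, not from any real deployment: the nested query an analyst writes without hesitation, next to the "plain English" version that leaves every hard decision implicit.

```python
import sqlite3

# Hypothetical schema, for illustration only: one orders table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL, created_at TEXT);
    INSERT INTO orders VALUES
        (1, 80.0, '2025-11-03'), (1, 120.0, '2025-12-14'),
        (2, 300.0, '2025-12-02'), (3, 45.0, '2025-10-21');
""")

# The analyst's question, stated precisely: which customers spent more this
# December than their own monthly average? Nested, explicit, unambiguous.
precise = """
    SELECT customer_id FROM (
        SELECT customer_id,
               SUM(CASE WHEN created_at LIKE '2025-12%' THEN amount END) AS dec_spend,
               SUM(amount) / COUNT(DISTINCT substr(created_at, 1, 7)) AS monthly_avg
        FROM orders
        GROUP BY customer_id
    )
    WHERE dec_spend > monthly_avg
"""

# The same question in plain English hides every hard decision:
# which December? averaged over what window? gross or net of refunds?
vague = "Which customers spent more than usual in December?"

print(conn.execute(precise).fetchall())  # [(1,)]
```

Every choice the SQL makes explicitly - which window, which aggregate, which comparison - the natural-language version forces a model to guess.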

You've taken a tool designed for power users and made it worse by making it "easier."

Unless.

Unless the actual problem was: "Non-technical executives need data insights but don't know SQL."

That's a different problem. Different users. Different definition of "better."

Same technology. Completely different product decisions.

One makes the feature ridiculous.
The other unlocks new capability - provided those users keep their expectations modest.

The difference isn't the AI. It's whether you had authority to define the problem before implementing the solution.

Automation Before Authority

Here's the phrase that gives it a name:

Automation before authority.

Teams are being told to automate (add AI to all the things) before there's clarity on:

  • What problem this solves

  • For which users specifically

  • What trade-offs are acceptable

  • What "better" actually means

  • What problems it could solve if aimed at something more fundamental

That's not a technology gap. It's an authority gap.

Nobody has permission to slow down and ask:
"Should we be building this at all?"

And in some cases, people don't see the potential because the problem has been invisible for so long.

The deadline already exists. The pressure is real. The competitive fear is legitimate.

So teams do what they're told: they automate.

But automation without authority over what to automate creates:

  • Features users don't want, or that don't work

  • Complexity that slows everything else

  • Engineering maintaining systems that add no value

  • Organizational resentment between teams

  • The ongoing cost of managing models and paying per query

The opportunity cost isn't just wasted engineering time.

It's this: competitors who took the time to rethink the problem from first principles are building fundamentally different products.

You're adding AI to existing features.
They're using AI to create capabilities that didn't exist before.

You're playing feature parity.
They're playing different games entirely.

The Hidden Trade-Off

There's a second-order effect most leadership teams don't see coming:

As you add more AI features without rethinking the product architecture, you make future transformation harder.

Each AI feature you bolt onto existing infrastructure creates:

  • Technical debt that constrains what's possible next

  • User expectations that are difficult to change later

  • Organizational inertia around "how we do AI here"

You're not future-proofing. You're locking yourself in.

(This isn't new; we're just adding the AI flavor. What is new is the pressure to "AI" everything.)

The companies that will win aren't the ones shipping AI features fastest.

They're the ones asking: "If we were building this product from scratch today, AI-native, what would be fundamentally different?"

That question requires authority to rethink everything.
Most teams don't have that authority right now.

They have authority to add features. Not to redesign products.

What Actually Needs to Happen

Before implementing AI anywhere in your product, one question clarifies everything:

"What capability would transform our product if we had it, but we've never been able to afford or build it?"

Not: "What existing features can we make AI-powered?"

But: "What becomes possible now that wasn't possible before?"

That question does three things:

  1. It reframes the conversation from feature enhancement to capability unlock

  2. It forces specificity about user value, not technology implementation

  3. It separates "keeping up with competitors" from "creating differentiated value"

Most AI initiatives right now are optimizing the wrong problem:

Wrong problem: "How do we add AI fast enough to not look behind?"
Right problem: "What can we build now that creates unfair advantage?"

The first produces feature parity.

The second produces market position.

I recently met with a company that was thinking about AI at a fundamental level. Rather than using AI to make existing features easier, they were removing features altogether.

If AI lets a user skip five steps, what value is there in the user even knowing those steps exist?

That's a first-principles approach.
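Here's a sketch of what that can look like in an interface. All the names are hypothetical, not that company's actual product: the steps still happen, but they stop being something the user has to know.

```python
# Hypothetical report-builder, for illustration only.

# Before: five steps the user has to know exist, in the right order.
def pick_source(name: str) -> dict:
    return {"source": name}

def choose_columns(q: dict, cols: list[str]) -> dict:
    return {**q, "columns": cols}

def apply_filters(q: dict, filters: dict) -> dict:
    return {**q, "filters": filters}

def pick_chart(q: dict, kind: str) -> dict:
    return {**q, "chart": kind}

def schedule(q: dict, when: str) -> dict:
    return {**q, "schedule": when}

# After: the user states an outcome. A planner (stubbed here; in a real
# product this is where the model does its work) fills in the five steps.
def build_report(intent: str) -> dict:
    q = pick_source("orders")
    q = choose_columns(q, ["region", "revenue"])
    q = apply_filters(q, {"quarter": "Q4"})
    q = pick_chart(q, "bar")
    return schedule(q, "every Monday 9am")

print(build_report("weekly Q4 revenue by region"))
```

The design question isn't whether the steps run. It's whether they deserve a place in the user's mental model.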

The Real Constraint

The issue isn't lack of AI capability.
Every company has access to the same models, APIs, and tools.

The constraint is decision authority under ambiguity.

Do you have authority to:

  • Question whether AI is the right solution?

  • Redefine the problem you're solving?

  • Say "not yet" even when competitors are shipping?

  • Redesign from first principles instead of bolting on features?

If the answer is no, and the pressure to ship AI features is overriding the authority to think clearly about what to build, then you're not actually solving a product problem.

You're solving an organizational authority problem.

And no amount of sophisticated AI implementation will fix that.

What This Means

The companies that get transformational value from AI in their products will be the ones that:

  1. Reclaimed authority to define the problem before implementing solutions

  2. Asked "what becomes possible?" not "what gets faster?"

  3. Were willing to redesign, not just enhance

  4. Moved slower upfront to move faster in the right direction

The ones that don't will have impressive feature lists and disappointed users.

The margin between those outcomes isn't technology.

It's whether you have authority to ask the right question before you automate the wrong answer.

- Matt


Occasionally, I circulate early drafts of future memos before they’re public.

If that would be useful, you can request access here.