Why AI Taste and Judgment Beat Tools — Martin Pagh on Winning the Slop War

Everyone has access to the same AI tools. So what actually separates the work that lands from the slop that clutters your feed? Martin Pagh Ludwigsen, Director of Creative Technology and AI at Goodby Silverstein & Partners, has a direct answer: taste, intent, and knowing exactly where to let the machine stop.

Key Takeaways

  • AI creates the most leverage in ideation, not execution — video output quality still requires heavy post production to reach professional standards.
  • The 80% of creative work that AI can handle is now everyone’s baseline. The competitive edge lives entirely in the last 10–20% that requires taste and original ideas.
  • AI has zero ability to evaluate its own output. Taste — the human capacity to identify what’s good and why — is the only moat that scales.
  • Over-prototyping with AI can backfire: show clients too polished a concept and they give final-production feedback on something that was meant to sell an idea.
  • Bet deep on one specific skill or domain rather than trying to be an AI-assisted generalist. Everyone can reach 80% expertise on anything now — mastery of one thing is the differentiation.

If your mental model of AI in creative work is “prompt in, content out,” Martin Pagh Ludwigsen would like a word. As the Director of Creative Technology and AI at Goodby Silverstein & Partners — one of the most respected ad agencies in the world — Martin has spent three years embedded inside the creative department (not IT, not strategy) figuring out what AI actually unlocks for professional creatives. His answer is more nuanced, and more useful, than anything you’ll find in a LinkedIn carousel.

What follows are the unvarnished lessons from that conversation: where AI genuinely moves the needle, where it falls flat, and why the most valuable thing you can develop right now has nothing to do with which model you’re using.

AI’s Real Power Is in Ideation, Not Execution — And the Gap Is Bigger Than You Think

The loudest promises around generative AI focus on output: video models, image generators, text-to-everything pipelines. Martin’s day-to-day experience tells a different story. When asked directly where AI creates the most leverage in the creative process, his answer was unambiguous: ideation, by a significant margin.

The reason is practical. On the execution side, even the most advanced video models available to a flagship agency routinely fail to produce work that clears the bar for client-facing production. Resolution is low. Clips come out H.264-encoded, already lossy before post production even starts. Motion within scenes is still inconsistent. None of that is a dealbreaker for prototyping — but it means a substantial amount of traditional post production work still has to happen to make anything genuinely finished.

Where AI changes the game is earlier in the process. Martin describes using live AI conversation modes to run what amounts to a stream-of-consciousness ideation session — talking out half-formed ideas, contradicting himself, leaving sentences unfinished — and then asking the AI to organize and surface what was actually useful in that mess.

“What I really enjoy about that is that that is a mode where I can sort of just like talk out into space and come up with my ideas as I am talking, sort of like work on them and keep saying things. And I will say nonsense, I will not complete my sentences, I will interrupt myself. But that doesn’t matter because I know that there’s this live feature of the AI that’s constantly listening to me.”

He applies the same approach to agency brainstorming sessions. Instead of whiteboards and designated note-takers, the team records the conversation, talks freely, then asks the AI to organize everything at the end. The notes are better than any human note-taker would produce, and the creative team stays focused on the ideas instead of the documentation.

But the most interesting application is using AI to stress-test ideas rather than generate them. Martin’s approach is to instruct the AI to ask pointed questions — to push back on the idea, probe its weaknesses, help articulate what hasn’t been worked out yet. His operating principle: the idea should come from you. AI is genuinely useful for helping you go deeper on it, or helping you realize it wasn’t worth pursuing in the first place.
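This devil's-advocate pattern is easy to reproduce in any chat interface or API. As a minimal sketch (the prompt wording and the `challenge_idea` helper are illustrative assumptions, not Martin's actual setup), it amounts to pinning a critic-mode instruction ahead of your idea before you ask for a response:

```python
# Sketch of an "idea stress-test" prompt in the spirit of Martin's approach:
# the AI is told to interrogate the idea, not extend or flatter it.
# The system-prompt wording and helper name are illustrative assumptions.

CRITIC_SYSTEM_PROMPT = (
    "You are a skeptical creative director. Do not improve or extend the "
    "idea. Ask pointed questions that expose its weakest assumptions, "
    "and flag anything that has not been thought through yet."
)

def challenge_idea(idea: str) -> list[dict]:
    """Build the message list you would pass to any chat-completion API."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"Here is my idea: {idea}"},
    ]

messages = challenge_idea("An 80s-style telethon for a fictional town")
print(messages[0]["role"])  # "system" — the critic instruction leads the exchange
```

The design point is the role separation: the idea stays in the user turn, authored by you, while the system turn only shapes how hard the model pushes back.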

Prototyping: The Middle Ground That’s Changing Client Relationships

Between the idea and the final product sits a phase that generative AI has quietly transformed: the prototype. For decades, advertising agencies sold video concepts to clients using scripts and a handful of supporting frames — storyboards and rough comps meant to communicate what the finished piece might look like. The problem was always the abstraction gap. Clients and creatives often weren’t picturing the same thing when they read the same script.

Generative AI has compressed that gap substantially. What used to require six to eight hours of Photoshop work to build a single polished comp can now be done in thirty minutes with multiple iterations. Adding motion to those stills — rough visualizations of how the final film might move — is now accessible before a single production dollar is spent.

That’s genuinely useful. But it creates a new problem: clients start giving final-production feedback on what was meant to be a concept visualization. Martin’s team ran into this directly on a recent internal video project — the client came back with brand guideline notes and character-design feedback on a piece that was explicitly scoped as an AI-generated prototype. The work had gotten close enough to finished that the client forgot the ground rules.

His framing for this: it’s the same issue UX designers face when they accidentally add color to wireframes. The moment something looks polished, feedback shifts to the polish. If you’re trying to get alignment on the concept, over-finishing the prototype works against you. Do enough to make people understand the idea. Stop before you’re fixing the logo in the bottom-left corner at 2am, because it’s probably worse than the version you generated at 2pm.

There’s also a counterintuitive limit to how finished you want something to look when you’re pitching to Hollywood-level collaborators. Rich Silverstein — one of the agency’s founders — has been trying to sell a musical concept to Broadway and discovered that showing something too polished can actually kill interest. Directors and filmmakers want creative territory to inhabit. If it looks done, they have no reason to get involved.

The 80% Problem: Why “Good Enough” Is Now Everyone’s Baseline

Here’s the uncomfortable truth about the current state of generative AI: the tools are remarkable, and everyone has equal access to them. That means the 80% version of almost any creative output — the competent, technically acceptable, aesthetically passable version — is now table stakes. It’s what AI produces when pointed in a reasonable direction by someone with a reasonable prompt.

What that means in practice: the competitive advantage has shifted entirely to the last 10 to 20 percent. The part that requires taste, judgment, original ideas, and the ability to make choices that aren’t just statistically probable outputs from a model trained on existing work.

“AI has no taste. It comes back with something, and it has literally zero idea if what it came back with is good or bad. It just knows that according to probability and statistics and the mathematical model inside of it, it addresses what you asked it to do. But if it’s good or bad, it has no idea.”

This is the core of Martin’s argument, and it’s worth sitting with. When you use AI tools, you are commanding what he calls “an army of interns” — capable of generating volume at speed, occasionally producing something genuinely surprising, but constitutionally unable to evaluate their own output. That evaluation is entirely on you. The taste is entirely on you.

He points to Rick Rubin as a useful model for how to think about this. Rubin famously has no technical production skills. What he brings is taste, and a deep confidence in that taste — the ability to say what’s working and what isn’t, and to trust that judgment enough to act on it. Martin’s advice to creatives and agencies: develop your inner Rick Rubin. Not because technical skills don’t matter, but because the technical floor has been raised to the point where taste is now the scarce resource.

Taste Is Learnable — and It’s the Only Moat That Matters

The good news, according to Martin: taste is not fixed. It’s something you can deliberately develop. The mechanism is simple, even if it requires sustained effort — expose yourself to genuinely great work. Great music, great film, great theater. Not to copy it, but to train your own instincts for what “good” actually feels like, so that when AI hands you ten outputs, you can identify the one worth pursuing and know why.

This connects directly to why purely AI-generated content — the “AI celebrities,” the faceless generated accounts, the content with no discernible human behind it — tends to fail to build real audiences. Audiences are human. They want to connect with other humans. And they can tell, even when the pixels are technically perfect, when there’s no authorial intent behind what they’re watching. Martin frames this through reception theory: what an author encodes into their work, and what a viewer brings to it. When the machine is driving and the human is just making small adjustments, that connection breaks down. People feel it even if they can’t articulate it.

For content creators specifically, his read is that this is actually good news. The competition isn’t AI-generated slop. The competition is other humans with good ideas. That’s always been the competition. AI just made the tools cheaper and the volume higher.

The Giraffes on Horseback Salad Project: A Case Study in Using AI’s Weaknesses as Features

The clearest illustration of what separates professional AI-assisted creative work from the flood of generated content is a project Martin’s team completed with the Dalí Museum: Giraffes on Horseback Salad.

The source material was a real piece of history — a screenplay that Salvador Dalí tried to sell to Hollywood studios around 1937, when he was in exile in the United States during the Spanish Civil War. Studios rejected it. The technology didn’t exist to bring a Dalí fever dream to screen, and the project was forgotten for nearly ninety years.

Google Cloud and Google DeepMind came to Goodby Silverstein with their then-new video model (Veo 1, now two major generations behind the current release), wanting to see what a flagship creative agency could do with it. Martin’s team, working with creative directors Lukas and Katsuo, made a deliberate choice: instead of fighting AI’s tendency toward the strange and hallucinatory, they built around it.

In the domain of Dalí surrealism, a clock with a wrong Roman numeral isn’t a bug. It’s consistent with the aesthetic. Figures that move with slightly unreal physics aren’t production failures — they’re appropriate to the dreamlike register the work demands. The team chose to treat the model’s instabilities as features rather than problems to fix, and built a seventeen-minute short film that earned two Webby Award nominations.

There’s a pragmatic lesson embedded in the project that applies beyond surrealist art films. The team discovered early on that Google’s video model was outputting H.264 clips — already compressed, already artifact-laden, already limited in dynamic range before post production touched them. Martin flagged this directly to the Google team, who were surprised: they’d assumed the goal was for the system to spit out finished films. The Goodby Silverstein team had to explain that no professional editor works with source material at that compression level. The model’s output was the raw material, not the product.

That gap — between what AI researchers think creatives want and what professional production actually requires — is still present. But the project demonstrated that closing it is possible when both sides are willing to have honest conversations about workflow. The resulting film, made with what is now two-generation-old technology, still holds up as a demonstration of what happens when human creative intent drives AI tools rather than the other way around.

Roles, Teams, and What’s Actually Changing Inside Agencies

The organizational structure of a major advertising agency hasn’t been radically restructured around AI — at least not yet, and not at Goodby Silverstein. Projects still flow from client brief to strategy to creative director to creative team (typically an art director and a copywriter working in tandem). What’s changed is the innovation layer on top of that structure.

Martin leads an internal unit called Labs — a small team explicitly tasked with figuring out whether technology can be used to execute ideas in ways that haven’t been done before. The key distinction: Labs operates with explicit permission to fail. When a creative team has an idea that seems like it might work through some novel technological approach, they bring it to Labs. Labs experiments. Sometimes the experiment works and becomes an add-on that elevates the whole campaign. Sometimes it doesn’t work, and the campaign proceeds on its original concept without the tech layer.

The Doritos x Stranger Things campaign illustrates this clearly. The core concept was a recreation of an 80s-style telethon raising money for the fictional town of Hawkins. The creative team asked Labs: what if people could actually call into the telethon and have a real conversation with an AI-generated 80s celebrity? Labs figured out it was technically possible. In the client presentation, they demonstrated it live — calling an AI-rendered 80s celebrity mid-meeting. Doritos and Netflix loved it. The technology add-on became part of the sold campaign.

If the technology hadn’t been ready, the telethon video would have existed on its own and still worked as a campaign. The Labs model meant the agency could take the risk of exploring the harder version without betting the entire campaign on it.

What’s worth noting for anyone running a smaller operation: storyboard artists are being hired less. That work has been largely absorbed by AI-assisted visualization. But the creative directors who evaluate which ideas are worth prototyping — that judgment hasn’t been automated, and there’s no obvious path to automating it. The waterfall production model (finish step A before starting step B) is also loosening. Martin’s team was in full post production on a generated film when they seriously discussed recasting one of the characters — something that would have been logistically absurd in traditional production. They didn’t do it, but they could have.

The One Bet Worth Making Right Now

Martin’s answer to what people doing this work need to do to stay relevant is deliberately uncomfortable: pick one thing, go deep on it, and accept that you might be wrong.

His reference point is Amy Webb and the Future Today Institute, who this year retired their annual trend report entirely — replacing it with a “Convergence Outlook” — because trends move so fast that any trend report is outdated by the time it’s published. The advice to organizations in that environment: don’t try to cover everything. Bet on something specific.

The logic: AI has made it possible for anyone to become an 80% expert on almost anything quickly. That means being a generalist is no longer a defensible position. The only real differentiation comes from being genuinely excellent at something specific — and then letting AI handle the rest.

The scary part he doesn’t gloss over: you might bet on the wrong thing. You might pick Betamax when VHS is what wins. That’s the actual risk, and there’s no framework that eliminates it. The best hedge is to bet on something you’re genuinely passionate about, because you’ll always be better at something you care about than something you’re pursuing strategically but don’t actually love.

For creators, the practical translation: figure out what makes you specifically interesting and irreplaceable to your audience. Then use AI to handle the 80% that doesn’t require that. Stop trying to compete on volume or polish. Compete on being unmistakably yourself — because that’s the one thing the army of AI interns genuinely cannot generate.

About the Author

Michael Holmes is the founder and CEO of Vidpros, a trailblazer in video marketing solutions. Outside the office, Michael nurtures a growing community of professionals and shares his industry insights on the blog.