In partnership with

Top Brands Use minisocial to Create Content That Converts

Want content that drives engagement, boosts conversions, and goes viral? Here's how.

minisocial combines micro-influencer activations with high-performing UGC creation. Join top brands like Plant People, immi, Imperfect Foods, and Topicals to see results like:

  • TikTok ads performing in the top 1% for CVR

  • A 50% drop in cost per add-to-cart

  • A 92% surge in organic video views

  • Over a 30% increase in ROAS

With minisocial, it's simple: create your brief in 10 minutes, approve your curated creators, and download scroll-stopping content with Whitelisting/Partnership Ad code access baked in!

👋 Hello fellow Ladderers!

Slop! Slop! Slop! It’s everywhere. And yes, you can blame the lazy writers - but you certainly can’t blame the tools. Generative AI has lowered the barrier for producing the written word - but it doesn’t have to lower the bar.

This week we’re going super-practical. I’m going to take you through a detailed and powerful framework for you to get the very most ‘you’ out of any LLM tool (ChatGPT, Claude, you name it).

And in the process you’re going to learn exactly why most people produce slop, and the foundations of how it happens within the technology itself.

It’ll change the way you think about using LLMs.

Today’s shares hit on AI becoming less theoretical and more operational: search traffic is being revalued, service bots are carrying real workload, and martech may finally be past its explosive growth era. The case studies are all practical over theoretical, so you’re off to a flyer first thing this coming week.

If you missed last week’s practical walk-through of ‘reverse benchmarking’, you can catch up here.

🗞 In The News

  • 🔎 Google adds the web back to AI search (Ars Technica)

  • 🧱 Peak martech has arrived. Finally. (ChiefMartech)

  • 🏠 Airbnb’s service bot starts earning its keep (CX Dive)

  • 🧭 B2B software buying moves into the answer box (Foundation)

  • 📺 B2B TV targeting gets a very LinkedIn accent (Marketing Dive)

💼 Case Studies: Case Closed

🧰 You Won’t Blame These Tools

  • 🧠 How AI-pilled are you? - Benchmarks organisational AI fluency without pretending vibe checks are strategy.

  • 📊 Zappy by ZapDigits - Turns marketing performance data into client-ready reports and plain-English answers.

  • 📈 Omniflow HQ - Turns spreadsheets into AI-assisted budget analysis, forecasts, variance explanations, and risk flags.

  • 🐞 Bugzy - Gives QA teams a cleaner workflow from bug report through release decision.

  • 📱 AppMySite - Builds iOS and Android apps without writing code, useful when “just make an app” somehow lands on marketing.

Today’s feature

Let's Put “Delve” To The Sword

🗡️ The “Show Don’t Tell” Approach To Make AI Work For You 🖋️

~ 5 minutes 17 seconds to read

AI DIDN’T INVENT THE EM-DASH 😅

I have been integrating generative AI into my marketing outputs and workflows since 2017 - long before ChatGPT launched and made "prompt engineering" a dinner-party conversation.

Over the past few years, I have watched the technology evolve from simple, amnesiac chat interfaces to today's robust ecosystem of system prompts, knowledge bases, project folders, Retrieval-Augmented Generation (RAG), and MCP connectors. The infrastructure for context loading is now extraordinarily powerful.

Yet, despite this massive leap in capability, the complaint I hear from marketing leaders remains stubbornly consistent: "The output sounds like a generic corporate brochure."

They blame the tool. They assume the model just isn't smart enough yet.

But the truth is simpler: a foundational lack of understanding of what generative AI is, how it works, and therefore how to use it. The problem isn't the tool. The problem is that we are treating AI like a human copywriter who understands adjectives, rather than a statistical sampling engine that needs patterns.

If your AI content feels generic, it is because your systems are generic.

This is not a theoretical essay on the future of work. This is a practical, five-step guide to help you stop fighting the machine and start building a brand voice system that actually works across any major LLM - whether you use ChatGPT, Claude, or Gemini.

STEP 1: UNDERSTAND THE MACHINE (ESCAPING THE MEDIAN TRAP) 🪤

Before you can steer the machine, you need to understand why it defaults to generic output. Large Language Models are first trained by predicting the next word across billions of internet documents — this is where the "median corporate blog" problem starts, not ends. The model internalises the statistical centre of whatever it was trained on.

A subsequent fine-tuning stage — typically using Reinforcement Learning from Human Feedback (RLHF) or similar techniques — then shapes how the model behaves when given instructions.

This is literally where your Claudes separate from your Geminis.

Human annotators compare outputs and choose which they prefer; the model learns to produce what scores well with them. But these aren't "typical" people — they're a selected group with specific demographics, cultural assumptions, and instructions from the platform training them.

The combined result: when you ask for something "professional but friendly," you're getting what a particular group of annotators, trained on a particular corpus, found most acceptable - not a genuine creative interpretation.

Prompting well means giving the model enough specific signal to escape that gravitational pull.
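
If you like seeing things in code, here's a toy sketch of that gravitational pull in Python. The five-word vocabulary and the scores are invented for illustration - a real model weighs something like 100,000 tokens - but the mechanic is the same: softmax, then sample.

```python
import math
import random
from collections import Counter

# Toy next-token scores after a prompt like "Our brand is..."
# (invented numbers - purely for illustration)
scores = {
    "innovative": 4.0,   # the statistical centre: safe, common, generic
    "committed": 3.2,
    "passionate": 3.0,
    "feral": 0.5,        # distinctive, but statistically unlikely
    "unrepentant": 0.3,
}

def sample_next_token(scores: dict, temperature: float = 1.0) -> str:
    """Softmax over the scores, then sample - the basic move of any LLM."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# With no stronger signal in the context, "innovative" wins over half the time.
print(Counter(sample_next_token(scores) for _ in range(1_000)))
```

Every step that follows is about reshaping those scores before the sampling happens - loading enough of your own pattern into the context that the distinctive word can actually win.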

STEP 2: SHOW, DON'T TELL (YOUR CORE PRINCIPLE)💡

Most marketers try to solve the generic output problem by writing longer, more complex prompts. They will write a three-page instruction manual with twenty rules for how to write a headline. This fundamentally misunderstands how the technology works.

LLMs are pattern recognisers first. When you give them a rule - "write with energy and specificity" - they have to translate an abstraction into behaviour. When you give them an example, they have nothing to translate. The pattern is already there.

What's actually happening when you add examples to a prompt isn't learning - the model isn't updating. It's priming: you're shifting what the model treats as the target distribution for that response. One well-chosen example does more to define the target than three paragraphs of adjectives, because the model can activate the pattern directly rather than interpret your description of it.

If you want the AI to write a compelling headline, give it one perfect headline and explain why it works. Then give it one terrible headline and explain why it fails. Contrastive examples - good and bad together - outperform either alone.
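
Here's what that looks like in practice - a minimal sketch of a contrastive prompt assembled in Python. The headlines, the 'why' commentary, and the product are all invented placeholders; swap in real examples from your own copy.

```python
# A minimal contrastive few-shot prompt. All examples are placeholders.
GOOD = {
    "headline": "Cut your reporting time from 6 hours to 20 minutes",
    "why": "Specific numbers, concrete outcome, no adjectives doing the work.",
}
BAD = {
    "headline": "Revolutionise your workflow with cutting-edge solutions",
    "why": "Abstract buzzwords, no outcome, could be any product on earth.",
}

prompt = f"""Write one headline for our reporting tool.

GOOD example: "{GOOD['headline']}"
Why it works: {GOOD['why']}

BAD example: "{BAD['headline']}"
Why it fails: {BAD['why']}

Match the pattern of the GOOD example. Avoid the pattern of the BAD one."""

# Send `prompt` through whichever client you use (OpenAI, Anthropic,
# Gemini) - the shape of the prompt is the point, not the provider.
print(prompt)
```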

The caveat: this holds most strongly for stylistic and creative tasks. For structured reasoning or multi-step logic (“agentic” tasks or “skills”), explicit instructions still carry weight. The real unlock is combining both - tight constraints plus precise examples - not choosing one over the other.

We over-engineer our instructions and under-engineer our examples. One brilliant example is worth a thousand adjectives.

STEP 3: REPLACE ADJECTIVES WITH STRUCTURAL PATTERNS 🧬

We have all seen the expensive, agency-designed brand books. They proudly declare that the brand is "Bold, Authentic, and Human." This might pass for aligning a marketing team, but it creates a specific problem for AI - not because the model can't read these words, but because it reads them too broadly.

"Authentic" exists in the model's training across millions of wildly different contexts. When you hand it that word without constraint, it defaults to the statistical centre of all of them. You get the median of every brand that has ever called itself authentic.

Which is to say: nothing in particular.

Feeding your brand book into a Custom GPT and expecting magic is a recipe for disappointment. The model needs the architecture of your voice, not the vibe.

"Friendly" is an adjective. An AI-readable constraint is: "Use first-person plural (we/our). Ask rhetorical questions. Use contractions. Keep sentences under 15 words."

These constraints don't replace the model's understanding - they narrow the space it operates in. Less interpretive latitude, more consistent output.
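
Sketched out, the translation from adjective to architecture might look like this - the rules below are illustrative, not canonical; derive your own from copy you already love.

```python
# Translating brand-book adjectives into AI-readable constraints.
# Every rule here is an invented example - write yours from real copy.
VOICE_CONSTRAINTS = {
    "friendly": [
        "Use first-person plural (we/our).",
        "Ask at most one rhetorical question per piece.",
        "Use contractions (we're, don't, it's).",
        "Keep sentences under 15 words.",
    ],
    "bold": [
        "Open with the claim, not the context.",
        "No hedging phrases (perhaps, arguably, in some ways).",
    ],
}

def constraints_block(adjectives):
    """Render the structural rules behind each adjective as prompt text."""
    lines = []
    for adj in adjectives:
        lines.append(f"{adj.title()} means, concretely:")
        lines.extend(f"- {rule}" for rule in VOICE_CONSTRAINTS[adj])
    return "\n".join(lines)

print(constraints_block(["friendly", "bold"]))
```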

Examples define the target. Structural constraints define the boundaries. When you combine both, the model has something to aim at and a fence around the field. That's when your output stops feeling generic.

STEP 4: TELL IT WHAT TO DO, NOT WHAT NOT TO DO 🙅

When marketers realise the AI is producing too much corporate jargon, their first instinct is to write: "Don't use buzzwords" or "Don't be corporate." This is far less effective than it appears - and the reason is in how LLMs generate text.

LLMs are next-token predictors. They work by selecting forward from context. When your instruction is abstract and negative - "don't be corporate" - you've told the model what to avoid but given it no alternative distribution to aim at. With no positive target, it defaults to what it knows best: the statistical centre of everything it trained on. Which, in a professional writing context, is exactly the corporate tone you were trying to escape.

This isn't quite the same as the human "don't think of a pink elephant" effect. Instruction-tuned models can follow specific negatives reasonably well - "don't use passive voice" works. The failure point is vague negatives, where the model has no concrete replacement to reach for.

The fix is the same either way: pair every prohibition with a positive direction.

Instead of "never say 'leverage,'" instruct the model to "use plain verbs - use, apply, build - instead of leverage, utilise, or optimise."

Show the replacement. Give the model somewhere to go, not just a door to avoid.
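
In sketch form, that's a mapping, not a blacklist - the word pairs below are a starting point, not a canon.

```python
# Pair every banned word with its replacement, then render the pairs
# as a positive instruction. Word choices are illustrative.
REPLACEMENTS = {
    "leverage": "use",
    "utilise": "use",
    "optimise": "improve",
    "synergy": "teamwork",
    "solutions": "products",
}

def positive_instruction(replacements):
    """Turn 'don't say X' into 'say Y instead of X' - a target, not a void."""
    pairs = ", ".join(f"'{good}' instead of '{bad}'"
                      for bad, good in replacements.items())
    return f"Use plain words: {pairs}."

print(positive_instruction(REPLACEMENTS))
# -> Use plain words: 'use' instead of 'leverage', 'use' instead of 'utilise', ...
```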

Negative instructions define the problem. Positive instructions define the path. You need both.

STEP 5: BUILD THE SYSTEM (THE COMPOUNDING EFFECT)

The final step is moving from prompt engineering to context engineering. If you are re-explaining your brand voice every time you open a new chat, you are paying a "Blank Slate Tax" - burning time and tokens recreating context that should already exist.

Today's tools - Projects in Claude, Custom GPTs, Gemini Gems - let you load that context persistently. Build a master voice document: your structural constraints, your positive replacements, your contrastive examples. Load it into your system prompt or project folder once, and it carries forward into every session.
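
In code terms, the whole system collapses to a few lines. This sketch assumes a voice.md file you maintain and a stubbed-out call_llm helper - both placeholders for your own setup; Projects, Custom GPTs, and Gems do the equivalent through their UIs.

```python
from pathlib import Path

# The master voice document: constraints, replacements, and contrastive
# examples, maintained like source code. "voice.md" is a placeholder path.
VOICE_SYSTEM_PROMPT = Path("voice.md").read_text()

def call_llm(system: str, user: str) -> str:
    """Stub - swap in your client library's chat call. Every major API
    accepts a system prompt alongside the user message."""
    raise NotImplementedError("wire up your provider here")

def draft(task: str) -> str:
    # Every request rides on the same persistent context:
    # no Blank Slate Tax, no re-explaining the brand each session.
    return call_llm(system=VOICE_SYSTEM_PROMPT, user=task)

# draft("Write three subject lines for the spring launch email.")
```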

Two caveats worth knowing. First, longer system prompts are rarely better. A bloated context document can dilute its own instructions - the model's attention distributes across everything in the window, so precision beats comprehensiveness.

Second, the model isn't learning from your corrections. When it gets something wrong and you update the source document, you're not teaching it - you're recalibrating the context it operates within. The distinction matters: you're improving the system, not the model.

That said, the compounding effect is real. Every refinement you make to the document sharpens the signal. A brand voice system that gets five minutes of maintenance a week looks dramatically different after six months than one written once and left alone.

Don't just fix the output. Fix the source.

CONCLUSION: THE PRECISION PRINCIPLE 🪒

Every step in this framework points at the same underlying truth: AI doesn't fail because it's unintelligent. It fails because it's underspecified.

The model defaults to generic because generic is where the statistical weight lives. Your job - the actual work of prompt engineering - is to move the target. Not with longer instructions, but with better ones. Not with adjectives, but with architecture. Not with broad prohibitions, but with specific directions.

None of this is magic. You will still get bad outputs. The model will still occasionally ignore a constraint you've carefully written, produce a sentence that reads like a sappy LinkedIn post, or reach for an adjacent buzzword you asked it to avoid. The goal isn't perfection. It's raising the floor.

A well-built context system doesn't make the AI creative. It makes the AI consistently useful and accurate - which, for most marketing teams, is actually the harder problem to solve.

The marketers who will get the most from AI aren't the ones who find the best tool. They're the ones who build the best system around whichever tool they use.

That system development part is still entirely human.

If you enjoyed this edition, please forward it to a friend who’s looking to level-up their content and creation game - they’ll love you for it (and I will too) ⏭️ 💌

Thanks for climbing today,

Troy Muir | The Ladder

PS. When you’re ready, here’s how I can help you:

Martech House is a hand-picked, private peer group for senior marketing, digital and martech leaders to get sharper thinking, better signals, and more honest conversations than they’ll find at industry events. Applications are now open for the next intake if you want to be part of it (APAC only) - click here to learn more.

🙋 Got a Question? I Might Just Have Some Answers.

Each week I'm here to answer any question you might have in the space of marketing, strategy, leadership, digital and everything in between.

Just hit 'reply' and let me know what's on your mind, and I'll share my answer with the community the very next week, including a special shout out (if you're into that, otherwise we can keep it anon) 🥸

How is this working for you?

Lying only makes it worse for both of us.
