One of the most famous ad campaigns in history is Yellow Tail wine's "Crazy Rooster" campaign. It's so famous because it broke two cardinal rules of advertising:
- You don't want to be memorable for the wrong reasons.
- You don't want to offend your customer base.
What made this campaign so effective was that it didn't just break these rules, but completely shattered them by doing the exact opposite of what conventional wisdom would tell you.
They built their entire brand around being offensive. They were known for making crazy claims like, "Yellow Tail tastes like s**t." They put up billboards in airports saying, "If it wasn't for stupid people, we wouldn't have any customers." And they even used slogans like, "The only thing yellow about Yellow Tail is the color of our piss." These things wouldn't actually make anybody want to buy Yellow Tail wine… Right?
What they found was that people loved it. And when people love something, they tell other people about it. The end result? A $500 million brand in Australia with no marketing spend. Why? Because they focused on the right things.
Von Restorff Effect: Bonkers Marketing Campaigns Spawn Successful Brands
Ok... so now... this is an important place to take a pause.
I've been playing around with the idea of using AI to generate blog posts for my site, mostly out of curiosity. I wanted to see if AI could produce quality written content, with the hope of comparing analytics for a few AI-generated articles against some written entirely on my own.
There are tools available online which let you feed a couple bullet points into a generator and select the type of writing you're looking for (article, email copy, bulleted lists, etc). You pick a few options and hit go, and in moments you've got some fresh new prose to use for whatever you please.
Validating the "Crazy Rooster" campaign
After reading the generated article stub above, I wanted to find some images of the marketing campaign mentioned in the text, so off to the search engines I went. This, for me, was a truly jaw-dropping moment: There wasn't a single image I could find which mentions or references the "Crazy Rooster" campaign.
I dug deeper. I soon found that not only were there no images from the campaign, but I couldn't find any references to that ad campaign anywhere. Yellow Tail's site, marketing blogs, and search engines all returned nothing. I even searched for smaller fragments of text from the article above, and came back with nothing meaningful.
As far as I can tell, it never happened. It doesn't exist.
In other words, AI had written a really compelling article which was completely and utterly fabricated!
What I suspect really happened is this: starting from the information I fed it, the AI wove together partial anecdotes from a large number of publicly available sources, creating a believable, compelling, yet non-existent story about a real brand's marketing strategy.
That is fucking impressive... and kind of terrifying!
Friends, I'm here to tell you: the AI tools that are available right now can author content which is shockingly good -- but there is a massively important caveat that we all need to understand about AI-powered writing:
Can you imagine if I had accepted this as truth, and published it without surrounding context? The only reason I discovered this wasn't true was because I took the time to look up corroborating information - but it would be super easy to skip that step, especially if you don't understand some of the fundamentals of AI.
What is the Von Restorff Effect?
In principle, the Von Restorff Effect is simple: given a group of similar choices, people tend to remember the experience or product that defies expectations - the thing which stands out the most tends to be the stickiest, and the more memorable the unexpected experience, the stickier it will be.
For an extremely simple example, let's say I gave you 10 seconds to memorize as many animals as possible from a list like this one: Dog, Cat, Horse, Cow, Pig, Sheep, Goat, Duck, Hen, Owl.

I'd put money on any arbitrary person remembering Owl over some of the others mentioned in there - it's the one wild predator in a list of barnyard animals.
This phenomenon can also be used to make products stickier by building experiences which defy expectations - in any marketplace with significant competition, creating a stand-out experience can give your product a massive advantage.
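A toy simulation can make the idea concrete. This is only a sketch under invented numbers (the weight boost, recall count, and animal list are my own illustration, not from any study): each simulated subject recalls a handful of items from the list, and the one isolated item gets a higher sampling weight, standing in for its extra memorability.

```python
import random

def simulate_recall(items, isolate, boost=3.0, recalled=4, trials=10_000, seed=42):
    """Toy Von Restorff model: each trial, a 'subject' recalls `recalled`
    items, sampled without replacement; the isolated item's weight is boosted."""
    rng = random.Random(seed)
    counts = {item: 0 for item in items}
    for _ in range(trials):
        pool = [(it, boost if it == isolate else 1.0) for it in items]
        for _ in range(recalled):
            # Weighted draw without replacement from the remaining pool.
            total = sum(w for _, w in pool)
            r = rng.uniform(0, total)
            acc = 0.0
            for i, (it, w) in enumerate(pool):
                acc += w
                if r <= acc:
                    counts[it] += 1
                    pool.pop(i)
                    break
    return {it: c / trials for it, c in counts.items()}

animals = ["Dog", "Cat", "Horse", "Cow", "Pig", "Sheep", "Goat", "Duck", "Hen", "Owl"]
rates = simulate_recall(animals, isolate="Owl")
# The isolated item ends up recalled noticeably more often than its peers.
```

Run it and the recall rate for "Owl" comes out well above every other animal's - the same outcome the effect predicts for the memorization game above.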
How this post came to be
For articles on this site, I keep a big list of topics I think I want to write about. I slowly add to individual topics over time, and once I feel there's enough information on a given topic to produce a meaningful article, I spend some time organizing the notes into prose, and then publish.
To generate this article, I grabbed one of the ideas from that list: the Von Restorff Effect, a very real psychological phenomenon that caught my attention some time back.
I fed the topic and a few cliff notes into an AI tool to see what it could generate - really, it was just a few bullet points and the thesis statement for the article. I wanted to talk about how the Von Restorff Effect can be used in product marketing to create something sticky; something that will be memorable.
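The input amounted to roughly this shape - a hypothetical sketch of assembling a thesis and bullets into a prompt, not the actual tool's interface:

```python
def build_prompt(thesis, bullets):
    """Assemble a thesis statement and a few bullet points into one
    prompt string, the rough shape of input these generator tools take."""
    lines = [f"Thesis: {thesis}", "Key points:"]
    lines += [f"- {b}" for b in bullets]
    lines.append("Write a blog article based on the above.")
    return "\n".join(lines)

prompt = build_prompt(
    "The Von Restorff Effect can make products more memorable",
    ["people remember what stands out", "use it in product marketing"],
)
```

That tiny amount of context is everything the model had to work with - which makes the detail of what came back all the more striking.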
And so I fed the tool and clicked go. The article above is what came out of it. What really impressed me initially was the quality of the writing: no grammatical changes were needed, there was a beginning, a middle, and an end, and it put together a super compelling story that served the points I was hoping to make.
In short, I was really impressed... and then absolutely stunned when I realized what happened.
Learning from this experience
So I came away from this experience with two important learnings:
The Von Restorff Effect and product marketing
The nugget here for people building products is simple: if you can create an experience that completely defies expectations, your product will be far stickier with your potential customers.
A caveat on AI tools
It's worth noting that I have intentionally omitted the name of the AI tool that led me to this experience. For me, this is because I'm struggling with whether or not it's responsible to point my readers to tools which can so easily lead you into a false sense of certainty. With great power, as they say...
Given that most of my audience are developers, I think it should suffice to say that I used a product based on OpenAI's GPT-3 - an extremely robust AI model for generating language.
This is a story that I've struggled to tell because of how important I feel it is.
I can tell you this much: I certainly won't be writing blog posts with AI any time soon, but I will be periodically checking in on this stuff to see how much better it's getting. I think there's an ethical responsibility to understand the limits and capabilities of AI content creation tools.
AI and Code Generation with GitHub Copilot
If you're a software developer, you may have come across GitHub Copilot. Billed as "Your AI Pair Programmer", it's a freely available tool which will generate code for you with little prompting. You can do as little as writing a few comments, and Copilot will generate everything from classes to algorithms to complex patterns like login flows and cryptography.
You should really spend some time using Copilot if you haven't already. It's pretty amazing to write a few lines of pseudocode in comments and see what it can generate for you. It feels like magic.
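To give a flavor of that comment-driven workflow, here's the shape of it - the comment is what you'd type, and the function body is my own illustration of the kind of completion Copilot tends to suggest (its actual output varies):

```python
# In your editor you write only the comment below; the assistant
# proposes the function body underneath it.

# return True if `s` reads the same forwards and backwards, ignoring case
def is_palindrome(s: str) -> bool:
    normalized = s.lower()
    return normalized == normalized[::-1]
```

A one-line comment in, a working function out - it's easy to see why it feels like magic.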
AI-generated code is still risky
But, just like when using AI to generate writing, be warned: GitHub Copilot isn't perfect. It's pulling from a massive amount of public code and making guesses based on what you tell it to do. It won't necessarily write the world's most performant code, and in some cases it will generate code with serious flaws. Since its early release as a technical preview, all kinds of problems have been reported with Copilot, including one report which gained attention because Copilot appeared to be generating code that included (other people's) valid API keys.
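The hardcoded-key failure mode has a simple defense, sketched below: keep secrets out of source entirely and load them from the environment. The `API_KEY` variable name is just an illustration.

```python
import os

def load_api_key(env_var: str = "API_KEY") -> str:
    """Read a secret from the environment instead of hardcoding it.

    If an AI assistant ever completes a line like
        API_KEY = "sk-..."
    with a literal key, reject that suggestion -- it may be lifted
    from someone else's public code, and it doesn't belong in yours.
    """
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"{env_var} is not set; export it before running.")
    return key
```

Treating every generated secret-looking string as suspect is a cheap habit that guards against exactly the class of bug in that report.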
A cautionary conclusion
As we introduce AI into more parts of our work and our lives, we need to be aware of the risks of using it. As technologists, we have an ethical responsibility to communicate these risks to our non-technical neighbors in ways they can understand. It's my hope that this anecdote about trying to generate a blog post with AI can serve as a warning you can explain to your aunt Judy, your boss, or the influencers in your feed who may not seem to understand the risks of using AI.