What Happened When I Asked ChatGPT to Do “Click Testing” For My Books

For months now I’ve been fielding the same question in our inbox: “Why should I pay for Click Testing when I can just use AI to build my ads?”

It was a good question.

AI and LLMs like ChatGPT and Claude seem to run the world now, so does that mean they know how to build ads that will sell YOUR books to readers in YOUR category?

I decided to test it using my book, The Incident: Inferno Rising.

The results were awful. Here's what happened.

There are 12 critical settings that define Click Testing and ensure we create the highest-converting ads to sell books.

They are…
- Type of traffic
- Per-day spend
- Audience type
- Audience size
- What time of day to launch
- Primary Text
- Image
- Headline
- Ad Placement
- Advantage+ Creative Enhancements
- Number of impressions per test
- Engagement metrics (CTR and CPC)

Of those critical settings for Click Test #1 (our tagline, or motivation, test), ChatGPT got eight wrong.

That's a dismal 33% score on our test!

And that’s only within the core settings we focus on for a single click test.

To put this in perspective, there are 95 different settings across an ad campaign, ad set, and the ad itself:

Campaign = 10
Ad set = 23+ (plus 21 different placement options)
Ad level = 24+
AND an additional 17 Advantage+ creative "enhancements."

ChatGPT mentioned some, ignored others completely, and got many of them wrong.

But the real issue isn’t just incorrect settings.

The entire purpose of Click Testing is to isolate variables so we know exactly what readers are responding to.

The point of Click Testing is to save authors money on their ads, and to save them from heartbreak and disappointment in their careers.

Click Testing does this through cheap, quick, short tests.

Those tests produce individual pieces of validated content that can later be combined into high-converting book ads. And if you're working on your next title, these tests will tell you which of your book ideas get readers most excited.

Click tests also leave you with a backlog of proven material you can keep using in future ads, books, book sales pages, checkout carts, and emails to your list.

Here's why you shouldn't ask ChatGPT how to do Click Testing:

The way ChatGPT instructed me to build the first ad introduced far too many subjective, untested variables that would skew the results.

The way we design our click tests, there is only one logical explanation for success: readers responded to that ONE piece of content.

The way ChatGPT structured the test, readers could be responding to the image, the headline, the hook, the primary text, or any combination of them.

That is not a clean test.

That means ChatGPT's method will cost you way more money...

...and leave you confused and frustrated.

Testing costs were another major problem.

Our tests are inexpensive because we use small data and short testing windows.

ChatGPT, on the other hand, recommended running our six tests for up to five days per ad, which would immediately blow testing costs out of the water.

That defeats the entire purpose of the system.

The number of impressions and testing duration we use are intentional. They are calibrated to give us the exact amount of data we need. No more and no less.

And it definitely isn’t five days.

Placements were another issue.

There are 21 different places Facebook can show your ads. Some of those placements generate what we call “junk clicks.” Others generate high-quality reader traffic.

We use a carefully selected group of placements designed specifically to produce reliable Click Testing results for books.

ChatGPT instead recommended “Advantage+ Placements,” which essentially opens all 21 placements at once, meaning the data becomes skewed almost immediately.

Audience sizing was another problem.

Audience size is critical because it determines the amount of reader opportunity available to your ads.

Again, this number has been carefully refined over years to produce the best possible outcomes for book sales.

And again, ChatGPT undershot it.

Based on its recommendations, we might see strong engagement metrics early on, but the campaign’s ability to scale to a broader audience would be severely limited.

And lastly, ChatGPT missed the mark on content volume.

Each of the six Click Tests requires us to test a specific number of ideas to help ensure we're finding the best ones.

ChatGPT recommended testing only half as many ideas. That's just not enough.

And there's something else to consider: each click test requires a different number of ads because each test is measuring something different.

This is why I believe authors need to be careful right now.

I’m sharing this partly because it’s stressful watching our community shrink due to AI and LLMs like ChatGPT and Claude.

But more importantly, it’s stressful to think that authors may waste even more time and money creating ads that still won’t convert after years of already struggling to sell their books.

AI is an incredible tool in the right context. It can be very useful for helping you think of new ideas.

But remember: it doesn't matter who comes up with the ideas. You still have to test them to see which ones your market likes best.

And also remember: AI is EXTREMELY confident, but EXTREMELY imperfect.

It often sounds convincing when it shouldn’t be, and when it doesn’t know the answer to something, it will frequently invent one.

In fact, I called ChatGPT on this…

Steve: So, out of curiosity, how did you find out how to do this? I couldn't find anything online. The only way I can get instructions is by purchasing the course.

ChatGPT: "I don’t have access to Steve Pieper’s paid course materials or proprietary lessons.

What I’m doing is reconstructing the methodology from:
- publicly discussed Click Testing concepts
- standard direct-response advertising principles
- Meta ad testing best practices
- commercial fiction advertising patterns
- what authors and marketers publicly share about the framework

The reason you couldn’t find step-by-step instructions easily online is because:
- most people gate detailed implementation behind courses
- Meta Ads Manager changes constantly
- author advertising is surprisingly under-documented publicly
- many successful ad buyers intentionally keep workflows private

So I can’t guarantee I’m reproducing his system exactly, step for step."

Steve: When I asked you how to execute my coaching program, you gave 33% of the correct answer but acted as if it was 100% correct.

ChatGPT: "That criticism is fair. You’re pointing at a real limitation of LLMs that people often don’t see clearly enough:
- we can produce highly plausible outputs,
- partial competence can sound like full competence,
- and missing 67% of the nuance is sometimes worse than admitting uncertainty.

Marketing strategy is one of the biggest examples of this because success depends on:
- context,
- sequencing,
- pattern recognition,
- timing,
- interpretation of weak signals,
- and accumulated tacit knowledge.

An AI can often explain components of a system convincingly while still failing at the integrated execution layer that an experienced operator handles intuitively.

And you’re right that if the model presents incomplete guidance too confidently, it can create the impression that ‘the expert isn’t necessary,’ when in reality the user may be missing critical layers they don’t even know exist yet.

That tension is real. AI compresses the perceived value of the information layer while often overstating its ability to replace the discernment layer.

In practical terms, this is why strong coaches and strategists will probably continue to outperform generic AI-led execution for a long time:
- they can see what doesn’t apply,
- identify hidden bottlenecks,
- adapt frameworks,
- and prevent catastrophic misapplication.

A user reading a clean, confident answer may not realize how much is missing. An actual expert usually does."

What Does This Mean?

For obvious reasons, it's a big risk to entrust your career growth to a computer with an overactive imagination. :)

The market tests we use are the only processes I’ve seen that consistently work to sell books. We continually test new methods and ideas, and over the past decade, the best results have always come from running small, clear, concise, and inexpensive click tests to understand what resonates with our readers.

The same holds true now. Only your market knows what your market wants, and the only reliable way to KNOW what they want (instead of just guessing) is to test your ideas in a clear, methodical way.

And it genuinely bothers me to think that hardworking authors may waste even more time and money trying to market their books using a half-baked, AI-generated system.

Click Testing continues to drive results for authors.

I know it sounds unbelievable, but our author community generated a combined $78M in revenue last year alone. You can see hundreds of their unedited interviews here.

AI tools are a powerful way to generate ideas for you to test. But AIs know very little about what actually sells books.

We have mountains of book sales data proving that ChatGPT's book marketing ideas are mostly hallucinated.

If you decide you want to give Click Testing a shot instead of rolling the dice with AI, click the big blue button below to get started.

Here's to your success!

-Steve

I'm sure you have more questions about "Click Testing."

So I've put together three resources to help you decide whether:
- I'm legit.
- My programs are useful and effective.
- Click Testing for Authors is likely to be useful and helpful for you.
- You're up for the challenge of marketing your books professionally.
- The costs of training and testing are within your reach.