In Q2 of 2016, Taylor Pearson and I collaborated on the launch of his first course. After 3 months of experimentation and fine-tuning, we produced a result that allowed us to take screenshots like this:
Taylor would caution me to include a caveat: the launch in the “after” photo was to a finely-curated cohort.
I want to go one further and make the caveat the entire point of this case study.
I love reading success stories, but we’re often left with correlation/causation confusion: did you convert gangbusters because your copy was fire, or because you found a seamless product-market-fit for a bleeding-neck pain?
Taylor and I took an iterative approach to his course rollout:
- A “soft launch” in April to his most engaged cohort to-date: 150 people who had downloaded his Antifragile Planning Template, upon which the course would be based. We wrote copy based on the results of a “deep dive” survey to that group.
- A secondary launch in May to a cohort of around 1000 (randomly selected from his email list), incorporating product and positioning improvements gleaned from results and surveys.
- A “rollout” in late June to a carefully segmented cohort of the whole list, again incorporating the learnings and survey results from the second generation launch.
The “rolling” launch gave us a luxury not available to all course-creators: a “control group”. In at least 3 instances, we changed only one variable and saw an order-of-magnitude difference in results.
(In at least a few other instances, I changed too many things, creating confusion about what was working and what wasn’t, a mistake I won’t repeat.)
All the same, there are at least two things I can tell you with high confidence worked. (I’ll also tell you about the things we have a good hunch about, but for which we can’t prove causation.)
2 Things We Know for Sure Worked
For the final (“successful”) round, we launched to a group of only 260, all of whom had “opted in” after we sent a week of “teaser” content from the course to Taylor’s entire list.
A quick digression on mechanics: Taylor chose 3 tantalizing excerpts from his course and embedded each on its own landing page on his site. Then, after bi-monthly “teasers” at the end of Taylor’s regular essays, we hit the whole list with emails on Monday, Wednesday, and Friday, each linking to one of the course excerpts.
We asked people to “opt in” just to participate in the launch, and set a time-limit, to maximize enrollment.
By asking people to “opt in”, and selling only to those who demonstrated interest, we accomplished two things:
- We identified the cohort of Taylor’s entire list with the most intense interest in the course, against which we could measure conversions.
- We felt we could sell more unabashedly to those who had made the micro-commitment to opt in, since they were already marked as interested. As I’ll describe below, we emailed these folks an average of twice a day during launch week.
Note from Taylor: I wanted to do this because I didn’t want to send a shitload of launch emails to everyone on the list: people who were potentially a long-term asset, but not interested in the masterclass. I’ve had lots of interesting opportunities come up from cool people, partners at 500 Startups, PE guys, etc. whom I’d like to stay in communication with but who don’t want to get 10 emails in three weeks during a launch. I also try to be very conscious that most people have two inboxes: one they check and one they let accumulate as a swipe file/newsletter. I do everything I can to stay in the former.
Dopamine Week/Content Marketing
This is a secondary effect of the above. While creating the course excerpts gave us a way to identify the most engaged cohort of Taylor’s list, it also let us “educate” them and “warm them up”, so they knew roughly what to expect before we launched.
After we converted badly to the May cohort (those chosen at random), Taylor proposed this style of “feeler” launch for the final round. We called it “dopamine week” because it let people feel the “dopamine release” of getting a small win from his material.
How do we know for sure it worked? We were measuring whether giving people a dose of the course material would make them want more, so the right metrics were whether they opted in for the launch and, more importantly, whether many of them ended up buying.
If either the response to the “teaser/dopamine” week or eventual sales from the opted-in cohort had been weak, we’d have known content from the course wasn’t the 80/20 of selling it. But people did opt in, and a high percentage of those folks ended up buying.
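To make those metrics concrete, here’s the funnel math in a few lines of Python. Only the 260 opt-ins come from this launch; the list size and sales count are placeholder numbers made up for illustration, not Taylor’s real figures.

```python
# Funnel math for an opt-in launch. Only the 260 opt-ins come from this
# case study; list_size and buyers are hypothetical placeholders.
list_size = 10_000   # hypothetical total email list
opted_in  = 260      # actual size of the final launch cohort
buyers    = 35       # hypothetical sales from that cohort

opt_in_rate  = opted_in / list_size   # how well the teaser week pulled
buy_rate     = buyers / opted_in      # how well the launch converted the warm cohort
overall_rate = buyers / list_size     # end-to-end conversion of the whole list

print(f"opt-in: {opt_in_rate:.1%}, buy: {buy_rate:.1%}, overall: {overall_rate:.2%}")
```

The point of splitting the funnel this way is that a weak teaser week and a weak sales week fail at different stages, so you can see which one to fix.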
3 Things We’re Pretty Sure Worked
Repositioning the Copy
Between each pair of launch iterations, Taylor and I surveyed both his buyers and non-buyers, asking the former why they decided to buy, and the latter why not. We also embedded “survey” style questions in the emails leading up to launch week: questions like “what’s your biggest challenge when it comes to productivity?” and “what’s the issue you most hope the course will help with?”
Can you ever be 100% sure copy made the difference unless you A/B test two sales pages with the same cohort and get statistically significant data? No. But we’re pretty sure it helped.
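We never ran that test, but for anyone who wants to put a number on “statistically significant”: a two-proportion z-test is the standard way to check whether two cohorts’ conversion rates differ by more than chance. A minimal sketch using only Python’s standard library, with hypothetical counts (not our actual sales figures):

```python
# Two-proportion z-test: did variant A really convert better than variant B?
# All counts below are hypothetical, not the launch's actual numbers.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF tail
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. variant A: 30 sales from 260 opt-ins; variant B: 12 from 260
z, p = two_proportion_z(30, 260, 12, 260)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```

With cohorts the size of ours (a few hundred people), only fairly large differences in conversion rate clear the significance bar, which is part of why we lean on “pretty sure” rather than “proven”.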
What we changed: After reviewing Taylor’s initial deep dive survey, I had a mental picture of a new entrepreneur, or somebody just on the verge of leaving a day job. He got into this for the freedom to choose his own path and live anywhere, but entrepreneurship is proving a lot more stressful than advertised.
After some spirited discussions, Taylor convinced me to think of someone with 5-10 years in the game, with ambitions to emulate heroes like Warren Buffett and Steve Jobs.
Why we’re pretty sure it helped: While the final version of the copy converted well, it’s hard to know exactly how big a part copy played. But we have pretty strong anecdotal evidence. We started noticing that the survey results we’d get back were “echoing” the story we were already telling about the course. When it started to feel like we weren’t discovering anything “new” from the responses, we were pretty sure we had it dialed in.
Nate: My hunch about this is that focusing on a single pain point and value prop probably had a greater effect than the particular one we chose. Once we had conceptual agreement, you were a lot more comfortable turning up the emotional intensity of the copy.
Taylor’s Note: I basically didn’t want to sell someone that I had a magic pill that would let them quit their job and start a business. It’s a productivity masterclass so yea, it might help you do that, but that’s not the purpose. This was more to do with long-term branding considerations. TBH, it would not surprise me if Nate’s original copy would have sold better.
Massive Email Launch Week
“What would launch week look like if we sent twice as much email?” Taylor asked me on a strategy call.
What we changed: We decided to send a variation on an email I have in my evergreen funnel for 80/20 Drummer. Whenever someone clicked to the sales page but didn’t buy, they’d get an email: “I notice you checked out the course but haven’t pulled the trigger. Is there anything I can answer for you?”
Taylor took it one step further, asking to get on the phone with the clickers – even offering his cell # in the email.
The result was a real-time stream of questions and feedback that we then incorporated into future emails. Instead of sending something rote and generic, like “Reminder – course is still open”, this allowed us to talk about questions he’d received just the day before. We even got to have some fun with it: a customer who nearly decided not to buy because of a video (more on that below) wrote a constructive critique that became the whole subject of an email: “I get it – I should stick to books”.
Why we’re pretty sure it helped: Again, the only true way to measure the effectiveness of a strategy like this would be an A/B test between two random samples of the same cohort. But the volume of questions Taylor’s extra emails generated, not to mention the open and click rates of the “off-the-cuff” emails, told us they drove engagement pretty well.
Nate: What gave me the “spidey sense” this was succeeding was the response your self-effacing, “off-the-cuff” emails got. It’s also hard to argue that more proof is bad, and it was great to be able to incorporate some of the early rave-reviews into the subsequent emails.
Taylor’s Note: I felt way more comfortable sending a lot of messages during launch week because of the dopamine campaign. We explicitly told people, “if you click here, we are going to try very hard to sell you stuff, because you’ve expressed that you think you could benefit from it.”
We saw almost no unsubscribes from the cohort that opted in.
Shelling Out for Decent HTML for the Sales Page
After we switched over to Teachable’s native sales page, Taylor made the decision to shell out some money to my developer for decent HTML. The result was a clean sales page.
Why we’re pretty sure it helped: We didn’t split test it, but plenty of others have. And it’s hard to go wrong with a clear offer above-the-fold, and a visual flow that draws the reader in. We asked my developer to make a banner that included the video player on the left and clear CTA on the right, with the text headline visible below.
Compare that to the default sales page, which forced us to embed the video player beneath the welcome banner, and felt very distracting and “out of the box”, instead of cleanly-branded in Taylor’s aesthetic:
Nate: We did all that hard work on the positioning, and it was a bummer to see it buried by clunky graphics on the first-generation sales page. My feeling is: it’s intuitive that without good copy people won’t buy. And if you’re going to bother to write good copy, you should make sure people can see it clearly, otherwise your efforts are wasted.
2 Things I Wouldn’t Do Again
The following are things that I know for sure didn’t help, and I’m 85% sure hurt.
Changing the Format of the Deliverable Between Launches
When we rolled out to the initial cohort, we offered a personal coaching call with Taylor as the deliverable. The idea was to validate demand without building the course ahead of time. While the call sold, it’s my belief that we didn’t get a lot of good information about future deliverables/price points from the initial launch.
Why I’m pretty sure it hurt us: When we switched from a live call to a webinar, conversions went down, and when we switched from a webinar to the first generation of the Teachable course, they went down again. Because we were also launching to progressively colder cohorts, it’s impossible to tell exactly what part the changing deliverables played, but that’s just the point: next time I’d validate with something as close to the final deliverable as I could. Then we’d know for sure that the deliverable wasn’t the problem.
Initially, we offered Taylor’s personal time, then took it off the table in future incarnations. As such, we knew people would pay for Taylor’s time, but we didn’t know if they’d pay for an automated course.
If I had this to do over-again, I’d solve the problem of not-building-anything-until-we-got-paid by trying to pre-sell an MVP version of the eventual Teachable course, then spending 2 weeks building that MVP.
Taylor: I did this because it basically let me get paid to do customer development. I learned a lot that shaped the actual content. So I agree with Nate from a purely marketing perspective that it wasn’t a good benchmark, but that wasn’t necessarily the point in my mind. We did a second cohort that got what amounted to a live 2 hour webinar recording (so still automated) which I think was a better test.
Raising the Price Between Cohorts
Luckily, Taylor’s mid-launch-week intervention showed us price was an issue during the June launch. Otherwise, we’d have no way of knowing.
After launching in May at $199, we decided to test $299 for the next round. I didn’t voice any objection. I should have.
Raising the price gave us the “too many variables” conundrum again, and it’s intuitively a bad move when you’re rolling out to a “colder” cohort. I’d have kept the price the same: then we’d have known from Day 1 how well the course would perform with a randomly-selected cohort.
Were I to do this over, I’d start with the highest price at which I could find product-market fit with my initial cohort, then see how conversions fared as we rolled it out to less-engaged members of Taylor’s audience.
Nate: I’ve got a few big takeaways. Things that, if we were doing this again knowing what we know now, I’d want to nail if we got everything else wrong.
- Product-market fit matters. Launching to a cohort that’s interested and “warm” can be make-or-break.
- Content matters: letting people feel what it’s like to use your solution likewise made the difference between zero interest and a lot of interest.
- Validate with as close to the final deliverable, at the final price point, as possible.