Launch HN: Just words (YC W24) – Optimize your product's copy for user growth
87 points | nehagetschoice | 2 years ago
“Copy” in this context means short-form content such as landing page titles or email subject lines. Small and seemingly insignificant changes to these can lead to massive growth gains. We observed this on many occasions while working at Twitter. Tweaking a push notification copy from “Jack tweeted” to “Jack just Tweeted” brought 800K additional users to Twitter.
However, coming up with the copy, testing it, and optimizing different variants across users would take us forever—sometimes months. There was no methodical way to optimize copy on the fly and use what we learned for subsequent iterations. The entire process was managed ad hoc in docs and spreadsheets.
After experiencing this pain at Twitter, we observed the same problem at other companies like Reddit. After years of this, we are convinced that there’s enough evidence for this pain across the industry.
In our experience, the main challenges with copy optimization are:
Engineering effort: Copies are hard-coded, either in code or config files. Every small change requires redeployment of the app. To run an A/B test, engineers must write if/else logic for each variant. As a result, one copy optimization becomes a 2-week project on an engineer’s roadmap, and companies are only able to prioritize a small number of changes a year.
Fragmented content: There is no copy repository, so companies lose track of the history of changes on a particular string, learnings from past experiments, and their occurrences across the product. With no systematic view, product teams make copy changes based on “vibes”. There is no way to fine-tune next iterations based on patterns obtained from previous experiments.
Lack of context: Companies either test 1 copy change at a time for all users, or rotate a pool of copies randomly. In an ideal world, they should be able to present the best copy to different users based on their context.
We built Just Words to solve these problems through 3 main features:
No-code copy experimentation: You can change any copy, A/B test it, and ship it to production with zero code changes. All copy changes made in our UI get published to a dynamic config system that controls the production copy and the experimentation logic. This is a one-time setup with 3 lines of code. Once it’s done, all copy changes and experiments can be done via the UI, without code changes, deploys or app releases.
Nucleus of all product copy: All product copy versioning, experiment learnings, and occurrences across the product live in one place. We are also building integrations with copywriting and experimentation tools like Statsig, so the entire workflow, from editing to shipping, can be managed and reviewed in one place. By storing all of this together, we can draw patterns across experiments to infer complex learnings over time and assist with future iterations.
Smart copy optimization: We run contextual Bayesian optimization to automatically decide the best-performing copy across many variants. This helps product teams pick the winner in a short amount of time with one experiment, instead of running many sequential A/B tests.
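To make the third feature concrete: in the simplest, context-free case, "contextual Bayesian optimization" over copy variants reduces to a Thompson-sampling bandit, which shifts traffic toward better-performing variants as evidence accumulates instead of splitting it evenly for the whole test. A minimal sketch with hypothetical click rates (illustrative only, not Just Words' actual implementation):

```python
import random

class ThompsonSampler:
    """Bernoulli Thompson sampling over copy variants (context-free case)."""

    def __init__(self, variants):
        # Beta(1, 1) prior on each variant's click-through rate,
        # stored as [alpha, beta] pseudo-counts.
        self.stats = {v: [1, 1] for v in variants}

    def choose(self):
        # Sample a plausible CTR for each variant; serve the best draw.
        draws = {v: random.betavariate(a, b) for v, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, variant, clicked):
        # Increment alpha on a click, beta on no click.
        self.stats[variant][0 if clicked else 1] += 1

sampler = ThompsonSampler(["Jack tweeted", "Jack just Tweeted"])
for _ in range(1000):
    v = sampler.choose()
    # Simulated user response; the hypothetical rates here are made up.
    clicked = random.random() < (0.05 if v == "Jack tweeted" else 0.08)
    sampler.update(v, clicked)
```

A contextual version would condition the sampled reward model on user features, so different users can get different winners; the bandit skeleton stays the same.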
We are opening up our private beta with this launch. Our pricing is straightforward: a 60-day refundable pilot for $2,000 (use discount code CTJW24) for one of the following use cases: landing pages, push notifications, email subject lines, or paid ad text. We will show visible growth gains to give you a substantial ROI on your pilot, and we will refund the amount if we fail to deliver. We are inviting companies with >2K monthly users to try us out here: https://forms.gle/Q3xthubQFfZcXZe88. (Sorry for the form link! We haven't built out a signup interface yet because our focus is on the core product for the time being. We'll add everything else later.)
We would love to get feedback from the HN community on (1) the flavor of problems you have experienced in the world of copy changes, (2) how your company or others are solving them, (3) feedback on the product so far, or (4) anything you'd like to share!
passwordoops|2 years ago
"Healthcare, infrastructure, K12 education and other essential services are a mess because we lack the talented resources to solve those fundamental problems"
"We largely do have the resources, it's just that they're all trying to get people to click ads, getting people to waste time endless scrolling on Reddit or Twitter or Instagram, managing their brands' social media accounts, etc."
This is not a problem in need of a solution. You both seem like really talented guys, and it's depressing that this is what you've decided will help make the world a better place.
That you're part of the YC24 cohort says a lot about what's wrong with the tech industry's priorities right now.
bradhilton|2 years ago
Don't hate the players, hate the game.
If we deregulated there would be a rush of innovation and the next YC cohorts would have many more startups targeting those industries.
rnts08|2 years ago
>> That you're part of the YC24 cohort says a lot about what's wrong with the tech industry's priorities right now
Couldn't agree more. It seems there's a big disconnect between which startups are getting founded and what would actually change the future.
Personally this ranks somewhere in-between web 3.0 and generic AI <feature>.
i_am_a_squirrel|2 years ago
So some sort of tracking knows that your opti-cop worked well on me for some other site you service, so then it tries a similar style for another site (who uses your service)?
bawejakunal|2 years ago
A few examples from industry:
1. Netflix artwork optimization: https://netflixtechblog.com/artwork-personalization-c589f074...
2. Duolingo notifications optimization: https://research.duolingo.com/papers/yancey.kdd20.pdf
helloworld|2 years ago
In my experience, copy is used the same way as code. And just as you probably wouldn't say, "I changed the codes for the three apps," most people wouldn't say, "I rewrote the copies for the three websites."
antonvs|2 years ago
Unless you’re - shudder - a data scientist
ritzaco|2 years ago
Not claiming you did this, but in your example of "Jack tweeted" vs "Jack just tweeted", it's so often bad statistics or cherry-picking that leads to results like this. Testing a null hypothesis is hard, and many people will take a reality like this chart [0] and claim that their A/B test is successful.
I definitely think there's a gap in the market since Google sunset Optimize, but $2,000 seems kind of steep for something that many people used to get for free.
[0] https://i.ritzastatic.com/images/3dcdd822dbd241b1b3ddeeb9540...
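To make the statistics point concrete: a two-proportion z-test is about the minimum sanity check before crediting a copy tweak with a lift. A sketch with made-up numbers (hypothetical open rates, not Twitter's real data):

```python
from math import erf, sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 5.0% vs 5.2% open rate on 100K sends each.
z, p = two_proportion_z(5000, 100_000, 5200, 100_000)
print(f"z={z:.2f}, p={p:.4f}")
```

At this scale even a tiny lift can clear p < 0.05, which is exactly why peeking at results mid-test or cherry-picking the best of many metrics produces "winners" that don't replicate.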
sgslo|2 years ago
Perhaps it would be more effective to put a lower-quality landing page in as your demo. Off the top of my head, something like https://www.intuit.com/ might work. Their existing tag line is "The global financial technology platform that gives you the power to prosper". Doesn't mean much to me - I'm sure your tool could give me some better options, which would serve much better for a demo.
smt88|2 years ago
A good example would be a company that isn't an unavoidable juggernaut in its space and also isn't great at marketing. There are many open-source projects that would work because they don't have a big marketing team, but they might still be well known.
vintagedave|2 years ago
Yes! Look at the alternatives the tool generated:
* Maximize Revenue with Secure Transaction Processing
* Elevate Digital Commerce with Trusted Payments
* Unlock Opportunities with Secure Transactions
* Simplify Your Online Payments
* Accelerate Your Online Business Growth
These are just... bland. Generic. I've seen a thousand webpages with taglines just like these.
AI can be quite bad at generating unique responses. Its answers are middling. This can be great for, e.g., coding, where you want a copilot to generate known algorithms and approaches, but terrible for marketing, where you want a slice of brilliance and genius.
Would turning up the AI temperature and other settings help, maybe?
nehagetschoice|2 years ago
The things that make us stand out:
1) There is no tool, as far as I know, that does inferential learning based on past experiment results on copy. The experiment analysis and the continuous feedback loop are missing.
2) To make (1) happen, the first step is copy versioning and management. Without a tool that can abstract strings in a CRM and monitor learnings over time, it's hard to make (1) possible.
3) We integrate with tier-0 services like notifications with a low-latency solution that makes copy iterations possible for in-product features, beyond logged-out surfaces like web pages.
Would be interested to know if you have seen software that solves for (1), (2) and (3)?
trevoragilbert|2 years ago
How are you handling messaging consistency for specific users? I.e., a user is shown an experimental string of copy with one value prop, and you want it to show up multi-channel for them. Do you have a way to associate the experiments on a per-user level?
bawejakunal|2 years ago
Example: website_landing_page_title and email_subject_line can be part of the same experiment for a multi-channel test.
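One common way to get that per-user consistency (an assumption about how this could work, not necessarily Just Words' mechanism) is to bucket users by a deterministic hash of (experiment_id, user_id), so every channel that resolves the same experiment key lands the same user on the same variant without any shared state:

```python
import hashlib

def assign_variant(user_id, experiment_id, variants):
    """Deterministically map a user to one variant of an experiment.

    The hash depends only on (experiment_id, user_id), so the landing
    page and the email subject line resolve to the same variant index
    for the same user, with no coordination between channels."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["Ship faster", "Build with confidence"]
# Same user, same experiment -> same copy on every channel.
title = assign_variant("user-42", "hero_copy_v1", variants)
subject = assign_variant("user-42", "hero_copy_v1", variants)
assert title == subject
```

Different users still spread roughly evenly across variants, since the hash output is effectively uniform.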
doctorpangloss|2 years ago
So why don't they offer that?
I know it's not because the data is proprietary or private, because basically all the information you need is visible on Facebook Ad Library, more than enough to answer most questions about authoring copy by sheer mimicry.
You emphasize the UX here a lot. I don't know. I think Meta's UX is fine, end of story.
This isn't meant to be a takedown. It just seems intellectually dishonest. Anyone who has operated these optimization systems at scale knows that the generalized stuff you are saying isn't true. You're emphasizing a story about engineers versus product managers that is sort of fictional. The right answer is the one most companies are already taking, which is to not A/B test at all, because it doesn't matter, and when you do see results they are kind of fictional.
And anyway, it belies the biggest issue of all, and this is actually symptomatic of Twitter and why things were so dysfunctional there for so long, long before they were taken private: you are voicing the very Twitter-esque theory that "Every idea has been had, and it's just a matter of testing each one, one by one, and picking the one that performs best." That was the core of their product and engineering theory, and it couldn't be more wrong. I mean, if you don't have any experience or knowledge or opinions, why the hell are you working there?
> However, coming up with the copy, testing it, and optimizing different variants across users would take us forever—sometimes months.
The right answer is right in front of you! Don't do it! Don't optimize variants, it's a colossal waste of time!
altdataseller|2 years ago
HN is a good example of this. Headlines that are too outrageous or catchy don't get upvoted much here, but something simple like "I created a Rust debugging toolkit" will likely get upvoted like crazy, while something like "I got laid off a day after I got pregnant. Here's what happened" will probably get buzz on TikTok instead.
pferrel|2 years ago
Rather we should optimize for user understanding or satisfaction - whichever fits. These are harder to capture but FAR more beneficial to the consumers of content.