What 78,000 websites reveal about CRO.
Sahil Patel from Spiralyze shares what analyzing 78,000 websites taught him about what actually drives conversion rate improvements - and what most teams get completely wrong. Data-driven, direct, and full of surprises.
11% industry average. 34% with proven winners.
The average B2B A/B test wins 1 in 10 times - roughly one win per year for the average company. By running only tests that have already proved themselves on similar companies, Spiralyze achieves a 34% win rate, three times the industry baseline.
Stock photos cost you 19% of your conversions
Happy-person stock photos consistently underperform real product screenshots. The one-second test - translate your homepage into a foreign language and ask strangers what you sell - exposes the problem in minutes without any tooling.
Mirror the ad hook at the top of the page
When the promise in your ad doesn't match what visitors see on landing, conversions drop. The hook from your ad should appear verbatim at the top of the landing page. A 'cat food ad, dog food landing page' mismatch is one of the most common and costly mistakes in B2B.
Fix fundamentals before personalizing
Personalization adds 7-9% lift - but only after low-hanging fruit is captured. Most B2B sites have unaddressed basics worth far more per dollar. Going straight to personalization is doing things out of order.
'One change at a time' is often wrong advice
In B2B with limited traffic, testing coherent multi-change variants produces faster, more business-relevant results than waiting months for statistically pure single-variable tests. What matters is whether the business objective improved.
Kill the test brief, run the test
Spending days getting legal, brand, and executive sign-off on every test proposal kills programs. A one-hour brief and a fast kill decision are more valuable than a week of meetings about a test that may not even run.
Hello, welcome to another episode of Web Unpacked. Today I'm joined by Sahil Patel, CEO of Spiralyze, a CRO agency that does something I haven't really seen anybody else doing at this scale. They've built a database by scraping over 78,000 websites to track what companies are actually testing, what they're shipping, and what they're actually keeping. I've had a chance to work with Sahil and his team directly in the past, so I've seen their approach up close and it is different - instead of starting from guesses or best practices, they start from data about what's actually been tested across thousands of sites. Sahil, welcome. Give us a quick intro about yourself and Spiralyze.
Well, thank you first of all for having me, Margus. I think of us as two old friends getting to hang out. For everyone listening, I'm Sahil, I'm the CEO of Spiralyze. We help companies that spend lots of money to get traffic to come to their website - and that's really hard. It's getting harder. Zero-click search results are cannibalizing organic traffic. Cost per click is going up if you're doing paid search. Once the traffic gets to your website, getting it to convert is even harder. Whatever convert means for you - you might be selling shoes, selling software, doing lead generation. All of those are conversions. So it's hard to get the traffic there, and it's even harder to get it to convert. That last part is where we help companies. And we do it by crawling the internet and borrowing everyone else's A/B tests. Our customers get to run the best ones.
Pretty clever. But what makes Spiralyze different from other agencies? From your homepage, you're basically giving a conversion promise - if somebody starts working with Spiralyze, you're promising a conversion improvement. That's not something I see other agencies doing. What's the take there?
Well, I think the first thing is the fact that we have this prediction engine. And it's not just that we crawl the internet and find other people's A/B tests - that is the starting point. It's that the algorithm is able to predict which tests are most likely to win by finding similar companies. If you're a CRM software company - like where you work - do you really care what A/B tests Netflix has run? Probably not. On the other hand, if you knew which A/B tests the ten other CRM companies in your niche have run, that's a good starting point. It's not guaranteed to work for you, but you're betting on a portfolio of tests rather than the outcome of any single test. And if you run that portfolio, it turns out you can predict which tests are going to produce lift.
Can you share what the success rates of that prediction model look like?
Yeah, let's put some context. Industry average - and this is not our data, it's from multiple sources including Optimizely - in 11% of tests the variant beats the control. So roughly 1 in 10 tests produces a win. That means if you run 10 tests, you get one win. By the way, the average company runs about one test a month. So you work one year and you get one win. Who is excited about that? Who wants to go back to their boss and say "we got one win this year"? The second thing is that most websites have low-hanging fruit because they've never optimized before. So what often happens is the first few months, you get a bunch of wins. Everyone is really excited. Then it gets a lot harder. Then that 11% win rate kicks in. The CRO program goes from something in the board deck to very quietly fading, because who wants to say "we ran some tests, they all lost"? Spiralyze tests - running a portfolio of proven winners based on what has worked for companies like you - win 34% of the time. One out of three. Three times higher. Nothing is guaranteed to work. But you do yourself so many more favors by skipping everyone else's losing tests. Let some other poor person run the losers. Your starting point should be the winners.
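To make the arithmetic concrete, here's a minimal sketch of the math behind those two win rates. The 11% and 34% figures and the one-test-a-month cadence come straight from the episode; the code itself is illustrative.

```python
# Expected wins per year, and the odds of at least one win,
# at the episode's cadence of one test per month.
def yearly_outlook(win_rate: float, tests_per_year: int = 12) -> tuple[float, float]:
    expected_wins = win_rate * tests_per_year
    p_at_least_one_win = 1 - (1 - win_rate) ** tests_per_year
    return expected_wins, p_at_least_one_win

for label, rate in [("industry average", 0.11), ("proven winners", 0.34)]:
    wins, p_any = yearly_outlook(rate)
    print(f"{label}: ~{wins:.1f} wins/year, {p_any:.0%} chance of at least one win")
```

At 11%, a year of monthly testing yields about 1.3 expected wins; at 34%, about 4.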
Yeah, that makes selling the CRO program in your company so much easier if you can bring in more wins.
Now, you don't need to go to Spiralyze to find out the winning tests. There are blogs. If anyone here subscribes to Tom Orbach - amazing person who has done a ton of CRO, subscribe to his newsletters. He regularly talks about tests he's run and which ones work. So you really don't need Spiralyze to do that. But if you want to run a CRO program, having an endless supply of data really helps. When you have that data, you can predict the lift. When you predict the lift, you can guarantee the lift. We have performance pricing - you pay based on the lift. The idea of performance pricing is not unique, we didn't come up with it. But in the context of CRO it's very different. On one end, you have legacy CRO agencies - great ones, pure service businesses. On the other end, you have really good self-serve tools - VWO, Optimizely, AB Tasty. They're great, but they won't tell you which test to run and they won't guarantee their fees. The third choice we're able to offer is the best of both worlds. We have a software product called SpiroMetrics - that's the prediction engine. We have a service component that looks and feels like an agency. And that's what allows us to say "only pay if you get the lift."
What do you feel companies are still getting wrong? What patterns keep coming up?
There's a group of companies that regularly run A/B tests. For them, the risk is very low - they test their way out of whatever's wrong. But most B2B companies do not test. Maybe 20% of B2B SaaS companies are regularly doing A/B tests. And even those that do test, they run about one test a month. So I'll give you three CRO crimes I see. One: using stock photos on the homepage. Two: poor skimmability. Three: using video in the wrong place. On stock photos - I don't know why in 2026 companies are still using stock photos instead of showing the product. The data consistently shows happy-person photos cost you conversions - about 19% of them. We know this because we've tested showing a happy-person stock photo against just showing a simple static image of the product. On average, 19% higher conversion rate when you show the product. That matters especially in the hero section, where you have one chance - if people don't scroll down, that's it - and you're not even showing the product.
Really interesting. And yeah, we've also tested video and it can be quite bad for conversion if it's in the wrong place.
Yes, in the wrong place. But video can work really well in the right place. In the wrong place, it cannibalizes conversions. Let me share something everyone at home can do. It's called the one-second test. Take your website, take the homepage, right-click in Chrome, choose Google Translate, and translate everything on your homepage into another language. Preferably a language no one around you speaks. Then go to five people who know you but don't know you well - not someone from the company, just an acquaintance - and show them the homepage and say "what do you think this company does?" By removing the copywriting from the equation, you force them to rely only on the imagery. The reason it's called the one-second test is because your brain processes images in about one second, much faster than it processes words. That is what your audience does when they get to your homepage. They're quickly scanning: what is this? What do they do? Is this relevant to me? Their brain is making snap judgments. So if you rely too heavily on the copywriting, you're going to fail that test. If you have that stock photo of someone sitting in a conference room, no one's going to get it right. "I don't know, what do they sell? Sweaters? Conference room tables?" If you show the product, they'll say "it looks like software" or "some kind of business tool." That's enough. I'm in the right place. You go to a restaurant - when you walk in, your brain makes snap judgments about whether it's fancy or fast food. You don't have to look at the menu. The white tablecloths and the maitre d', or the plastic chairs and dollar menu, tell you everything before you read a word. That's what you're looking to deliver on your homepage.
Yeah, missed opportunity. Have you seen patterns that constantly win? Any universal truths?
Showing the product is as close to a universal truth as I've seen. There are a lot of different ways to execute it. We ran this test for a company that does air conditioning repair. In the original, they showed a mom with her two kids. In the variant, they showed three people from the air conditioning company in uniforms standing in front of their van with the company logo. That is the product. Why do you call an air conditioning repairman? Because your air conditioner is broken and they're sending an expert to fix it. The variant beat the control because it showed the product. So yes, showing the product has a high likelihood of getting you more conversions. The way you execute it is very specific to the circumstance. I've seen some companies where an animated GIF performs really well. I've seen some where it does not. Nothing wins all the time.
That brings me to personalization. How do you see personalization working out? This seems like a good candidate for personalization rather than A/B testing.
Maybe, yeah. The broad promise of personalization is: you show your audience something that is most relevant to them. The idea has roots in direct-to-consumer, where someone walks into a store and you say "what can I help you find today?" - "I'm looking for a jacket." - "Let's go look at the jackets." That's the central premise. In that sense, if you have the ability to take the headline from your ad - say it's "get live in a week" - and put that in the headline of the landing page, I think that could work really well. Where I think personalization has been oversold is the idea that you should personalize every part of your landing page for every persona. You add complexity, everything gets harder. In B2B, if you have five personas and you personalize across all of them, you get really small segments that are hard to measure. Let me be clear - I'm not saying don't do personalization. Here's what I found: most websites in B2B SaaS have low-hanging fruit you can solve without personalization. Once you've exhausted that, personalization can usually get you another 7-9% lift. But if you go there first, you're doing things out of order. If you're out of shape and you want to get in shape, the first things your personal trainer has you work on are running, pushups, and diet basics. Then you start fine-tuning. But you can't do five pushups yet and you're talking about specific muscle group optimization. You're not starting in the right place.
Let's go to the future. Where do you think CRO as an industry is heading with AI? Is it mostly hype, or do you already find it useful?
AI is here for sure. And it's already had an impact on CRO. The dream is AI will optimize your landing pages, make decisions about which test to run, maybe even simulate the tests using synthetic data and give you a winner. There are some tools getting close to that and I think they're pretty cool. But the broad market is not there yet, at least not in B2B. Because in CRO, one part of the equation is "which test should I run?" Another part is "let me interpret the data and pick a winner." Those two components represent a fraction of CRO. The rest is: how do you convince your legal team to approve the copywriting? How do you sell your CEO on a message test when the board just told them to do the opposite? How do you tell your brand team that wants video on every landing page that this is probably going to cannibalize conversions? AI can help with some of that because it can provide useful data. But I haven't seen an AI that can persuade your CEO that what the board told him was wrong. I could be biased by the industry I grew up in. Maybe I just can't see it. I think the right thing is to continually test ideas. The risk of picking the wrong winner is low if you test continuously, because the next test will beat it.
Before we go to wrap-up questions, I have one section called "things nobody says out loud." What would you say, Sahil - something that everybody thinks in your industry but nobody really dares to say?
I don't know about "nobody says them out loud" because I say them out loud. But here's one: you should only test one change on a webpage at a time. What a load of rubbish. If you do that, you're going to be here until the end of time trying to find a winning test, because the lift from any single small change is usually below your minimum detectable effect. If you're booking.com or LendingTree with millions of visits, and you want to test the corner radius of your CTA button - maybe you can do that. But in B2B, you can't. I hear all the time "we don't have the traffic to test." Usually it's because you're running meek tests - trivial things that don't move the needle. This whole idea of "I'm going to change the color of the button and be exactly sure it worked" does more harm to CRO than it helps. Let's say you rearrange your hero section so the form is above the fold, which allows you to bring social proof above the fold, which makes your headline more credible. All of those things work together. Yes, you changed three things. How do you know which of those three did it? I'll be the first to say you can estimate - you don't know definitively. But they all work together. The test has to be coherent. If you're just changing random stuff, I'd agree it doesn't work. But if it works, isn't that what you care about? And let's say you're wrong - the next test will beat it because it wasn't a true winner. When you continually test, you lower the risk of calling the wrong winner.
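To see why tiny single-variable tests stall in low-traffic B2B, here's a minimal sketch of the standard two-proportion sample-size formula. The 3% baseline conversion rate and the lift values are illustrative assumptions, not figures from the episode.

```python
from statistics import NormalDist

def visitors_per_arm(base_rate: float, lift: float,
                     alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1, p2 = base_rate, base_rate * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# A 3% baseline, e.g. a demo-request form:
for lift in (0.02, 0.10, 0.30):
    print(f"{lift:.0%} relative lift: ~{visitors_per_arm(0.03, lift):,} visitors per arm")
```

At a 3% baseline, detecting a 2% relative lift (a button tweak) takes on the order of a million visitors per arm, while a 30% lift (a coherent hero redesign) needs only a few thousand - which is the traffic budget most B2B sites actually have.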
A couple of wrap-up questions. What one thing would you suggest to fix on somebody's website tomorrow?
The one-second test - I think that's a good starting point. Number two is skimmability. In B2B, for some reason we love to talk about ourselves and write about ourselves. Long walls of text. Sometimes I think, are we paying our product marketers by the word? Charles Dickens got paid by the word. His stories are really, really long. But you're not writing to impress your 12th grade English teacher. You're writing for a very specific skill: short-form copywriting on a small screen.
Something you changed your mind about after seeing data - something where you were wrong and data proved it.
One belief was that the page that looks the best should win. I've seen some ugly pages that convert incredibly well. And things that look amazing that don't convert. Number two - I have changed my mind about the role that brand and brand positioning plays on the homepage. A year ago, I was much more "give me a quantitative headline that delivers a specific benefit-oriented claim, cut all the other stuff out." I have changed my tune a bit. On things like the homepage - not paid landing pages or squeeze pages - sometimes part of what you're trying to do is tell a story. You might be trying to move the company from here to there. You might be trying to explain how you're different. And I think the promise of the brand has a bigger role to play than I used to think. And by the way, that can't always be captured by A/B tests.
Last question. What should teams stop wasting time on when it comes to experimentation?
Stop spending hours writing a brief to convince everyone to run the test. The time you spend on that, just run the test. Often in two or three days, if the test isn't winning, you kill it and move to the next one. Treating every proposed test as if you were writing a doctoral dissertation is a giant waste of time. A brief should take an hour, and then you run the test. But I have seen organizations that spend days or weeks just on the brief, before they even build the test. Then they circulate it, product wants to weigh in, brand wants to weigh in, legal wants to weigh in. And you're going to committee the thing to death.
Yeah, that's a good way to end the episode because I agree 100%. That's how we try to run it inside my team too.
And I would couple that with the flip side: the fallacy of asking "what did we learn from this test?" when it didn't win. Most of the time, what you learned is that the test didn't beat the control. Move on. If you do a specific vein of testing and certain types consistently win more than others, okay, you've learned something. But most of the time, the "learning" from one test is that your audience found this option more or less favorable during this time period. That's it. Run the next test.
Thank you very much, Sahil. That was a pleasure. Love the insights, love the snippets you shared. If you enjoyed the episode, make sure to follow Sahil on LinkedIn. Thank you very much. Until next time.
Thank you for having me, Margus. See you next time.
How does Spiralyze's prediction engine actually work?
Spiralyze crawls 78,000+ websites to identify A/B tests that companies are actively running and - crucially - keeping (which signals the variant won). The algorithm then finds similar companies to the client and recommends tests that have already proven to work in that niche. Instead of starting from guesses or generic best practices, clients run a portfolio of proven winners.
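The episode doesn't describe the algorithm's internals, so the sketch below is only one plausible reading of "find similar companies, surface their proven tests": a similarity-weighted win-rate ranking. The data model, the tags, and the Jaccard scoring are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ObservedTest:
    pattern: str            # e.g. "product screenshot in hero"
    company_tags: set[str]  # e.g. {"b2b", "saas", "crm"}
    kept: bool              # variant shipped and retained => treated as a win

def recommend(tests: list[ObservedTest], client_tags: set[str], top_n: int = 3):
    """Rank test patterns by similarity-weighted win rate (hypothetical scoring)."""
    wins: dict[str, float] = {}
    runs: dict[str, float] = {}
    for t in tests:
        # Jaccard similarity between the client and the company that ran the test.
        sim = len(t.company_tags & client_tags) / len(t.company_tags | client_tags)
        runs[t.pattern] = runs.get(t.pattern, 0.0) + sim
        if t.kept:
            wins[t.pattern] = wins.get(t.pattern, 0.0) + sim
    scores = {p: wins.get(p, 0.0) / total for p, total in runs.items() if total > 0}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```

Under this scoring, a Netflix test sharing no tags with a CRM client contributes nothing to that client's ranking - which matches the "do you really care what A/B tests Netflix has run?" point from the interview.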
Why do stock photos hurt B2B conversions?
When visitors first land on your page, their brain processes visuals before copy - in roughly one second. A stock photo of a person using a laptop in a conference room tells visitors nothing about what you actually offer. Showing the product gives an immediate visual signal. In Spiralyze's data, showing the product vs. showing stock photos produces an average 19% higher conversion rate.
How do you run the one-second test on your own website?
Right-click your homepage in Chrome, choose Google Translate, and translate everything into a language no one around you can read fluently. Then show the page to five acquaintances and ask 'what do you think this company does?' If they can't answer from the imagery alone, you're relying too heavily on copy - which visitors won't read on their first visit.
When should B2B companies invest in website personalization?
After fixing the fundamentals. Most B2B SaaS sites have high-impact, low-effort fixes - showing the product, improving skimmability, matching ad hooks to landing pages - that are worth more per dollar than personalization. Once those are in place, personalization can add another 7-9% lift, typically by mirroring ad copy in page headlines and showing relevant case studies per segment.
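One way to implement the ad-hook mirroring mentioned above is dynamic text replacement keyed off a URL parameter. This is a minimal sketch, not a prescribed implementation: the parameter name, the hook mapping, and the headlines are hypothetical (the "get live in a week" hook is borrowed from the interview).

```python
from urllib.parse import urlparse, parse_qs

DEFAULT_HEADLINE = "CRM software for growing teams"

# Hypothetical mapping from a utm_content value to the ad's hook.
AD_HOOKS = {
    "go-live-week": "Get live in a week",
    "no-setup-fee": "Start free, no setup fee",
}

def headline_for(url: str) -> str:
    """Mirror the ad's hook at the top of the landing page when we know it."""
    utm_content = parse_qs(urlparse(url).query).get("utm_content", [""])[0]
    return AD_HOOKS.get(utm_content, DEFAULT_HEADLINE)

assert headline_for("https://example.com/lp?utm_content=go-live-week") == "Get live in a week"
assert headline_for("https://example.com/lp") == DEFAULT_HEADLINE
```

Falling back to the default headline keeps organic visitors, who arrive with no campaign parameters, on the unpersonalized page.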
What should CRO teams stop wasting time on?
Over-investing in test briefs. The process of getting legal, brand, product, and executive sign-off on every proposed test is one of the biggest time sinks in CRO programs. If a test is reasonable and won't damage the brand, spend an hour writing it up and run it. The data from the test is worth more than any pre-test debate.
Sahil is the CEO of Spiralyze, an A/B testing agency whose prediction engine draws on experiments tracked across 78,000+ websites. One of the most data-driven voices in the CRO space.