The Secret to Scaling a Marketplace: Experiments

By Adrian Githuku

A speedy 3-minute ETA when you’re late for a meeting. Your groceries hand-delivered to your door. A beautiful house overlooking the ocean for a weekend getaway. These amazing experiences have all been made possible thanks to two-sided marketplaces, the wonders of the tech world over the past decade.

I’m lucky enough to have gotten a front-row seat to the marketplace revolution, having spent four years building the Uber Eats business as a General Manager in the Southeast. During my time there, as we moved from a janky operation delivering salads out of coolers to delivering McDonald’s and Starbucks at scale, I noticed that the most successful teams were able to do one thing very well: rapid experimentation.

Supply-side growth is key

Scaling a two-sided marketplace is a unique challenge because each side of the marketplace has to grow in lockstep. However, it’s not obvious which side to focus on first: demand or supply. Lenny Rachitsky, former Growth Lead at Airbnb, estimates that in 80% of marketplaces supply should be the primary focus because supply growth kicks off a flywheel of overall marketplace growth. At Uber, for example, as more drivers (supply) join the system, ETAs go down and customers (demand) have better experiences. This leads to more demand from customers, which attracts more driver supply, and on and on.

However, the work isn’t over after the flywheel begins. The two sides of the marketplace still need to be in balance over time. Even the biggest marketplaces are still laser-focused on supply growth. For example, right now Uber is struggling to retain supply as drivers have been slow to return to the platform as the pandemic ends, prompting the company to offer $250 million in incentives.

In short, the difference between marketplaces that fail and those that succeed is the ability to reliably acquire and retain supply.

Operations teams: the unsung heroes

While the founders, product visionaries, and engineering wizards who helped build the Airbnbs and Lyfts of the world get plenty of accolades, there is a group of unsung heroes who are critical to driving this needed growth of marketplace supply: operations teams.

Operations teams are the secret sauce of the most successful marketplaces. They are on the frontlines, driving supply growth and engagement to ensure that all sides have great experiences. Ops teams have to accomplish this important goal without any control over the product, the digital interface (app or website) where the transactions actually take place.

In my experience at Uber, and after interviewing dozens of marketplace operators, it’s clear that one thing differentiates the best marketplace ops teams from the rest. Similar to their counterparts in product and engineering, the best operations teams run rapid, targeted tests to optimize how they manage supply.

Ops experiments are the key to success, but they’re hard to run

These business experiments are unique because they're testing how a real-world action outside of the product affects user behavior in the product. Some examples: testing which incentives encourage hosts to remove blackout dates on their properties in a home rental marketplace, or which referral rewards attract new couriers in a food delivery marketplace.

Because of this need to connect the behavior of a cohort outside of the product to their behavior in the product, there’s no existing tool ops teams can use to run these experiments on their own. Product experimentation tools like Optimizely are designed to test user interface changes in an app or on a website, making them unhelpful for these tests. Marketing platforms like Iterable have email A/B testing functionality, but you can only measure opens and clicks. You still have no visibility into whether or not you're retaining those users over time.

Without bespoke tools designed for these business experiments, operations teams have to rely on data scientists. These data science teams are extremely busy, which causes the pace of experimentation to be painfully slow.

An example

Let’s walk through an example of how an ops experiment is run.

An operations manager at a dog-walking marketplace is in need of more dog walkers. She wants to test a referral incentive: providing existing dog walkers with $100 for every new dog walker they refer. The ops manager wants to see how this incentive would affect one-month retention of the new dog walkers.

Based on our research and experiences, here’s what it would look like to run this experiment: 

Setup

Setting up these tests correctly can feel like it requires a PhD in statistics. The first step is setting up the right cohorts to test with. This requires a bunch of data work: determining the right sample size, pulling the list of walkers from the database, and randomizing the control and test groups. These tasks require advanced data skills, so the ops manager puts in a request for assistance to the data science team. The data science team takes 1-2 weeks to follow up with her. They have so much on their plate that this dog walker experiment simply isn’t a priority.
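To make the data work concrete, here’s a minimal sketch of the sample-size and randomization step in Python. The retention rates, z-score constants, and walker IDs are illustrative assumptions, not numbers from a real marketplace; a data scientist would typically do this with a stats library rather than by hand.

```python
import math
import random

def sample_size_per_group(p_baseline, p_target, z_alpha=1.96, z_beta=0.84):
    """Minimum walkers per group to detect a lift in one-month retention
    from p_baseline to p_target (two-sided two-proportion z-test).
    z_alpha=1.96 and z_beta=0.84 correspond to 95% confidence / 80% power."""
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_baseline - p_target) ** 2)

def split_cohorts(walker_ids, n_per_group, seed=42):
    """Randomly assign walkers to control and test groups.
    A fixed seed keeps the assignment reproducible."""
    rng = random.Random(seed)
    shuffled = rng.sample(walker_ids, len(walker_ids))
    return shuffled[:n_per_group], shuffled[n_per_group:2 * n_per_group]

# Hypothetical numbers: 40% baseline retention, hoping to lift it to 50%.
n = sample_size_per_group(0.40, 0.50)
control, test = split_cohorts(list(range(1000)), n)
```

The point isn’t that this math is exotic; it’s that an ops manager shouldn’t need to write it (or queue behind a data science team) just to launch a test.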

Execution

This is the most independent step of the workflow. Here, the ops manager sends an email or SMS about the referral incentive to the test group of dog walkers. This communication is sent through a marketing platform like Iterable or Intercom. She then needs to wait at least a month to analyze the results, since she’s interested in one-month retention.

Analysis

After that month, the painful waiting game really begins. Because the results of the experiment require analysis of the dog walkers’ behavior on the platform, the data team once again needs to get involved. On average it takes data teams 2-4 weeks after the experiment is over to get the results back to the ops team.
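The analysis the data team eventually runs is conceptually simple: compare one-month retention between the two cohorts and check whether the difference is statistically significant. Here’s a hedged sketch using a two-proportion z-test with stdlib Python; the retained/total counts are made-up illustrative numbers.

```python
import math

def retention_z_test(retained_control, n_control, retained_test, n_test):
    """Two-proportion z-test on one-month retention counts.
    Returns (lift, z, two-sided p-value)."""
    p_c = retained_control / n_control
    p_t = retained_test / n_test
    # Pooled retention rate under the null hypothesis of no difference
    p_pool = (retained_control + retained_test) / (n_control + n_test)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_test))
    z = (p_t - p_c) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_t - p_c, z, p_value

# Hypothetical counts after the one-month wait:
# 160/400 control walkers retained vs. 200/400 in the test group.
lift, z, p = retention_z_test(160, 400, 200, 400)
```

A ten-line test like this is a rounding error in a data scientist’s skill set; the bottleneck is joining the cohort assignments to in-product behavior and finding time on the team’s roadmap.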

Overall, this operations manager could be waiting 10 weeks from experiment launch to sharing results with her team! This is simply not a fast enough pace of learning for a rapidly scaling marketplace to succeed. 

There must be a better way   

It doesn’t have to be this hard. Operations teams deserve the same high-quality A/B testing tools that product and engineering teams use, designed specifically for their needs. This is a problem our team has experienced in our careers, so we’re passionate about solving it. We’re working on a no-code tool that enables operations teams to run ops experiments on their own, giving them the ability to quickly find the most impactful growth strategies for their marketplace.

We’ll be sharing much more info soon. Feel free to send feedback or thoughts to us at info@sage.link, and click the 'Join Waitlist' button at the top of the page to join our email list.