Product Management

How much data is enough?

It's hard to find a balance between 'quick and dirty' research and more time-consuming in-depth studies. Where do you draw the line to make product decisions?

  • Océane, Product Manager at Enablon

    When I started my first job, I would always wait until I had 100% of the information before making a decision and taking action. Why? Maybe because I am French, but let’s not lean on stereotypes.

    As I evolved, I learned about the 80 / 20 rule, and this completely changed the way I approach things.

    Long story short, in this context the 80 / 20 rule means that 80% of the data can be gathered in 20% of the time, while the remaining 20% of the data will take you 80% of the time.

    Well, this is the theory. In practice, there is no one way of doing things, so you have to find the way that suits you best, and adapt to the situation.

    Here is what I typically do:

    1) Define the stakes.

    • If the stakes are low, I definitely don’t wait to have all the data (but at least 40%)
    • If the stakes are high, I will be more careful to gather significant data (but never more than 80%)

    2) Gather the “low hanging fruit” data.

    Do you feel this is enough data?

    • Yes -> Go go go! Test it, learn, adapt. And adopt the “fail fast” approach.

    • No -> How easily can I get the additional data? How important is that data? Can I start making progress and complete the missing information on the way? Again, what are the stakes of making a decision without the missing data?

    3) Trust your gut, but not blindly.

    This way, I learned to make a decision once I have the critical amount of data I need. It helps me avoid making poor decisions that aren’t based on enough data, and stop losing time by delaying a decision while waiting for additional data to appear by magic.

  • The Team at JAM

    For a healthy gut feeling, invest in good prebiotics. As for a gut feeling in product, it comes with practice. Here is your PM prebiotic regimen.

    Collect the right data.

    Depending on what you’re researching, one type of data might be better than another, so learn what each kind can tell you. Take A/B testing the performance of a landing page: if you have high enough traffic, you can rely on quantitative data like the number of clicks. But for assessing how intuitive a feature is, it’s better to talk to the users.
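    As a sketch of the quantitative side, the landing-page example above can be checked with a standard two-proportion z-test. The traffic and click numbers here are made up for illustration:

```python
import math

def two_proportion_z(clicks_a, visits_a, clicks_b, visits_b):
    """z-statistic comparing two click-through rates, using a pooled rate."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# Variant A: 200 clicks from 4000 visits; variant B: 260 clicks from 4000.
z = two_proportion_z(200, 4000, 260, 4000)
print(abs(z) > 1.96)  # True: significant at the 95% level
```

    The point of the "high enough traffic" caveat is visible here: with small visit counts, the standard error grows and even a real difference in click rates won’t clear the 1.96 threshold.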

    Maintain balance.

    More often than not, you’ll need both qualitative and quantitative data, but be sure you know how they interact. Three out of five people you interviewed might find your pin-to-top feature useless. But if the numbers show 65% of app users pin daily, you know to take your interviewees' opinion with a grain of salt.
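    A quick binomial calculation, using the made-up numbers from the pin-to-top example above, shows why a 3-out-of-5 interview result is weak evidence on its own:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# If 65% of users genuinely value pinning, how likely is it that at most
# 2 of 5 randomly chosen interviewees like the feature?
print(round(binom_cdf(2, 5, 0.65), 3))  # 0.235
```

    A roughly one-in-four chance of seeing that interview split even when the feature is genuinely popular: exactly why the quantitative usage data earns the interviewees' opinion its grain of salt.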

    Set a cut off point.

    Research, like editing or perfecting UX, can be an endless process. Before you start, decide how much time you will devote to research and how much data you will collect. Predetermine the number of customers to talk to, and use a sample-size calculator to ensure statistically significant results. Yes, you might need to refresh your high school stats for that. But hey, this time it’s actually for a better cause than getting a pat on the back from your math teacher.

    Separate collection from analysis.

    These are two different processes. Analysing data mid-collection will introduce bias, for example confirmation bias, where you seek out results that prove what you want to see.

    Initially, err on the side of having too much data rather than too little. And in case you didn’t yet have enough to learn, here is one more thing for your list: investigate how others use data to arrive at their decisions. Read case studies, and talk to other PMs (how about at a JAM afterparty?).

  • Andrea, Freelance UX consultant at Eurostar

    User interviews are great for helping you make a call on what features to test. I often use insights from just one round of interviews as the starting point for a brainstorm, where we get down all our ideas for solving a user problem, then narrow them down into what ideas to test first, live on our site or app.

    For example: At the Guardian, we ran some user interviews to learn about what people who read the news find “relevant”. We learned that “relevance” meant a number of things, from recommendations, to editor’s picks, to the ability to control news alerts you receive, and much more. I summed up what we learned in a simple illustration to help the team keep it front of mind, and we used this as a starting point for a brainstorm on how we could make the Guardian more relevant.

    We narrowed our ideas down to our five favourites, which were rapidly prototyped and shown to users. Of these, three ideas showed promise, so we turned those three into live tests.

    I always try to ensure we test multiple ideas, each with a clear hypothesis and success metric. This helps us make a call - rather than testing just one idea and having to decide whether or not to progress it further, we can choose the best performing idea of the bunch and throw the others away. The fact that we’re keeping our tests lean, without too much code or intense design work, means it’s not a big deal to test a few things at once, and decisively throw away the losers.

    I use this cycle with my product teams often, to ensure we’re taking action on what we’re learning rather than getting bogged down in indecision. User research sessions always result in a decision about what to prototype and test; the prototypes and tests are always as lean as we can make them, so that we can get them out there and make our ultimate decision.

  • Eva, Product Manager at V&A

    For the product team at the Victoria & Albert Museum (V&A), this really depends on what we're testing and why. For anything to do with our programme, we find it easiest to do ‘guerilla’ research and speak to visitors just outside our office. For a project like Search, we recruited a number of users according to our target audience segmentation to see if our search results were organised in an easily understood way. For Collections online, we needed users, regular punters and not just our target audience, to help us validate that our categorisation made sense well before we started any dev work. For understanding and improving UX, we have found Hotjar really helpful. For Search the Collections, we're running a five-question survey to help us identify our users and what they're after from the site.

