Hot Tips is a constantly growing, curated collection of candid advice by and for product people.
Think of it as a precious piece of advice you wish you had received when you started building products. It’s a short snippet of wisdom that helps you do things differently.
Contributing a Hot Tip is the fastest way to reach 3,000+ makers from all over Europe. Your daily grind might be their ‘aha moment’!
1. Write your Tip following the guidelines below.👇
2. Submit the Tip through Typeform.
3. Wait patiently! The Tip will undergo some scrutiny by our Hot Tip Catcher, who will then decide whether to publish it (we may tweak the content for clarity).
4. Watch out! Every week we’ll pick the best Hot Tips and share them with the community in the JAM newsletter. Look out for yours! 👀
Your Tip should follow these guidelines.
📖 Be as open as you can: share insider knowledge, something people won’t have come across before. A Hot Tip reveals how you do things.
🎨 Show, don’t (just) tell: talking about your roadmapping process? How about including a screenshot of the tool you use? There’s nothing better than seeing your ‘behind-the-scenes’.
💌 Keep it short and personal: aim for 200 words max, and word it like you’re helping a friend out.
🔧 Share tools: offer readers an opportunity to explore the topic. Link to at least one helpful ebook or article that helped you in the past.
It's hard to find a balance between 'quick and dirty' research and more time-consuming in-depth studies. Where do you draw the line to make product decisions?
When I started my first job, I was always waiting to have 100% of the information before making a decision and taking action. Why? Maybe because I’m French, but let’s not stereotype.
As I evolved, I learned about the 80 / 20 rule, and this completely changed the way I approach things.
Long story short, in this context the 80 / 20 rule means that 80% of the data is gathered in 20% of the time, and the remaining 20% of the data will take you 80% of the time.
Well, this is the theory. In practice, there is no one way of doing things, so you have to find the way that suits you best, and adapt to the situation.
Here is what I typically do:
Do you feel this is enough data?
Yes -> Go go go! Test it, learn, adapt. And adopt the “fail fast” approach.
No -> How easily can I get the additional data? How important is that data? Can I start making progress and complete the missing information on the way? Again, what are the stakes of making a decision without the missing data?
This way, I learned to make a decision once I have the critical amount of data I need. It allows me to avoid making poor decisions based on too little data, and to stop losing time by delaying a decision while waiting for additional data to appear by magic.
For a healthy gut, invest in good prebiotics. A gut feeling in product, however, comes with practice. Here is your PM prebiotic regimen.
Depending on what you’re researching, one type of data might be better than another. Learn what the data is telling you. For example, when A/B testing the performance of a landing page with high enough traffic, you can rely on quantitative data like the number of clicks. But for assessing the intuitiveness of a feature, it’s better to talk to the users.
More often than not you’ll need both qualitative and quantitative data. But be sure you know how they interact. Three out of five people you interviewed might find your pin-to-top feature useless. But if the numbers show that 65% of app users pin daily, you know to take your interviewees’ opinions with a grain of salt.
Research, like editing or perfecting UX, can be an endless process. Before you start, decide how much time you will devote to research and how much data you will collect. Predetermine the number of customers to talk to. Use a calculator to establish the right sample size and ensure statistically significant results. Yes, you might need to refresh your high school stats for that. But hey, this time it’s actually for a better cause than getting a pat on the back from your math teacher.
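If you’d rather skip the online calculator, that sample-size math is a few lines of code. This is a minimal sketch of the standard formula for estimating a proportion; the 95% confidence level, 5% margin of error, and conservative 50% expected proportion are assumptions you should tune to your own study:

```python
import math

def sample_size(margin_of_error=0.05, confidence_z=1.96, proportion=0.5):
    """Minimum number of respondents for estimating a proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2.
    proportion=0.5 is the most conservative choice (largest n).
    confidence_z=1.96 corresponds to 95% confidence.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, 5% margin of error:
print(sample_size())  # → 385 respondents
```

Note this tells you how many survey respondents you need for a quantitative result; for qualitative interviews, much smaller numbers are the norm, since you’re looking for recurring themes rather than statistical significance.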
These are two different processes. Analysing data mid-collection will introduce bias, for example confirmation bias: seeking out results that prove what you want to see.
Initially, err on the side of having too much data rather than too little. And in case you didn’t have enough to learn yet, here is another thing to add to your list: investigate how others use data to arrive at their decisions. Read case studies, and talk to other PMs (how about at the JAM afterparty?).
User interviews are great for helping you make a call on what features to test. I often use insights from just one round of interviews as the starting point for a brainstorm, where we get down all our ideas for solving a user problem, then narrow them down into what ideas to test first, live on our site or app.
For example: At the Guardian, we ran some user interviews to learn about what people who read the news find “relevant”. We learned that “relevance” meant a number of things, from recommendations, to editor’s picks, to the ability to control news alerts you receive, and much more. I summed up what we learned in a simple illustration to help the team keep it front of mind, and we used this as a starting point for a brainstorm on how we could make the Guardian more relevant.
We narrowed our ideas down to our five favourites, which were rapidly prototyped and shown to users. Of these, three ideas showed promise, so we turned those three into live tests.
I always try to ensure we test multiple ideas, each with a clear hypothesis and success metric. This helps us make a call: rather than testing just one idea and having to decide whether or not to progress it further, we can choose the best-performing idea of the bunch and throw the others away. The fact that we’re keeping our tests lean, without too much code or intense design work, means it’s not a big deal to test a few things at once, and decisively throw away the losers.
I use this cycle with my product teams often, to ensure we’re taking action on what we’re learning rather than getting bogged down in indecision. User research sessions always result in a decision about what to prototype and test; the prototypes and tests are always as lean as we can make them, so that we can get them out there and make our ultimate decision.
For the product team at the Victoria & Albert Museum (V&A), this really depends on what we're testing, and what for exactly. For anything to do with our programme, we find it easiest to do ‘guerilla’ research and speak to visitors just outside our office. For a project like Search, we recruited a number of users according to our target audience segmentation to see if our search results had been organised in an easily understood way. For Collections online, we needed users to help us validate that our categorisation made sense to a regular punter, not just our target audience, well before we started any dev work. For understanding and improving UX, we have found Hotjar really helpful. For Search the Collections, we're running a 5-question survey to help us identify our users and what they're after from the site.