A critical part of product managers’ work includes analyzing user feedback and reviews in order to iterate on the product suite. Of course, working with user feedback can be quite complex—you may find yourself faced with differing or even directly contradictory viewpoints, or you may struggle to find the best way to organize and manage reviews.

To learn how successful PMs work with user feedback, we spoke with:

  • Ashley Jefferson, MBA, Founder & Fractional Product Leader, Ashley Grace Consultancy (former VP at JP Morgan, Product Manager at Prudential)
  • Kat Li, COO at Height (former PM of Growth and Shopping at Instagram, Head of Product at Digit) 
  • Tiffany Lo, Product Lead, Tribaja (former Product Manager at EverDriven) 

Read on to learn how each of these PMs worked with user reviews in current and prior positions, how they prioritize user feedback, how to handle conflicting reviews, and more! 

Thanks to Nikki Carter for crafting this piece for GoPractice.

Nikki Carter is a journalist and editor. 

She’s worked with companies like Indeed, Skillshare, and Wistia. Her articles have been published in Business Insider, The Muse, and more.

The role of user reviews in B2C product work

User feedback is important for many reasons. First, user reviews help PMs understand user needs so that the product can be tailored to meet them. Feedback can also reveal bugs and other issues, which can then be addressed quickly before they become major problems.

Ultimately, user reviews are tied to the viability of your product. In one survey, the majority of B2C site users (82%) indicated that reviews influence their decision-making more than marketing and sales claims.

Gathering reviews from your user community and communicating changes to them after they’re implemented is also a great way to strengthen relationships with your audience. When people feel like their voice matters, they’re more likely to remain loyal to your product and recommend it to others. 

We spoke to three expert PMs to learn more about how they work with user reviews to maximize positive impact on the product.


When should you gather user reviews?

“There are five times to get feedback from users.” 

Here are the key times to gather feedback: 

  1. In discovery: to better understand a problem or potential problem space and craft your own insights and opportunities.
  2. After the first concept is drafted: pull together a user interview or survey to get feedback. It is important for this to happen early on, as going too far down the rabbit hole without validation can cause more work down the line.
  3. After the first concept and internal revisions, before any development begins: the best testing is the product in the wild, so we don’t want to overtest. Still, put the concept back into user testing one final time, especially if the feedback in the first round was severe.
  4. Beta: this comes after major development. It lets a small number of users in, and the feedback here is mostly about functionality and UX.
  5. Post launch: based on your success goals, monitor social media channels and all feedback mechanisms across the site or brand of sites (surveys, polls, etc.). Analytics are also very important here: calculate drop-off and understand the common paths through the site to the product.

How have user reviews improved products you’ve worked on? 

“We received the inevitable, ‘Now what?’ question about our tool feature.” 

During my time at JP Morgan Chase, we built a feature that allowed you to measure whether or not you were prepared for emergencies. We did this by allowing users to connect all their cash accounts across institutions and, based on what they indicated as their target emergency fund amount, we would perform a gap analysis. 

Our core assumption in developing this feature was that as part of financial health, it is important for customers to know, regardless of their income level or net worth, how liquid they are when the inevitable unexpected expense (and sometimes more than one) comes along. 

While people appreciated having the insight (“You met your target!” or “You have a $9,000 gap to your goals”), we received the inevitable “now what?” in our feedback about the tool.

We could tell this from internal feedback as well as analytics we were tracking in the experience. Among the population eligible to use the tool, adoption wasn’t anywhere near where we wanted it to be.

To begin to solve this problem, we knew we had to refine a few things in our strategy and get to know our target persona better. The team took a week to tackle the problem (via the Google Ventures Design Sprint process). We looked at past feedback from user research, analytics, industry trends, and interviews from experts. We drafted a revision concept that took into consideration how we would solve for the “now what” and performed live interviews. 

As expected, some items of our concept went over really well, and others did not resonate in the way we thought (example: some of the content we had in the experience resonated differently based on cultural context — something we would not have known without user testing). 

Because the changes needed post-feedback were not drastically different from what was put into user research, and the time to build was relatively low, we proceeded in putting the finishing touches on what was needed to bring it to life!


During your career, have you encountered an effective way of working with user reviews to improve products?

“The best process, and who’s involved, depends on the type of user feedback.” 

This is how it has worked at the big companies I have worked for.

When we have elected to seek user feedback via survey or live/pre-recorded interview:

  1. The PM (sometimes the designer) gets the working group together and expresses a desire to seek user feedback. The working group has included a UX designer, a UX researcher, Product, Tech, and Marketing. 
  2. The PM comes up with the screener criteria, the objectives of the survey or interview, the assumptions to be validated, and the questions that will be asked. 
  3. Feedback is recorded (working team is encouraged to attend live interviews and view raw results from survey/pre-recorded interview), and UX researcher takes the results to synthesize and pick out major themes.
  4. Team meets to walk through the results and talk about next steps.
  5. Sometimes, there is a separate meeting to understand what we will go forward with from the feedback.

When feedback is from Analytics:

  1. PM works with Analytics teams to make sure everyone is on the same page on the data that needs to be tracked, visualized, and accessible via analytics dashboards.
  2. PM connects the analytics and tech teams to make sure technical implementation questions are answered.
  3. Once all technical questions are answered and development is complete, the Analytics team tests to make sure the implementation works in the non-production environment.
  4. The Analytics team then creates a dashboard (most of the time, it was Adobe or Tableau).
  5. PM works with the Analytics team whenever an update needs to be made.
  6. As a PM, depending on what was happening with my product, I would check this report daily (around launch time or important marketing campaigns) or weekly, when we were further from a big launch or campaign.
  7. As a PM reviews the analytics, it is always important to keep product strategy and progress toward desired outcomes in mind, so you know whether any immediate changes need to be made or you can proceed with your roadmap as is.

The two processes above have worked fairly well!

When it comes to feedback sourced from social media, things become more manual. I haven’t thought about it deeply enough to know whether there is a better way to do it, but I am sure there is. 

As a PM, I believe keeping a pulse on social media feedback is so important. I usually go on X, Facebook groups, Reddit, TikTok, and the Apple and Google app stores. 

  1. Look at the feedback from your company’s app specifically. As you are reading through the feedback, note any common themes.
  2. Take screenshots! Screenshots are so important as you share this type of information with other teams and leaders internally.
  3. Assemble it. I make a presentable deck and include screenshots and insights. (I don’t worry about making it beautiful, just presentable enough that if it were sent to a senior leader, it would have enough context.)

How do you prioritize user feedback from most to least urgent?

“There are two ‘buckets’ that warrant the highest sense of urgency.” 

I look for a couple of things (mostly in this order):

  1. Is there a legal/compliance issue that is coming through in the feedback? —> needs to be handled ASAP
  2. Are individuals able to use the app? Are the load times so slow or outages so frequent that the app is becoming unusable? —> needs to be handled ASAP
  3. Is the feedback I am reading supporting the product strategy and the outcomes we are looking to achieve?

If it doesn’t fall within the first two buckets, and you are clear on the strategy and the desired outcomes for your product, it will be easy to decide whether something should be tackled or shelved. 
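The bucket logic above can be sketched as a small triage function. This is a hypothetical illustration under assumed field names, not a description of any real tool: the point is simply that legal/compliance and usability-blocking issues always sort first, strategy-aligned items are scheduled, and everything else is shelved.

```python
from enum import IntEnum

class Priority(IntEnum):
    """Lower value = more urgent."""
    ASAP = 0     # legal/compliance or app-breaking issues
    ROADMAP = 1  # supports current strategy: schedule it
    SHELVE = 2   # everything else: file away for later

def triage(feedback: dict) -> Priority:
    # Bucket 1: legal or compliance issues must be handled ASAP.
    if feedback.get("legal_or_compliance"):
        return Priority.ASAP
    # Bucket 2: the app is effectively unusable (outages, slow loads).
    if feedback.get("blocks_usage"):
        return Priority.ASAP
    # Otherwise: does it support the product strategy and desired outcomes?
    if feedback.get("supports_strategy"):
        return Priority.ROADMAP
    return Priority.SHELVE

# Illustrative usage: most urgent items sort to the front.
reviews = [
    {"text": "Add dark mode", "supports_strategy": False},
    {"text": "App crashes on login", "blocks_usage": True},
]
ordered = sorted(reviews, key=triage)
```

Because `Priority` is an `IntEnum`, it can be used directly as a sort key; ties within a bucket would then be broken by whatever secondary signal (frequency, impact) your team tracks.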

How do you handle contradictory or conflicting user feedback? 

“First, find out if the feedback is coming from your target persona.” 

If conflicting feedback is being received from your target persona (in an interview or through a survey), use that as an opportunity to ask more questions. Try to get as much clarity as possible. It may be that you have to walk a fine line, and the feedback is exposing that. 

If the feedback you are receiving is not from your target persona, it is great to make yourself aware of it and file it away. You never know when that feedback will be helpful (strategy shift, persona expansion).

Overall, I think it’s always challenging to understand whether feedback is a preference or a trend worth paying attention to. I will say that those patterns tend to become clear over time, and you will know.

Should PMs be in charge of communicating changes to end users?

“I see PMs as storytellers.” 

Internally, I think it is important to help all your stakeholders understand what the feedback is, why you feel it is important to prioritize, and how it connects back to strategy, and then to take questions. Externally, someone from the PM team is usually involved in creating release notes for internal and external audiences.

At Chase, there was a whole process where, when users logged in to the web and mobile applications, we made them aware of updates as soon as they opened the app on the new version. It was not always stated that the changes came from their feedback.

Brian Chesky of Airbnb does an amazing job of communicating directly with his customers, and I wish more product teams were intimately connected to their audiences.

Here is an example of when he solicits feedback, and an example of how he announces changes made based on feedback. 


During your career, have you encountered an effective way of working with user reviews to improve products? 

“At Instagram, user research happened in person across the globe.” 

Just like with other disciplines such as sales, it’s typically product folks (CEO, PMs, etc.) who originally own and drive user research before hiring dedicated folks to run point. At pre-product-market-fit companies, this makes a lot of sense; it ensures the people creating product roadmaps are intimately familiar with their customers and building from a place of deep understanding of pain points.

As you grow, dedicated user research teams emerge who are specialized experts at running research and relaying insights back to product teams. Although some of the PM’s deep understanding may be lost in this shift, one understated advantage is that researchers are both fully trained to conduct sound studies and often able to take a more objective perspective, since they are not the ones directly building a feature or responsible for a product’s success.

At Digit, user research reported into product, and we found a lot of success by partnering together to define research focus areas. For instance, user research and product would develop their roadmaps in lock-step, so any research insights would be primed to immediately influence what we were building.

Rather than trying to come up with either solutions or new opportunity areas, our research team stayed attuned to users’ needs and prioritized conveying those in approachable ways to product, engineering, and design.

At Instagram, user research happened across the globe: PMs, engineers, and product designers would travel to be on location and listen live to interviews (with live translation as needed!). This style of research allowed the product team to request immediate clarifications or follow-ups during interviews, and gave us the opportunity to deeply internalize customers’ needs and frustrations because we were there in person, saw their expressions, and heard the emotion in their voices. 

I remember going to New Delhi to talk to teenage girls and understand their unique privacy and safety concerns, which were different from mine not only as an adult, but as a woman who grew up in the US with a totally different context and set of social media needs.

In terms of what works well at B2C companies, normally PMs and then dedicated user researchers are in charge of running the customer research process. User researchers will work directly with the PM when the product roadmap is being set, to create a user research roadmap that dovetails with the product roadmap.

And even after the user research roadmap is set, I like to work closely with the lead user researcher when they begin planning a new study, to make sure research is fully knowledgeable about what our current product plan is and what types of questions or answers would be most useful, now. Researchers don’t want to do busy work either; their goal is to provide insights that will directly influence what we build. 

Fail states I’ve seen are ones where user research is left to its own devices and comes up with a theoretically interesting roadmap of surveys and studies that is not directly linked to anything the product team is already committed to working on, so its insights and reports go nowhere.

How can B2C user feedback best be organized? 

When I first started in tech, we paid people to transcribe interviews. Now we have AI tools that transcribe interviews very accurately, within minutes. This really unlocks a fast feedback loop because it reduces the lag between collecting user feedback and acting on it. There are also now tools available to help sort findings thematically and supplement your own manual effort.

Historically, my personal process has been to listen to user research reports and presentations, but also to read every transcript myself to really familiarize myself with the insights and make sure I have full context on quotes and findings. I love user research: using it to validate beliefs, uncover new potential product opportunities, identify pain points, and run successful betas can absolutely be a superpower when used well by a PM.

To maximize the chances that user feedback affects product building, I’ve found the feedback should be organized into extremely approachable reports or presentations. The more a report showcases direct user quotes rather than paraphrases, the stronger the product team’s emotional response will be. Bundling impactful quotes directly next to graphs and charts showing that the sentiment is widespread is a simple but effective way to convey insights.

Research reports should also separate underlying needs from customers’ suggested solutions. It’s not your users’ job to understand all the product complexity and business needs that go into feature design. By highlighting both customers’ pain points and goals with full context, product teams are freed up to creatively problem-solve and explore multiple ways of addressing those needs.

How do you handle contradictory or conflicting user feedback? 

“Hearing conflicting or contradictory feedback is something PMs are uniquely situated to handle.”

This is when you must make a call based on the product sense you’ve developed over time, and be accountable for the results. The hardest cases are ones in which you or your team are torn on what to do, and the user feedback goes in both directions.

A good PM must develop an informed hypothesis and move forward so as to not cause their team to come to a standstill and waste precious development time. 

Once you’ve decided on your hypothesis, you must then let data validate or invalidate over time (and of course complement the data results with user research to understand the underlying why).

How can B2C user feedback best be organized? 

“My system works by sorting feedback by theme, feature, or user type.” 

The key for me is setting up a system that allows me to really dig into the feedback and prioritize what matters most. User feedback can come from everywhere—surveys, app reviews, support tickets—and it can easily become overwhelming if it’s not organized.

My system works by sorting feedback by theme, feature, or user type; this lets me identify patterns and understand what’s truly resonating with users. It helps me move beyond isolated comments and grasp broader trends to uncover the big picture.

I’ve learned that not all feedback will hold the same weight. So, I analyze the frequency of issues, how much they impact users, and how well they align with product goals. This prioritization ensures I’m tackling the most critical problems that will have the biggest, most positive impact.
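A system like the one described above can be sketched in a few lines. The field names, tags, and the frequency-times-impact scoring are illustrative assumptions, not a description of any PM's actual tooling; the idea is just that tagged feedback can be tallied so the most common, highest-impact themes surface first.

```python
from collections import Counter, defaultdict

# Each feedback item is tagged with a theme, the feature it concerns,
# the user segment it came from, and a rough impact rating (all hypothetical).
feedback = [
    {"theme": "onboarding", "feature": "signup", "user_type": "new", "impact": 3},
    {"theme": "onboarding", "feature": "signup", "user_type": "new", "impact": 2},
    {"theme": "performance", "feature": "feed", "user_type": "power", "impact": 4},
]

def top_themes(items, n=5):
    """Rank themes by frequency x average impact (an assumed heuristic)."""
    counts = Counter(item["theme"] for item in items)
    impacts = defaultdict(list)
    for item in items:
        impacts[item["theme"]].append(item["impact"])
    scores = {
        theme: counts[theme] * (sum(impacts[theme]) / len(impacts[theme]))
        for theme in counts
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

The same tally could just as easily be grouped by `feature` or `user_type` to answer a different question, which is the real value of tagging feedback along several axes.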

What tools and teams are helpful in this process?  

“I’ve found several tools and colleagues invaluable when analyzing user feedback.”

Survey tools like SurveyMonkey have been great for gathering quantitative data to complement user reviews. The data visualization features help me understand trends and user sentiment at a glance.

I often collaborate with the data support team to run complex queries and extract specific data points that might be hiding within existing datasets. 

Customer support is a key partner. Their direct interaction with users allows them to provide context and insights behind the reviews, helping me understand the “why” behind the feedback. 

I know that user researchers play a crucial role by conducting user research studies like interviews and usability testing. This qualitative data validates or clarifies user needs mentioned in the reviews. In some cases, I’ve also worked with UX designers who possess strong user research skills and can contribute in this area.

Having the right tools and team helps me gain a comprehensive understanding of user feedback, ultimately leading to actionable insights for product improvement.

How do you prioritize user feedback from most to least urgent?

“Sifting through user reviews to prioritize the most impactful ones requires a multi-step approach.”

First, I leverage sentiment analysis tools to gauge the emotional tone. This helps identify areas of user frustration or delight, highlighting critical issues.

Next, I categorize reviews by theme, feature, or user type. This surfaces recurring pain points and distinguishes widespread problems from one-off occurrences. It’s crucial to separate the “many” from the “few.”

Prioritization involves a balancing act. I consider the impact of the issue on the user experience and how frequently it’s mentioned. High-impact issues affecting many users become top priorities.

Alignment with product goals is also key. Issues hindering core functionalities or user acquisition take precedence over nice-to-haves.

Actionable feedback suggesting specific improvements is more valuable than general sentiment. I prioritize suggestions that provide a clear path forward.

Finally, I assess the feasibility of implementing the suggestion within our resource and timeline constraints.  While valuable, some improvements might need to wait for future iterations.

By following these steps, I can effectively separate the signal from the noise in user reviews and prioritize the feedback that will have the most significant positive impact on the product and user experience.
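The steps above could be folded into a single scoring heuristic. This is only a sketch: the weights are arbitrary assumptions, and in practice the sentiment value would come from a dedicated sentiment-analysis tool or model rather than being hand-assigned.

```python
def priority_score(review: dict) -> float:
    """Combine the signals described above into one sortable score.
    All weights are illustrative assumptions."""
    # Sentiment is assumed to be in [-1, 1]; only frustration adds urgency.
    frustration = max(0.0, -review["sentiment"])
    score = (
        3.0 * review["impact"]            # effect on the user experience (0-5)
        + 2.0 * review["frequency"]       # how often the issue is mentioned
        + 2.0 * frustration               # negative sentiment signals urgency
        + 1.5 * review["goal_alignment"]  # hinders core functionality/acquisition?
        + 1.0 * review["actionable"]      # suggests a clear path forward?
    )
    # Feasibility acts as a gate rather than a weight: valuable but
    # currently infeasible work waits for a future iteration.
    return score if review["feasible"] else 0.0

# Illustrative usage: a widespread, high-impact complaint outranks
# a rare, low-impact one.
reviews = [
    {"sentiment": -0.8, "impact": 5, "frequency": 40,
     "goal_alignment": 1, "actionable": 1, "feasible": True},
    {"sentiment": 0.3, "impact": 1, "frequency": 2,
     "goal_alignment": 0, "actionable": 0, "feasible": True},
]
ranked = sorted(reviews, key=priority_score, reverse=True)
```

Treating feasibility as a gate rather than another weighted term mirrors the point above: an infeasible suggestion is parked entirely rather than merely discounted.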

How do you handle contradictory or conflicting user feedback? 

“I look for the underlying needs and motivations driving each viewpoint.” 

As a Product Manager and a naturally curious person, understanding the “why” behind the disagreement is what motivates my work.

One thing to consider could be user segments—sometimes, what works for one user group might not be ideal for another. This data, combined with existing user research data or A/B testing, can help validate which approach aligns better with overall user behavior. 

Sometimes, conflicting feedback can point towards a broader issue. When possible, I have analyzed the common ground between opposing views to reveal a better solution that caters to both needs. 

Should PMs be in charge of communicating changes to end users?

“Communicating changes based on user feedback is a crucial part of my job.”

I believe it’s important to maintain transparency and trust with users by informing them about how their feedback has shaped the product. This demonstrates the value I place on their input and continually builds trust.

While not all feedback can be implemented immediately or at all, by clearly communicating about upcoming changes and the rationale behind them, we help to manage user expectations. 

This can further encourage continued engagement as well. I like to show people how their feedback has led to improvements and encourage them to keep providing valuable input in the future.

The way I typically communicate changes based on user feedback is by detailing new features, bug fixes, and improvements in the release notes accompanying product updates.

In past organizations, I played a large part in working with the marketing team on crafting blog posts that outlined significant changes driven by user feedback and explained the thought process behind the implementation.

I have also utilized in-app messages to highlight key changes and how they address user needs.


Learn more

— Qualitative research in product management: the guide

— Analyzing qualitative research results effectively