Show HN: I built a chatbot to converse with 3M SaaS product reviews

reviewradar.ai

90 points by andreict 2 days ago

Hey HN,

I'm Andrei, and together with my co-founder Roman I've been working on a tool called Reviewradar. In our work with startups, doing user interviews and sifting through endless software reviews proved to be daunting tasks. So we built a chatbot that lets you chat with over 3 million reviews covering more than 100K SaaS products.

With Reviewradar, you can ask questions like:

- create a comprehensive SWOT analysis for both Notion and Obsidian

- give me negative feedback and complaints you have about Postmark

- summarise the reviews you have on products in the OCR category

I would love to get your feedback on it. Check it out here: https://reviewradar.ai

Looking forward to your thoughts / suggestions,

Andrei and Roman

altdataseller a day ago

The problem is I don't trust all those SaaS reviews on sites like G2, TrustRadius, Gartner, etc. A lot of them are paid and incentivized. Heck, I personally wrote a few nonsense ones just to get free gift cards. And I know some companies give out free credits or discounts if you write a review for them.

Even a lot of "free" reviews on Reddit are secretly posted by the company, or someone who got paid by the company.

[UPDATE] Just tried the tool. 7 of the 8 bullet points your tool gave me were also given to me by Claude. I think both are using similar data sources, so I fail to see what differentiates this tool.

  • romanleeb 17 hours ago

    Hi, this is Roman, the co-founder Andrei mentioned in the post.

    I agree with you: a good chunk of these reviews come about through incentives, and not all of them are organic. Still, many are genuine, and even the ones from people who received some sort of credit often contain very interesting detail. But I take your point; it would be better if there were no carrot urging people to leave a positive review.

    Also, we don't take reviews at face value: we instruct the model to distill the essence of the feedback and formulate a response based on patterns that recur across multiple reviews.
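
    To give a rough idea, the instruction is along these lines (a paraphrased sketch, not our exact production prompt):

      DISTILL_INSTRUCTION = """
      You will receive excerpts from user reviews as context.
      Do not take any single review at face value. Instead:
      - identify claims that recur across multiple independent reviews,
      - discount one-off praise or complaints with no corroboration,
      - answer based on the recurring patterns, noting how widespread each one is.
      """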

    Curious what you got as a response from Claude - can you share the prompts with me?

  • guru4consulting 15 hours ago

    I tend to ignore positive reviews in general, as they can be bought. Negative reviews are more reliable for identifying common patterns of problems with a particular product. I guess this tool will be helpful in that case.

CuriouslyC 2 days ago

Your landing page is doing the heavy lifting here. Nice design.

You should do side-by-side comparisons of output from GPT-4o/Claude and your product to sell me on why I should care about the RAG you added and your tuned prompts.

If you're giving away inference you should probably tell people how you're intending to make money from them. Free services that use expensive APIs are a big red flag for me.

  • romanleeb 17 hours ago

    This is a very good idea, thanks a lot for this!

    Yes, we do intend to monetize this at some point. For now we're offering access as a free beta, first to validate whether it's useful, and second to learn what kinds of prompts the model is expected to answer (and then improve it based on that knowledge). But side-by-side comparisons on the LP itself, as you proposed, is a fantastic idea.

yawnxyz 2 days ago

Wow, incredible. Could you give a small glimpse into how you collected/normalized/vectorized all the different reviews + information? Just lots of scraping + chucking it into Pinecone, etc.?

> Me: Give a list of the new and old gene sequencing tools

> Reply: Based on the available reviews, I’ve compiled lists of new and old sequencing products mentioned by users: Genemod, Mixmax (email sequencing...?), HubSpot Sales Hub (????)

Are you guys using pure RAG for your responses? I think you should have a function caller traverse a tree of topics before retrieving data from the right area/space, or you'll have lots of "contamination" from similar phrases across fields.

  • romanleeb 17 hours ago

    Currently we do not have a function caller that checks the topic of the prompt and directs the query accordingly - this is very good input, thank you! We did some work to add relevant details to the review chunks, e.g. the category of the tool in question, but we know we have work to do here to make this even smoother. Kudos for bringing this up; we'll 100% look into what you suggested.
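
    For reference, the kind of routing you describe might look roughly like this in Python; the category names and model are placeholders, so treat it as a sketch of the suggestion rather than our actual code:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set
      CATEGORIES = ["OCR", "Email", "CRM", "Lab & Sequencing"]

      def route_query(question: str) -> str:
          # Have the model pick a category before touching the vector DB,
          # so "sequencing" queries never land on email-sequencing tools.
          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system",
                   "content": "Classify the question into exactly one of "
                              f"{CATEGORIES}. Reply with the category only."},
                  {"role": "user", "content": question},
              ],
          )
          return resp.choices[0].message.content.strip()

      # The returned category then restricts retrieval to matching chunks.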

thih9 a day ago

How is it different from asking ChatGPT (or another popular LLM)?

  • romanleeb 17 hours ago

    The difference is that we inject software-review text chunks into the conversation as hidden context for the LLM to answer the query. Based on your input, we run a cosine-similarity search in the vector DB, retrieve the most relevant chunks, and have the model formulate a response grounded in them.
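
    In simplified Python the flow looks roughly like this; the toy in-memory index stands in for the real vector DB, and the model names and review snippets are purely illustrative:

      import numpy as np
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set

      def embed(text: str) -> np.ndarray:
          resp = client.embeddings.create(model="text-embedding-3-small", input=text)
          return np.array(resp.data[0].embedding)

      # Toy "vector DB": (review text, embedding) pairs.
      reviews = [
          "Postmark's deliverability has been rock solid for us.",
          "Support took days to answer a simple billing question.",
      ]
      index = [(r, embed(r)) for r in reviews]

      def retrieve(question: str, k: int = 2) -> list[str]:
          q = embed(question)
          def cosine(v: np.ndarray) -> float:
              return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
          ranked = sorted(index, key=lambda pair: -cosine(pair[1]))
          return [text for text, _ in ranked[:k]]

      question = "What do users complain about?"
      context = "\n".join(retrieve(question))
      answer = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "Answer using only these review excerpts:\n" + context},
              {"role": "user", "content": question},
          ],
      ).choices[0].message.content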

    Maybe we need to make this even clearer on the LP, like the earlier comment suggested: show responses side-by-side from e.g. plain vanilla OpenAI/Claude and Reviewradar.

tcdent 2 days ago

Cool idea, but it runs into one well-known fact:

Customers don't know what they want.

  • falcor84 a day ago

    That's somewhat true, but trite, and it doesn't mean you shouldn't listen to customers. It's up to you as a product manager to analyze and synthesize people's feedback, integrating it with your other knowledge and constraints, to arrive at product plans that are as well-founded as possible.

    If it works well, I think this tool could really help accelerate the process, especially if you can get it to give you its sources, like Perplexity does.

    • romanleeb 10 hours ago

      That was exactly why we built this: to facilitate gathering this intel. Will it be a panacea for research? Of course not. But it can speed things up a bit and adds another (probably faster) research methodology to the mix.

      If you're interested, we wrote a longer article about how this idea came to be: https://medium.com/@roman_leeb/how-to-figure-out-what-users-...

  • inSenCite a day ago

    This statement is a Russian doll, in that it potentially includes the customers using this product ;)

  • dc_rog a day ago

    Maybe not, but they’ll tell you what they don’t like about an incumbent and that’s sometimes all you need to enter a market.

    • romanleeb 11 hours ago

      I agree, there are a lot of opportunities to be found by looking at the subset of frustrated customers. Ironically, they tell you in far greater detail what they dislike than the super-happy customers tell you why they like something.

  • louthy a day ago

    This is why you listen to their problems and not their solutions

    • romanleeb 10 hours ago

      Precisely: problems, complaints, and negative comments reveal a lot of interesting opportunities.

      For this very reason, we instructed the model to pay a bit more attention to expressions with negative sentiment.
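
      Beyond the prompt, one could also bias the retrieval step itself. A hypothetical sketch (not necessarily what we ship; TextBlob is just an example scorer):

        from textblob import TextBlob  # simple polarity scorer, range [-1, 1]

        def rerank(chunks_with_scores, negativity_boost=0.2):
            # Nudge negative chunks up the ranking so complaints surface
            # ahead of equally similar praise.
            def boosted(chunk, sim):
                polarity = TextBlob(chunk).sentiment.polarity
                return sim + negativity_boost * max(0.0, -polarity)
            return sorted(chunks_with_scores, key=lambda cs: -boosted(*cs))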

code51 a day ago

The landing page is fine, but is the product actually working at production performance? I have strong doubts after running your own demo query ("Summarize the reviews you have on products in the OCR category.") in the system.

If the idea is just a demo for collecting leads, then you could at least have cached your demo queries.

Everybody's too landing-page focused these days.

  • romanleeb 17 hours ago

    Hm, can you elaborate on this? I do not fully understand. Was there no response for "Summarize the reviews you have on products in the OCR category"?

    The demo is not "cached" or anything; this is production.

    Would be cool if you could give us more info on this.

marmaduke 2 days ago

Very cool. Can you describe any fine tuning or RAG you did? Did you have to go through a bootstrapping process?

Totally get it if you don’t want to share details.

  • romanleeb 10 hours ago

    Currently we have quite a basic RAG setup, no fine-tuning.

    We wanted to get this out fast, so we decided to use the Assistants API. The RAG pipeline definitely needs polishing (e.g. more control over the exact search queries), but for the MVP it was convenient since everything comes out of the box. Metadata filtering, for instance, would also be better with Pinecone.
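
    With Pinecone, for example, a query can be restricted by chunk metadata like this (illustrative index and field names, placeholder credentials):

      from pinecone import Pinecone

      pc = Pinecone(api_key="...")      # placeholder credentials
      index = pc.Index("saas-reviews")  # illustrative index name

      query_embedding = [0.0] * 1536    # placeholder; embed the user's question for real use
      results = index.query(
          vector=query_embedding,
          top_k=8,
          filter={"category": {"$eq": "OCR"}},  # only chunks tagged OCR
          include_metadata=True,
      )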

fakedang 2 days ago

Well this product is completely useless if it's trained on G2 and Capterra reviews.

  • ta988 2 days ago

    Why is that? Are they mostly fake, or paid for?

    • nhggfu a day ago

      their biz is affiliate marketing. so yea, paid for, effectively.

piterrro a day ago

Would you mind sharing the source of the reviews? How would your users know whether your product can be trusted?

superasn a day ago

Here is some feedback: your landing page needs samples. After spending 30s, I have some idea of what you're trying to do, but real-life samples of input and output would show me exactly what to expect, how good the quality is, and ultimately whether it's worth my time to sign up.

  • romanleeb 10 hours ago

    Very good input, thanks! We'll add some examples to the page.

fkilaiwi 2 days ago

neat! what is your data source?

nextworddev a day ago

I have some MBA friends who wanted to do an "AI-powered G2 or Gartner" startup, which makes me feel like this is a tarpit idea.

  • thisisit a day ago

    I believe it is a good idea, especially because there is a huge CYA industry out there and Gartner owns a huge chunk of it. But it is very difficult to implement correctly.

    For example, this one has a serious issue: it doesn't consider that things might have changed. A review might complain about a feature missing from a particular product when that feature has since been fully implemented.

    I asked it for a side-by-side comparison of two enterprise products. It told me that a particular feature was missing from one product. I googled and found the referenced review, from June 2022; the feature in question was implemented in 2023. My first experience exploring Gartner is coming next week, so I am curious how they handle such issues.

abraxas a day ago

Good stuff, it gave me a very fair review of the SaaS I tested it on and suggested good alternatives. This is useful.

hoomank3 a day ago

Looks like a useful product. Does it support Facebook and Google My Business reviews?

lijianmin a day ago

How do I participate in the trial?

bschmidt1 2 days ago

This one seems like a blatant GPT wrapper. Why not just use 4o?

For example, under "Ask any question", the use case shown is:

> "What specific python scripts are users running inside HubSpot?"

I could just ask ChatGPT that too; why would I need this product to answer the question?

Seems like everyone is just wrapping ChatGPT, even creating an identical UI down to the icon on the send button, and then making a Loom showcasing essentially white-labeled OpenAI functionality.

  • ramon156 2 days ago

    For future devs: what makes a product stand out (for devs) is how you built it. What's the data source? Did you fine-tune on your data? Do you have people who already use it? Where does it shine? (No, a prompt isn't good enough.)

    Also, I don't care about HubSpot reviews when I already know the product. Reviews matter for lesser-known software, but even then, why would you consider using some half-maintained solution for your enterprise?

amelius 2 days ago

Upon reading the title, my brain took a garden path in the direction of the 3M company.

TekMol 2 days ago

Me:

What do users think about Reviewradar?

Reviewradar:

User feedback on ReviewRadar varies, with many appreciating its functionality while some indicate areas that could use improvement. Here’s a detailed breakdown of sentiments based on user reviews:

Positive Aspects:

User Experience: Many users noted that the interface is intuitive and easy to navigate. One user expressed that ReviewRadar is "the best source of software reviews," highlighting the visibility and transparency it provides.

Comprehensive Review Collection: Users value the centralized nature of reviews, stating it allows for easy tracking and management of their online reputation. One respondent emphasized, "I can point possible customers to ReviewRadar, and they know that I haven't falsified any reviews".

Support and Updates: Some users appreciated the support team, remarking that they were responsive and helpful during the onboarding process. There’s a notable emphasis on the quick resolution time for any issues.

Critical Feedback:

Integration Limitations: A recurring theme in feedback pertains to a need for better integration with other applications. For instance, users have mentioned, "It would be great to see Google My Business integration" as a valuable addition.

Clarity in Reviews: While many find the review process thorough, some users have pointed out that there were instances of what they perceived as biased or unreliable reviews. This is particularly concerning in a landscape where fake reviews can distort genuine customer experiences.

Feature Requests: Certain users expressed the desire for more customization options in dashboards and reporting features, indicating that the current setup could be streamlined to improve usability.

General Sentiment:

The overall impression of ReviewRadar is largely positive, with a community of users who find it helpful in gathering insights and managing their online reviews. However, users desire enhancements in integrations and feature sets to maximize its potential.

Conclusion:

In summary, if you’re contemplating using ReviewRadar, the platform seems to be effective for managing software reviews and tracking customer feedback. Yet, be aware of the requests for improvements in integration and functionality from existing users.

If you have any specific features or areas in mind that you'd like to explore further, let me know!

kshmir a day ago

What is this trained on? It doesn't seem to look on the web for my product at all... doesn't quote the reviews...

Are you just using an LLM? I asked it to write code, and it did...

vasco 2 days ago

[flagged]

rsp1984 a day ago

Just a nitpick: please use lowercase 'm' for 'million'. When I read the title I thought it had something to do with the company 3M (not really known for making SaaS).

  • Diti a day ago

    The SI unit for million is M, so that should be 3 M (with a space). 3 m would mean 3 minutes.

    • remram a day ago

      M is the SI prefix for million (not a unit).

      m is the SI unit symbol for the meter. The minute is not an SI unit, as the SI unit of time is the second.

      • Diti a day ago

        Woah, I feel dumb (I knew all this, but my tired brain did a silly while I commented). Thanks for correcting me! I can’t edit my message.

    • osrec a day ago

      It could also mean 3 meters. Context is everything. The lowercase m would be more widely understood in this context.