Show HN: AI agents that validate your product idea by talking to real users

app.holyshift.ai

4 points by Matzalar 3 hours ago

I built a tool to solve a problem I kept running into: I was making product decisions based on guesswork instead of real user feedback, and since I was usually wrong, I kept building stuff nobody wanted.

So, I built HolyShift: AI agents that validate product ideas by talking to real people on Reddit, HN, X, and LinkedIn … then generate a detailed GTM and “Should we build this?” report.

No synthetic data (ChatGPT). No predictions. Only real conversations from real people.

What it does:
• Posts platform-native questions (where allowed)
• Collects real reactions, objections, and pricing signals
• Clusters feedback into themes (pain, demand, adoption, pricing …)
• Runs a monitoring agent for sentiment analysis
• Produces a short validation report (PRD + GTM)

All actions are rate limited and reviewed by a human for compliance.

How it works (technicals):
• Multi-agent pipeline (intake → landscape → engagement → monitoring → synthesis → report)
• Platform-specific prompting (HN vs. Reddit vs. LinkedIn …)
• Real-time sentiment analysis and clustering via embeddings
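To make the clustering step concrete, here is a minimal sketch of "cluster feedback into themes." The post doesn't say which embedding model or clustering method HolyShift uses, so TF-IDF vectors and k-means stand in for both; the comments are made up for illustration.

```python
# Sketch only: TF-IDF + k-means stand in for a real embedding model
# and whatever clustering HolyShift actually runs.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "This would save me hours of manual outreach every week",
    "Automating the outreach would save so much time",
    "What does this cost per month?",
    "Pricing seems steep for a solo founder",
    "Won't moderators flag these posts as spam?",
    "I'd worry about getting banned from subreddits",
]

# Embed each comment, then group similar comments into themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

themes: dict[int, list[str]] = {}
for comment, label in zip(comments, labels):
    themes.setdefault(int(label), []).append(comment)
```

In production you would swap TF-IDF for sentence embeddings and pick the cluster count from the data, but the shape of the step is the same: embed, cluster, then label each cluster as a theme (pain, pricing, adoption risk, and so on).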

Link https://www.holyshift.ai (Early beta)

What I’m looking for:
• What should stay human vs. automated? Should we automate this 100%?
• How do you do your product validation? Do you talk to your potential users (and which ones?) before you build?

Happy to answer anything.

lovrok23 3 hours ago

I'm curious about the guardrails here. In my experience trying to use LLMs for user research, they tend to be "yes men," often hallucinating features or agreeing to user requests that aren't actually on the roadmap just to keep the conversation flowing.

How do you constrain the agent to stick strictly to the facts of the product hypothesis without making things up to please the potential customer?

  • Matzalar 3 hours ago

    We ran into the same issue early on. Our fix was to lock each agent to a small JSON snapshot of the idea (no other knowledge), plus strict response templates. They can only ask questions, never describe features or promise anything. If a user asks for something outside scope, the agent replies with “that's not in the current hypothesis; why is that important to you?” rather than making something up. We also have a human review step before anything goes live.
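    A hypothetical sketch of the guard described in that reply. The snapshot fields, fallback wording, and the "questions only" check are all illustrative assumptions on my part, not HolyShift's actual code.

```python
import json
import re

# Hypothetical idea snapshot: the agent is given this and nothing else.
IDEA_SNAPSHOT = json.dumps({
    "name": "ExampleApp",
    "hypothesis": "Founders will pay for automated idea validation",
})

FALLBACK = "That's not in the current hypothesis. Why is that important to you?"

def guard_reply(draft: str) -> str:
    """Pass the draft through only if every sentence is a question.

    Statements (feature descriptions, promises) are replaced with the
    out-of-scope fallback, so the agent can never invent or commit to
    anything outside the snapshot.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", draft) if s.strip()]
    if sentences and all(s.endswith("?") for s in sentences):
        return draft
    return FALLBACK
```

    Under these assumptions, guard_reply("What would you pay for this?") passes unchanged, while guard_reply("We will ship that next week.") gets swapped for the fallback before a human ever reviews it.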

tene80i 2 hours ago

Interesting idea. Nice design. But one usability issue: on mobile I tapped your yellow chat CTA thinking it would submit the app's text input. You might want to move that out of the way.

thebiggodzzila an hour ago

Chat always boosts my confidence, but reality isn’t always as kind. How can I really tell if my idea is any good beyond what Chat says, and how many people do you actually interact with?

likethejade87 3 hours ago

Are the agents pitching ideas, or doing actual research? Sounds super interesting though

  • Matzalar 2 hours ago

    They’re not pitching or selling anything; they only do research. The agents ask structured questions in relevant communities and collect real reactions, pain points, objections … No selling, no marketing language.