
jcalloway_dev | 6 days ago

Good instinct to ask before building — you're already doing the thing you're selling.

On your questions:

1. Figma clickthroughs + a Loom walkthrough sent to 10-15 target users. Messy but cheap.

2. "Fake" video data is fine if you're measuring the right thing. Click-through on a landing page beats survey intent every time. People lie to be nice; clicks don't.

3. Honest answer: I'd pay if you could show me a case where the video said "no" and saved someone real money. One solid example beats a hundred testimonials.
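A minimal sketch of what measuring point 2 looks like in practice: log views and "I'd pay for this" clicks per visitor and compute the click-through rate. (Names and the in-memory log are illustrative, not any particular analytics tool.)

```python
from collections import Counter

def record(events, event):
    """Append a landing-page event ('view' or 'click') to the log."""
    events.append(event)

def click_through_rate(events):
    """Clicks / views -- the behavioral signal, as opposed to
    survey intent, which people inflate to be polite."""
    counts = Counter(events)
    views = counts.get("view", 0)
    return counts.get("click", 0) / views if views else 0.0

# Hypothetical session log: 10 visitors land, 3 click the buy button.
log = []
for _ in range(10):
    record(log, "view")
for _ in range(3):
    record(log, "click")

print(click_through_rate(log))  # 0.3
```

In a real test you'd log these events server-side or via an analytics snippet; the point is that the denominator is visitors, not survey respondents.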

Bigger concern — 2 devs saying they'd use it isn't validation, it's encouragement. Before you build anything, I'd run your own video prototype of this service and see if strangers convert. Meta, but you'd learn fast.

What's the actual customer you're picturing — solo devs, or teams with some budget?

zhongyongxu | 6 days ago

Thanks for the brutal honesty — exactly what I needed.

Made a video demonstrating the method — comparing two paths: https://youtu.be/C2bAB-s-lb4

Still need that real "video said no, saved $20k" case. Running 3 free pilots now to get it.

48h in: 12 comments, 3 DMs, 0 "shut up and take my money" yet. Learning that method demos get "interesting" but case studies get buyers.

Know anyone with a feature they're debating? First validation is free in exchange for case study rights.

Re: solo vs seed — still torn. Who actually acts on validation data vs just wants reassurance?

XCSme | 5 days ago

Strong agree on clicks over surveys. Once I move from a video prototype to a live MVP, the real signal comes from watching what people actually do.

I built UXWizz mainly for this. Self-hosted heatmaps and recordings make it pretty obvious where people get confused or drop off, and you don’t have to rely on polite feedback [0].

[0] https://www.uxwizz.com

zhongyongxu | 5 days ago

Strong agree — clicks > surveys, and behavior > opinions.

The gap I'm trying to fill: before you have the live MVP (and before you invest in heatmaps/recording infrastructure), how do you know which workflow is worth building?

Video prototypes are the "pre-MVP" behavior test — show the experience, see if they click "I'd pay for this" vs just "interesting."

Curious: When building UXWizz, did you validate the "self-hosted vs cloud" decision with video prototypes, or did you ship and learn from early user behavior?

Feels like your tool captures the truth post-launch, mine tries to predict it pre-launch. Complementary approaches.