JRFuentes7 | 2 years ago | on: Show HN: Oasis AI – Craft Emails, Essays and Notes, Just by Talking
JRFuentes7's comments
JRFuentes7 | 12 years ago | on: Built an art porn site in a drunken stupor
JRFuentes7 | 12 years ago | on: Ask HN: This is our YC W2014 application. Can you give me feedback?
Our product helps software teams “make something people want.”
When teams build or improve software, they crave data to inform their decisions, but calling, emailing, and surveying customers all fall short. With engineering resources limited and expensive, product teams can't afford to get this wrong.
With FeatureKicker, a team can quickly add a “call to action” for a new feature on a website. When a user clicks on the experimental button, our tech opens a modal window and gets user input on the new feature.
Similarly, teams can get feedback on existing features of their website, so they can improve their product.
JRFuentes7 | 12 years ago | on: Ask HN: This is our YC W2014 application. Can you give me feedback?
I hear ya... we worry about that sometimes. But then I think about what PG says re "it's our Altair." We gotta start somewhere.
How big is the market? There are 5M product managers in the US, according to LinkedIn. Assuming an average of 13 PMs per company, that's roughly 385,000 companies that should be using this.
I don't think you could or should replace our tech with a click event. I'll agree that 404 tests are a good start, but they capture no qualitative data and have no rules engine. I believe that makes for an inferior customer experience.
We've interviewed 100+ product teams at this point. We're getting good feedback -- especially re "getting data on existing features."
JRFuentes7 | 12 years ago | on: Ask HN: This is our YC W2014 application. Can you give me feedback?
As for the internal scoring system, we integrate with JIRA so you can send your data to your existing product-development management tools. We hear Kano analysis is really powerful when it comes to scoring your product roadmap.
JRFuentes7 | 12 years ago | on: Ask HN: My attempt at empowering companies to make better product decisions
We also have JavaScript APIs which can be used to show an overlay. Here is an example of it being used after a customer runs a search: the overlay asks, "Do you like the search results? How can we improve our search feature?"
I have one beta customer showing overlay after 90 seconds have passed on the "buy-now" page and another beta customer showing overlay when anything is clicked besides the "buy" button.
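The two trigger rules above could be sketched as a simple predicate. This is a hypothetical shape (the function and field names are mine, since FeatureKicker's actual API isn't shown here):

```javascript
// Hypothetical sketch of the two beta customers' trigger rules.
// Rule 1: show the overlay after 90 seconds on the "buy-now" page.
// Rule 2: show the overlay when anything other than the "buy" button is clicked.
function shouldShowOverlay(event) {
  if (event.type === "timer") {
    return event.page === "buy-now" && event.secondsOnPage >= 90;
  }
  if (event.type === "click") {
    return event.targetId !== "buy";
  }
  return false;
}

// A click on anything but the "buy" button triggers the overlay...
console.log(shouldShowOverlay({ type: "click", targetId: "nav-home" })); // true
// ...but a click on the "buy" button itself does not.
console.log(shouldShowOverlay({ type: "click", targetId: "buy" })); // false
```

In a real deployment the predicate would be evaluated against DOM click handlers and a page timer, with the overlay shown on `true`.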
I hope these examples show how beta customers are using FeatureKicker to solicit feedback.
JRFuentes7 | 12 years ago | on: Ask HN: Did your YC (or other incubator) startup fail? What are you doing now?
My main squeeze is http://featurekicker.com -- software that helps teams “make something people want.”
With FeatureKicker, you can quickly add a button representing a new feature on a website. When a user clicks on the experimental button, our tech opens a modal window and gets user input on the new feature.
You can also get feedback on existing features of your website, so you can improve your product.
Any and all feedback is welcome...
JRFuentes7 | 12 years ago | on: Ask HN: My attempt at empowering companies to make better product decisions
>> "I checked the website, looks promising so congratulations on that. I do have a comment, your entire website is still linking to your Heroku subdomain."
I hear you. We were being as cheap as possible and didn't want to pay for SSL certificates, etc. while building an early version of the product. But we're hearing this more and more, so perhaps it's time to move on :)
>> "If I'm not mistaken, this pretty much feels like A/B testing with the implementation being a survey."
It's interesting and refreshing to see someone else boil down our product. Thank you!
So I agree that it "feels" like A/B testing because you're adding a new element (and potentially removing it based on our rules engine). But it's unlike A/B testing because our tool is not designed to split your traffic across page variations.
Here's what I would add to your distillation: it's being able to ask the right user, the right question, at the right time. And this is where our product is like a "hyper targeted survey".
>> "So have you tested users' sentiments about trying to use a feature only to realize it isn't implemented and then being prompted to answer a few questions? I'd speak for myself, I'd rather not see a button to for example Authenticate with my Twitter account if it doesn't work, than to see one of which when I try to use it, I get asked a few questions. I'd feel that's a bummer."
Absolutely. This was our primary risk. After user testing, we're finding that this concern is more of a theoretical anxiety than an actual feeling in practice.
>> "To that point, why would I use FeatureKicker instead of a service like Qualaroo, which works fairly well."
I think the tools and use cases are different.
With Qualaroo, you get a pop-in question based on a timeout, and those questions are typically related to overall customer satisfaction or net promoter score. But we believe that using Qualaroo to ask a specific question about a specific feature on a page will not work as well as FeatureKicker. Why? Because the question may be irrelevant to whatever the user is doing at that time. This gets even trickier when you want to ask a question about an unbuilt feature. Then you have to worry about showing the experimental feature only some of the time and coordinating your Qualaroo question to pop in while you're selectively displaying the experimental feature.
In contrast, FeatureKicker allows you to ask the right user the right question at the right time. Let me unpack that. It's the right user because it's the person actually using a particular (built or unbuilt) feature. It's the right question because you're asking something relevant and specific to that feature. And it's the right time because you're capturing the user at the point of interaction, the peak of their curiosity. We believe this explains why we're seeing up to 64% response rates to our clients' questions.
JRFuentes7 | 12 years ago | on: How we got to $1,000 in recurring revenue
JRFuentes7 | 12 years ago | on: Logo, Bullshit & Co., Inc.
JRFuentes7 | 12 years ago | on: A horrifying startup accelerator story
First, we must define what "good" means. After participating in DreamIt Ventures in 2012, I can say that we extracted immense value by quickly invalidating a B2B software concept. On my own, it was taking me months to get one meeting with an enterprise client. With the help of DreamIt's mentors, I secured dozens of meetings in less than six weeks. Through those meetings, we learned that the software we were building was ill-fated. That was undoubtedly "good" for us, and we paid a mere 6% for that kind of access. In retrospect, I'd do it again in a heartbeat.
But your mileage may vary. The value we obtained was highly correlated to the fact that we were pursuing a B2B venture, a sector where DreamIt's mentors could best leverage their networks. In contrast, I saw some consumer-facing companies extract less value from the accelerator program.
In sum, you have to carefully (and honestly) weigh the value that "paid access" can yield. Feelings of frustration in connection to "paid access" likely stem from a miscalculation of the cost/benefits that the access could provide.
JRFuentes7 | 12 years ago | on: We hid the 'demo' button. Our signups quadrupled.
The real test is "which cohort achieves higher [metric you care about]?" For example, let's say you care about revenue. Does the "demo button present" cohort generate more or less revenue than the "demo button absent" cohort?
What do your analytics say about that?
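To make the comparison concrete, here's a toy sketch of that calculation. All the numbers are invented for illustration; nothing here comes from the article's actual data:

```javascript
// Toy cohort comparison with invented numbers: compare revenue per signup
// between the "demo button present" and "demo button absent" cohorts.
const cohorts = {
  demoPresent: { signups: 200, revenue: 9800 },  // hypothetical figures
  demoAbsent:  { signups: 800, revenue: 30400 }, // signups quadrupled, but...
};

// Revenue per signup is the metric that actually settles the question.
const perSignup = (c) => c.revenue / c.signups;

for (const [name, c] of Object.entries(cohorts)) {
  console.log(`${name}: $${perSignup(c).toFixed(2)} per signup`);
}
// demoPresent: $49.00 per signup
// demoAbsent: $38.00 per signup
```

In this made-up scenario, hiding the demo button quadruples signups but monetizes worse per signup, which is exactly the trade-off that raw signup counts hide.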
JRFuentes7 | 12 years ago | on: Reverse Engineering Marketing For Startups