top | item 34409429


wespiser_2018 | 3 years ago

eNPS is a bad metric for a lot of reasons, but the concern here would be that at low sample sizes you are unable to quantify uncertainty. The results for small samples (around 30, though it depends on what the mean is) will also be noisy. Don't use NPS unless you have to, i.e., you want to release a number for a marketing campaign because your score is a selling point.
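To make the small-sample point concrete, here's a toy sketch (hypothetical data, pure stdlib) that bootstraps an NPS score from 30 simulated responses; the point estimate comes with an interval tens of points wide:

```python
# Toy illustration: NPS point estimate vs. bootstrap uncertainty at n = 30.
# Scores 9-10 count as promoters, 0-6 as detractors (standard NPS buckets).
import random

random.seed(0)

def nps(scores):
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

# Simulated survey of 30 respondents, scores 0-10
scores = [random.choice(range(11)) for _ in range(30)]

# Bootstrap: resample with replacement to see how unstable the score is
boot = sorted(
    nps([random.choice(scores) for _ in scores]) for _ in range(2000)
)
lo, hi = boot[50], boot[-51]  # approximate 95% interval
print(f"NPS = {nps(scores):.0f}, 95% CI roughly [{lo:.0f}, {hi:.0f}]")
```

If the interval is 50+ points wide, a week-over-week move of 10 points tells you essentially nothing.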

"Our future vision is building towards an ideal mix of automated analysis working with machine learning, natural language processing and sentiment analysis, alongside best-in-class consultancy via human-in-the-loop systems when issues or opportunities are identified." - from the blog.

Interesting. I could see this tech being a good on-ramp to a consulting practice, but the applications of sentiment analysis are much trickier, since almost anyone writing in a corporate environment is going to have roughly the same detectable sentiment, independent of content (look at sentiment samples if you need to convince yourself). I'm sure there's signal there, but reliably deriving insights for heterogeneous companies out of anonymous survey data is definitely a tall order. The big consultancy groups in the US are certainly using deep learning to determine topic trends in their client communications. That's a very straightforward thing to measure compared to sentiment, which tries to gauge how people feel based on word choice, so I'd encourage you to focus on topics first.
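A quick toy demo of the "same detectable sentiment, different content" problem — this is a deliberately naive lexicon scorer with made-up word lists, not any real model, and the two example sentences are mine: one is genuinely good news, the other buries bad news (a hiring pause) in polite corporate phrasing, yet both score positive:

```python
# Minimal lexicon-based sentiment sketch (toy word lists, not a real model).
# Polished corporate prose tends to score similarly regardless of content.
POSITIVE = {"great", "good", "opportunity", "appreciate", "excited", "thanks"}
NEGATIVE = {"bad", "problem", "blocker", "concern", "frustrated", "issue"}

def sentiment(text):
    # Strip basic punctuation, then score: (+1 positive, -1 negative) per word
    words = text.lower().replace(",", "").replace(".", "").replace(";", "").split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)

good_news = "Thanks everyone, great sprint, excited about the launch opportunity."
bad_news = "Thanks everyone, great candor today; I appreciate the patience as we pause hiring."

print(sentiment(good_news), sentiment(bad_news))  # both come out positive
```

Real sentiment models are better than this, but the underlying failure mode — polite register masking negative content — carries over.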

The only other issue I see with this, from a survey product perspective, is how you get people to actually complete the survey if it's anonymous. If not enough people complete it, the results will skew towards the people most motivated to respond, but without knowing who completed the survey and who didn't, you can't follow up individually with reminders to ensure the data is representative. Maybe non-responses aren't actually a big deal, but those response rates (and your incentive schemes) will likely impact any downstream analysis on that data set.
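The skew is easy to simulate. In this hypothetical sketch (all numbers invented), dissatisfied staff are more likely to respond, so the observed average drifts below the true one even with a decent overall response rate:

```python
# Hypothetical non-response bias sketch: if dissatisfied staff are more
# motivated to answer, the observed score drifts away from the true one.
import random

random.seed(1)

# "True" satisfaction scores for 1000 staff, roughly centered on 7/10
population = [random.gauss(7.0, 1.5) for _ in range(1000)]

def observed_mean(scores, base_rate=0.3, unhappy_boost=0.4):
    # People scoring below 6 respond at a higher rate (0.7 vs 0.3)
    responses = [
        s for s in scores
        if random.random() < base_rate + (unhappy_boost if s < 6 else 0)
    ]
    return sum(responses) / len(responses)

true_mean = sum(population) / len(population)
obs = observed_mean(population)
print(f"true mean {true_mean:.2f} vs observed {obs:.2f}")
```

The gap here is baked in by the response model, but the direction is the point: without knowing who responded, you can't tell bias from a genuine sentiment move.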

Anyway, fun to think about this stuff. Good luck folks!


techdiff | 3 years ago

Thanks so much for this one - great insight! We use eNPS as shorthand; it's the easiest way for people to grasp what we're trying to achieve, but we're actually measuring through a binary initial question, followed by an open comment. The binary, along with the timeframe, seems to force an "on balance" measure of the week, so sentiment fluctuates with business activity, initiatives, wins and losses. Within this, we're building features to gain context by asking the exec team for their comments on the week ahead of seeing the results, which is useful when taking a long view on sentiment moves. The best way to use the results is absolutely about detecting emerging issues, resolving ongoing challenges and improving communication. But the dream for us would be to tie business data into the commentary from staff, clients and stakeholders to form a rounded view of problems, and potential solutions.

On your last paragraph - we're looking at incentive programs but we don't desperately need them. Over the long test timeframe, we've averaged 60%+ response rates each week (40% within the first 60 minutes), and one in two respondents leaves a comment. In all honesty, our current data is skewed by some companies in the system testing with us without pressing their teams to actively participate, which has dropped the weekly average by ~10%. We're going to add a disengaged score (people who haven't responded for 3 consecutive weeks) and a super-engaged score (the opposite of that). Our customers have said they're more worried by the disengagement level than by negative sentiment, because it means the team don't care at all.
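For what it's worth, that disengaged/super-engaged split is simple to compute from per-person response history. A minimal sketch, assuming "disengaged" means no response for 3 consecutive weeks and "super-engaged" means responding every one of the last 3 (the function and label names are my own, not the product's):

```python
# Sketch of the disengagement flags described above (names are hypothetical).
def classify(history, window=3):
    """history: list of booleans, True = responded that week, newest last."""
    recent = history[-window:]
    if len(recent) < window:
        return "new"            # not enough weeks to judge yet
    if not any(recent):
        return "disengaged"     # silent for `window` consecutive weeks
    if all(recent):
        return "super-engaged"  # responded every recent week
    return "partial"

print(classify([True, False, False, False]))  # disengaged
print(classify([False, True, True, True]))    # super-engaged
```

One design wrinkle: this needs per-person response tracking, which is in tension with the anonymity of the comments themselves — you'd track *whether* someone responded without linking them to *what* was said.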

Thanks again, this is some great stuff for us to think on!