Stealing Ur Feelings

372 points | simonpure | 6 years ago | github.com

104 comments

[+] ibudiallo|6 years ago|reply
There is a case to be made about how good the AI behind emotion detection really is. When you take the test, it will be accurate for some and blissfully wrong for others, more often wrong than not. I took the test (or rather, took it unknowingly) and it was correct for the most part, but it got some things wrong. I love dogs.

The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

When the facial recognition software combines your facial expression and your name while you are walking under a bridge late at night, in an unfamiliar neighborhood, and you are black, your terrorist score comes out at 52%. A police car is dispatched.

In 2017, my contract was not transferred to the new system. The automated system saw that an ex-employee was scanning his key card multiple times. Security was dispatched to catch the rogue employee. Simple questioning should have cleared things up, but the computer had already flagged me as troublesome. Long story short, I was fired.

When the machine calculates your emotions, the results are treated as unquestionable. Or rather, we don't know how it arrived at the answer, so we trust that it is right. It is a computer, after all.

What scares me is not how fast machine learning is being deployed into every aspect of our lives. What scares me is our reaction to it.

[+] henrikschroder|6 years ago|reply
The problem with all current "AI"-driven systems, be it facial recognition, voice recognition, translation, fraud detection, navigation, whatever, is that they are not 100% right, and when they're wrong, they're hilariously, devastatingly super-wrong in a way that humans are not.

But since the success modes are good and human-like, we assume that the failures are going to be human-like as well, when the failure modes of these systems are usually bizarre and alien. Take self-driving accidents, for example. Pretty much all of them happen in situations that no human would fail in, and that's obvious to most people. But then we forget about all the other mistakes similar "AI" systems make, and don't realize that those are also failures no human would make.

[+] rland|6 years ago|reply
This is an excellent point. I can foresee a world where ML becomes a sort of "bias laundering."

"Nobody knows why the machine learning module denied black people a mortgage 43% more frequently than whites, but it's 'AI' shrug"

One of the biggest priorities as we shape this future needs to be not only getting the algorithms to make correct decisions, but also giving us the ability to interrogate the decision-making process, so we can be proactive about the kind of future we want this technology to give us.

Because it is coming, without a doubt.

[+] golergka|6 years ago|reply
> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

And this is why mathematical statistics and probability theory should be taught in middle school (maybe instead of some of the trigonometry and stereometry). Not only researchers, but any decision maker, and the general public too, need to understand what confidence intervals are and how the normal distribution works, on an intuitive level.
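
For what it's worth, here's a minimal sketch (mine, not from the article) of the kind of intuition being asked for: the 95% confidence interval around a classifier's measured accuracy, using the standard normal approximation. The sample counts are made up.

    // 95% confidence interval for an observed accuracy:
    // p ± 1.96 * sqrt(p * (1 - p) / n)
    function confidenceInterval95(correct, total) {
        const p = correct / total;                          // observed accuracy
        const margin = 1.96 * Math.sqrt((p * (1 - p)) / total);
        return { low: p - margin, high: p + margin };
    }

    // "90% accurate" on 50 test faces is a much weaker claim than on 5000:
    console.log(confidenceInterval95(45, 50));      // { low: ~0.82, high: ~0.98 }
    console.log(confidenceInterval95(4500, 5000));  // { low: ~0.89, high: ~0.91 }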

[+] PastaMonster|6 years ago|reply
So your boss is an uneducated moron who doesn't understand current AI. Did you try to explain it to the boss? You should have shamed the company online. It's always a good thing to know which companies are moronic.
[+] TheSpiceIsLife|6 years ago|reply
> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

It's not clear to me how this is different from the current standard.

[+] seanwilson|6 years ago|reply
> it was correct for the most part. And it got some things wrong. I love dogs.

Seems an order of magnitude easier and more accurate to just track how long you linger on each post while scrolling down a newsfeed, and which ones you engage with.

MacBooks, Chrome, etc. already warn you when your webcam is on anyway, so if social sites started adding webcam tracking while you're only viewing the newsfeed, I can't see it lasting for long.
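
As a rough illustration of how little machinery the dwell-time approach needs (a hypothetical sketch; the data attributes and threshold are invented, not from any real site's code):

    // Time how long each post stays at least half-visible in the viewport.
    const dwell = new Map();  // post id -> { enteredAt, totalMs }

    const observer = new IntersectionObserver((entries) => {
        const now = performance.now();
        for (const entry of entries) {
            const id = entry.target.dataset.postId;
            const record = dwell.get(id) || { enteredAt: null, totalMs: 0 };
            if (entry.isIntersecting) {
                record.enteredAt = now;                    // scrolled into view
            } else if (record.enteredAt !== null) {
                record.totalMs += now - record.enteredAt;  // scrolled out: accumulate
                record.enteredAt = null;
            }
            dwell.set(id, record);
        }
    }, { threshold: 0.5 });

    document.querySelectorAll('[data-post-id]').forEach((el) => observer.observe(el));

No camera permission, no warning light, and it works on every device.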

[+] nlh|6 years ago|reply
First, this is excellent and very well done (the analysis wasn't great for me, but whatever - point made).

The part that bothers me so much about this all is the sense of hopelessness it leaves. Why? Because 99.9% of people literally, actually, truly, genuinely Just Don't Care.

It's like they're all rats in a Skinner Box -- so long as the feed keeps scrolling and the dopamine keeps getting pumped out, why does it matter if they're being analyzed, bought, and sold? More feed, more likes, more dopamine, more feed, more likes...

Sigh.

[+] malvosenior|6 years ago|reply
> It's like they're all rats in a Skinner Box

We're all rats in a Skinner Box. Just like me, you gave this site access to your camera and let it scan your face for minutes so you could see the result. I don't think it's fair to judge other people for not caring about privacy immediately after uploading your likeness to a random site.

[+] kodablah|6 years ago|reply
> The part that bothers me so much about this all is the sense of hopelessness it leaves. Why? Because 99.9% of people literally, actually, truly, genuinely Just Don't Care.

This doesn't bother me as much. It can be argued that the first step towards the ineffectiveness of this analysis is our growing apathy towards its results as a society. I'm not saying the analysis itself will be wrong; I am saying its results will become less and less effective the more we ignore them, akin to the infomercials of the past. I know right now we all see what seems like an obvious link from individual accuracy to susceptibility. Marketers sure count on that. But the latter will shrink over time as accuracy peaks, since the effectiveness of targeting can't grow forever (and there will be value in marketing your company as one that doesn't track, albeit limited as a niche).

I do know I can't make people care, nor can I ask them not to give up their info for something they want. I think it is a bit foolish to presume people should care, as though something actively harmful is happening to them, when such harm is subjective compared to the benefits.

[+] closeparen|6 years ago|reply
The combination of cynicism and self-righteousness pervading the tone of messaging like this is a factor.
[+] flaque|6 years ago|reply
Believing "People just don't care" is both factually incorrect and counter productive. This is on the top of a site that's read by millions of people.

Your hopelessness and cynicism towards "99.9%" of people actively contributes to the problem.

[+] xtracto|6 years ago|reply
Apparently I like Kanye West... and I don't even know who that is.

I hated the delivery method. It looked like these days' cartoons made for millennials or Gen Z with ultra-low attention spans (smileys bouncing around, lots of flashing graphics, etc.).

[+] topmonk|6 years ago|reply
We're all sheep, you're just one of the bleating ones...

Unless you have a plan to fix it, that is.

[+] kleer001|6 years ago|reply
> genuinely Just Don't Care

Short-term focus makes sense to me in an evolutionary context; the future is quite uncertain.

[+] mattnumbers|6 years ago|reply
Facebook/IG are the cigarettes of our age, and no less unhealthy; we should regulate them as such.
[+] 1023bytes|6 years ago|reply
I was interested in how they are doing the "IQ estimation"; here it is:

   -(dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos)
[+] qqii|6 years ago|reply
After seeing no code in their repo I was interested in the same. The important parts are in https://stealingurfeelin.gs/js/events.min.js

You've left out some important parts of the IQ calculation, with the full equation being:

    reactions = (dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos) / 4;
    iq = Math.floor(15 * -((reactions - 0.0005) / 0.05) + 100);
    if (iq < 100) {
        thatPartBitSFX.play()
    } else {
        thatPartWaySFX.play()
    }
Also, amusingly, the republican percentage is calculated as:

    reactions = (dogPos + kanyePos + nonWhiteNegative) / 3;
    republicanPct = 50 + 15 * ((reactions - 0.05) / 0.1),
And income is calculated as:

    reactions = (iq / 100 + republicanPct / 50 + dogPos) / 3
    estimatedIncome = Math.floor(200000 * (reactions - 0.5) + 31099),
    if (estimatedIncome < 31099) {
        isPoorSFX.play()
    } else {
        isNotPoorSFX.play()
    }
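
For a sense of what those formulas actually do, here is what they output for a completely expressionless viewer, i.e. every reaction score at zero (a quick sketch, assuming the snippets above are quoted faithfully):

    const dogPos = 0, menPos = 0, womenPos = 0,
          whiteNegative = 0, nonWhiteNegative = 0, kanyePos = 0;

    let reactions = (dogPos + Math.abs(menPos - womenPos) +
                     Math.abs(whiteNegative - nonWhiteNegative) + kanyePos) / 4;
    const iq = Math.floor(15 * -((reactions - 0.0005) / 0.05) + 100);        // 100

    reactions = (dogPos + kanyePos + nonWhiteNegative) / 3;
    const republicanPct = 50 + 15 * ((reactions - 0.05) / 0.1);              // 42.5

    reactions = (iq / 100 + republicanPct / 50 + dogPos) / 3;
    const estimatedIncome = Math.floor(200000 * (reactions - 0.5) + 31099);  // 54432

So a perfect poker face gets an IQ of exactly 100, a 42.5% "republican" score, and an estimated income of $54,432.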
[+] zug_zug|6 years ago|reply
Outstanding multimedia article. Very cool attempt at multimedia persuasion. However, it's hard to take at face value (pun intended) when its analysis was so incorrect.
[+] topher-the-geek|6 years ago|reply
Agreed. They concluded I don't like dogs. I love dogs. What I don't like is not being able to find my mouse pointer against that noisy background. Hahaha. Nice try, AI.
[+] rrsmtz|6 years ago|reply
Creepy proof-of-concept. Are there devices which have been proven to capture image data when not in a camera mode, or is there an assumption that they're somehow doing it covertly?
[+] salsadip|6 years ago|reply
Not on a mobile device, but I read that some company is using OpenCV to judge customers' reactions to ads shown to them. That is, ads shown on TVs in stores, with the reactions recorded by a hidden camera and evaluated right away. Wish I could give more details, but I can't find the source anymore.
[+] samat|6 years ago|reply
Do any popular mobile apps capture camera data while you are, say, scrolling the feed?
[+] streulpita|6 years ago|reply
Yes, this is what TikTok is most famous for.
[+] inerte|6 years ago|reply
I don't know how much this website can improve the general public's understanding of how much companies and governments can deduce from a small set of data about an individual, but the concept was presented in an interesting way.

"An AI that knows you better than you know yourself" - I know SV loves the apocalyptic Noah Harari, but that's exactly what he's been talking about. One of his possible scenarios is that this rapid processing of data by a centralized entity can erode modern individual freedom (including free will and free markets) since it will be more efficient than individuals maximizing their interests. If the centralized processing of data can feed more people, provide security, and in general runs things more smoothly, we collective might accept that route, and gladly give up the power we hold on a democracy (whatever that amount you believe actually exists).

[+] Konnstann|6 years ago|reply
If the "smooth" option is feeding into people's biases, that doesn't seem like a good thing. For stuff like dogs or pizzas curating content based on AI is harmless, but once you get into racial and gender biases, with the given example of suggesting dating profiles, the effects on the outside world can be disastrous. Implicit bias is neither a positive or a negative thing assuming you are aware of and combat it when it has a negative impact, but businesses don't care about that and just want more clicks/buys/swipes/etc.
[+] irrational|6 years ago|reply
Everyone says my face shows no emotion and they can never tell what I'm feeling (and when they try to guess, they are wrong 9 times out of 10). I'd love to know what this would say about me. If this can tell what I'm feeling, then that would be awesome. Headline: Computers are better at detecting human emotion than humans.
[+] mrguyorama|6 years ago|reply
In some parts, the player shows raw scores for emotions, and one of them is "neutral". I was 99.9999% neutral or more for the entire event, never breaking my apparent poker face until it called me possibly brain damaged at the end, which I found funny for some reason. The claims about IQ and left/right leaning are possibly arbitrary and just meant to stir discussion and virality.

For it to be at all confident in its claims, I imagine you need to be at least somewhat expressive. However, I don't expect most users of the average NN classifier to pay more than a little attention to how often its classifications come back at less than 80% confidence.
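
As a sketch of that failure mode (hypothetical; the scores object is just a stand-in for whatever an emotion model outputs), taking the top label without checking its confidence looks like this, and so does the fix:

    // Pick the highest-scoring emotion, but refuse to guess below a threshold.
    function topEmotion(scores, minConfidence = 0.8) {
        const [label, confidence] = Object.entries(scores)
            .sort((a, b) => b[1] - a[1])[0];
        return confidence < minConfidence
            ? { label: 'unknown', confidence }   // a poker face gives you nothing
            : { label, confidence };
    }

    topEmotion({ neutral: 0.97, happy: 0.02, angry: 0.01 });  // neutral, 0.97
    topEmotion({ happy: 0.45, neutral: 0.40, sad: 0.15 });    // unknown, 0.45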

[+] gregoryl|6 years ago|reply
I took the bullet for you; I have a similar lack of expression.

It just outputs very low confidence results, so the end result is junk. That said, I think that drives the point home even harder, as it's built a profile based on incorrect data.

[+] rolltiide|6 years ago|reply
haha yes same.

women try to guess my intentions and fail spectacularly when I'm largely indifferent

close friends try to understand if I'm annoyed when I'm largely indifferent

but my smile gets me into all the rooms I want to be in, so that's great!

the sociopath subreddit told me to gtfo because I'm clearly not one, so that had an oddly relieving sting to it, but I should have predicted that they would lack empathy

[+] twodave|6 years ago|reply
What’s scary to me is how wildly wrong this was at describing me. If anyone is ever going to be making decisions for me based on this tech, it has a ways to go.
[+] ipsum2|6 years ago|reply
Really cool demo! Though the part about companies using facial recognition to determine what to show you in your feed/results (presumably Facebook, YouTube, Google) is pure FUD, which is disappointing; it should probably be made clearer that it's a hypothetical.
[+] csande17|6 years ago|reply
Back when Apple first announced Face ID and Animojis, there were vague rumors/fears that companies would use the face-recognition sensors for targeting like that. Did that ever actually happen?

(Come to think of it, is it even possible to know for sure? Do apps need a permission prompt to access the facial-recognition data?)

[+] brisky|6 years ago|reply
After watching this, I realized that we need to update the mobile app permissions model to distinguish between front and back camera access. I can see many cases where I would grant an app back-facing camera permission but would not give it front-facing camera access.
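
For comparison, on the web a page can already ask for a specific camera via the facingMode constraint of getUserMedia, but the permission the user grants is just "camera", not "rear camera only". A sketch of the existing API, not a proposal:

    // Prefers the rear (environment-facing) camera where one exists; the same
    // grant would also let the page later request { facingMode: 'user' }.
    async function openRearCamera() {
        return navigator.mediaDevices.getUserMedia({
            video: { facingMode: { ideal: 'environment' } },
            audio: false,
        });
    }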
[+] cbanek|6 years ago|reply
I thought the content of this was really interesting, although the results were actually way off. I love pizza, for one thing. Although, to be honest, their picture of pizza did not look very tasty.

I also wonder how these results would be affected by being alone vs being in groups. Kind of like laughing (people tend to laugh more when with other people or in social situations rather than alone), emotional reactions can be very different depending on environment / social situation, even when you feel exactly the same about something.

[+] blue_devil|6 years ago|reply
>>So our goal was to make an interactive doc that had the silly, sarcastic, collaged aesthetic of a vlogger video — and our central tech trick — using AI to tell you secrets about yourself — was designed to function like one of those viral BuzzFeed personality quizzes.

It definitely hits some kind of addictive pattern. I felt the pull to "try it out". But I didn't. Just imagined what I'd learn from it if I shared my data (likely nothing), and that dampened my enthusiasm.

[+] quangio|6 years ago|reply
This AI is highly inaccurate: AI predicts I don't like dogs & like men (/r/suddenlygay after I shaved). I hope that's the author's point.

The worst thing about AI is that people believe AI is a god that makes unbiased predictions. E.g. big companies using Pymetrics and HireVue for "unbiased" hiring is a joke.

Maybe a few years from now, an AI system will become a classic software-bug case study just like Therac-25 (but one developed mostly by top programmers).

[+] malvosenior|6 years ago|reply
> reveals how your favorite apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize American democracy.

I must have missed the part of this where they show how AI promotes inequalities and destabilizes American democracy. They showed me dogs, pizza and a bunch of pixelated people from the 90s.

What this demonstrated to me was that AI is probably worse than noise. It provided zero insight into who I am or how I feel other than getting it correct when I smiled.

I also think the people that made this are missing a huge piece of the puzzle... For most people the issue isn't that they are being tracked, it's that no one is paying attention to them. People want to be analyzed. They want their existence to be recognized and they want it having an impact on the world. The worst thing that could happen is that no one cares. Even if it's an algorithm scanning their facial features to better sell them pizza, I think most people would desire that.

Case in point: we all just took this test to see what it thought about us.

[+] about_help|6 years ago|reply
That's some weird narcissism you're selling to convince people they shouldn't care about privacy. It's immediately refuted by everyone's real-world experience with visible privacy violations.
[+] scarejunba|6 years ago|reply
Haha, this is awesome! It does showcase a beautiful world, though! Imagine if you walk into a department store and they already know if you're the kind of person who likes to be greeted vs left alone to browse. Or using median commuter feeling to make commutes better. We could find unusual things. For instance, maybe people would like trains coloured brightly on average. Who knows! The positive possibilities are endless. We could engineer general contentment for all without drug usage.

EDIT: Since I'm rate limited, here's my response to below comment

People are always engineering your contentment. Stand-up comics, your wife, the people at your favourite coffee shop, the bookstore you visit. It's a good thing. It's the lubricant of society.

[+] kempbellt|6 years ago|reply
>We could engineer general contentment for all without drug usage.

The idea that other people are engineering my contentment is a truly terrifying notion.

I imagine this technology will predominantly be used to increase the effectiveness of ads, by putting pixels in front of your face that spike dopamine output and persuade you to pay for something to get another dopamine spike. Essentially the same effect, but delivered in a more subversive manner than a consumed drug.

[+] hiei|6 years ago|reply
What do you mean by rate limited?
[+] jonny383|6 years ago|reply
I don't see this as being any more accurate than just taking random numbers and displaying the mapped result back to the user.
[+] akhilcacharya|6 years ago|reply
I'm impressed by how well it got my income. Really cool art project for something running entirely in the browser.