I don't work for Amazon or Google, but I agree with them in this case.
I read through the full text of the bill (http://www.ilga.gov/legislation/fulltext.asp?DocName=&Sessio...) and it sounds like companies can be sued if one user agrees to a written policy but the device is then used by another user (e.g. a spouse, sibling, or friend), which makes smart speakers basically impossible. (User identification isn't good enough, and even if it were, mistakes can happen.)
> No private entity may turn on or enable, cause to be turned on or enabled, or otherwise use a digital device's microphone to listen for or collect information, including spoken words or other audible or inaudible sounds, unless a user first agrees to a written policy informing the user[..]
"Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere."
"Amazon, in its marketing and privacy policy materials, doesn’t explicitly say humans are listening to recordings of some conversations picked up by Alexa. “We use your requests to Alexa to train our speech recognition and natural language understanding systems,” the company says in a list of frequently asked questions."
Whatever Amazon's company policy is, Amazon can't legally demand that workers not report crimes to the authorities. No amount of NDA or legalese makes that possible. If workers suspect that a violent crime is taking place, especially if children are being harmed, they should report it to the authorities, no matter what the corporate policy says.
It seems that workers are willing to sign away their basic humanity and dignity to corporate authorities, just as subjects did in the Milgram experiments.
If Amazon does not want to get involved, they should not get involved by listening.
I hope they provide counseling and adequate protection for staff, e.g. frequent check-ins and 1:1s to catch early signs that it's becoming a problem.
My wife's sister is an EMT and she's really a special kind of person. Whether it's natural for her or a consequence of her profession, things just tend to slide off her like water off a goose. One has to wonder, though, how much sticks at a deeper level, perhaps surfacing during personal distress years later.
I mean, is this really news? We know that Amazon records the clips of what you say to Alexa for the purposes of improving the recognition. Everything after you say "Alexa..."
How would they tell whether their models are right or wrong without listening and having someone compare?
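That comparison step has a standard metric behind it: a human writes a reference transcript for the clip, and the model's output is scored against it by word error rate (WER). A minimal sketch with made-up transcripts (the transcript pair below is invented for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Human annotator's reference vs. what the model produced (hypothetical):
print(word_error_rate("play some jazz music", "play sam jazz music"))  # 0.25
```

A model update is then judged by whether WER drops across a held-out set of human-transcribed clips, which is exactly why someone has to listen.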
I see nothing in this article to suggest the clips they're listening to are related to an always-on microphone.
They should not save recordings by default. They should only save recordings from people who opted in as beta testers, or who spotted a bug and want to send a bug report. But companies instead make every user a guinea pig by default and try not to disclose it clearly, hoping the user won't realise it.
It is interesting that while there are several manufacturers, all of them opt users in as testers by default, no matter which product you choose. So maybe this market needs slightly stricter regulation.
One major issue that you're glossing over is that Alexa is easily triggered, even when no one explicitly says "Alexa". So audio clips of private conversations are being sent to Amazon.
For an example of accidental triggering, look up news on "Alexa creepy laughter".
There are "smart-apartments" going up in my area that have Alexas and whatnot preinstalled. They're some of the nicer apartments in the area and I'd love to live there. However, it's against the leasing agreement (which is pretty firm) to remove / disable them. Insane.
Agreed. It's almost spooky how friends, family, hotels, and other places that used to be considered "private/semi-private" now have microphones listening all the time, in plain sight. Most people don't even notice or seem to care.
"A lot of people said" is not proof. What might be happening is this: Alice and Bob are friends. Bob spends some time researching a topic, say looking for a new car of brand X. He calls Alice and tells her he wants to buy a brand-X car. Some time later, Alice sees ads for brand-X cars. Now, it could be that FB was listening in on the phone call. But there's another possibility: FB knows that Alice and Bob are friends, and it knows Bob likes brand X, so it assumes that a friend of Bob's will be interested in brand X.
This is the magical thinking explanation. What is much more plausible is that they have such enormous datasets that they are able to train machine learning models to predict interests of people on the basis of any number of online behaviors that they record.
The behavior doesn't even need to "make sense" - it could be that people who log on at these times, live here, travel by train, like dog photos, and belong to certain groups are highly likely to be interested in a particular product. It doesn't matter; the system will learn these relations anyway. It might seem spooky and eavesdropping-y from a naive user's perspective, but the simple fact is that when you have that much data you don't even need to eavesdrop.
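The point about learning correlations from behavior alone can be sketched in a few lines. This is a toy logistic-regression model on entirely invented features; it is not how Facebook's systems actually work, but it shows that a "friend already likes brand X" signal alone is enough to predict interest, with no audio involved:

```python
import math

# Invented behavioral features: [evening logins, likes dog photos,
# has a friend already interested in brand X]. Label: clicked a brand-X ad.
data = [
    ([1, 0, 1], 1), ([1, 1, 1], 1), ([0, 0, 1], 1), ([0, 1, 1], 1),
    ([1, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 0], 0), ([1, 1, 0], 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(data)
# A new user whose only known signal is "a friend already likes brand X":
print(sigmoid(w[2] + b) > 0.5)  # True: interest predicted without any audio
```

The model has simply learned that the friendship feature correlates with the label, which is the Alice-and-Bob scenario above without any microphone.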
Seems like a false equivalence. First, a corporation is not the same as the government. Second, the government wiretapping your house is a violation of your constitutional rights, whereas you freely choosing to add a smart device into your home is not.
Shouldn't people be having fun with or trolling these always-listening systems by speaking gibberish, in tongues or reciting custom sea shanties that reenact purely fictional accounts of high crimes on the open seas?
It’s called “supervised” machine learning for a reason...
The important part is that tens of millions of people use Alexa every day, and the utterances are anonymized before being used as training data, so you don't know who said what, just that somewhere someone said "blah".
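As a sketch of what "anonymized before being used as training data" could mean in practice (this is an assumption about the general technique, not a description of Amazon's actual pipeline, and all field names are invented): direct identifiers are dropped and the account ID is replaced with a salted one-way hash, so an annotator only sees that *someone* said something:

```python
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # in a real system, rotated per annotation batch

def anonymise(record: dict) -> dict:
    """Drop direct identifiers; replace the account ID with a salted hash."""
    pseudonym = hashlib.sha256(SALT + record["account_id"].encode()).hexdigest()[:12]
    return {
        "speaker": pseudonym,           # stable within a batch, not a real identity
        "audio_ref": record["audio_ref"],
        # name, address, device serial, etc. are simply not copied over
    }

sample = {"account_id": "user-8675309", "audio_ref": "clip-001.wav",
          "name": "Alice", "device_serial": "SN-123"}
print(anonymise(sample))
```

Worth noting that a salted hash is pseudonymisation, not true anonymisation: the voice in the clip can itself be identifying, which is part of why these stories cause concern.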
No joke, part of me believes that bathrooms (and kitchens, to a lesser degree) are the first places voice-controlled smart home tech will really take off. My place is about as smart as it can be today but it's still mostly a novelty.
Once people can perform actually useful tasks with their voice - "hey siri, turn the shower on to 40 degrees" or "alexa, preheat the oven to 300" - while their hands are full doing something else it'll kickstart the whole field.
It is in my opinion absolutely unacceptable that this submission originally used the article's title and that it was subsequently changed, presumably to defuse it. Someone needs to clarify when the norm of using the article title applies, when it's thrown out, why it's thrown out, and who gets to decide.
In my opinion, the only sensible approach is for the title policy to be unilaterally enforced. Any departure from it will invariably involve someone’s subjective ‘political’ stance on a matter.
As it stands, it looks as if someone at Amazon applied pressure to have this changed. I really hope that isn’t the case because it’s almost too shady to be believed.
I agree with you to a certain extent. I think the problem with titles taken directly from the article is that they are often bait.
The writers of those articles purposefully leave key information out to grab people's attention and, likely, force them to read the entire article to find the missing information and fulfill the title's enticement.
In this case I think the change of title is definitely defusing the article, but it's also giving the key information that was left out in the bait title.
I agree with you, because changing the original title feels like a disruption of the discourse and opens a door to bias for whoever changes the title. But I also believe that bait titles erode the quality of the content and make it harder to consume and evaluate information. It's a hard problem.
Don't the people developing these speech/AI methods generally have advanced degrees?
In that case, shouldn't they be aware, from the same grad school training that prepared them for this work, that a (genuine) human subjects board would require informed consent for this, at the least?
I accept that Amazon, Google, etc. are likely listening to me with computer AI, but the fact the employees could pass these around causes some worry. It's like when I worked my first job at AOL and people would read Rosie O'Donnell's mail and chats. The human element is more troubling than the AI.
I was reading a while ago about how many nukes we set off in the 50s and 60s and thought: what are we doing today that future generations will think "what were they thinking?"
Surveillance capitalism was the first thing to come to mind.
Amazon products are the worst if you are concerned about privacy (their tablets are awful)... Google probably comes at a close 2nd and Huawei maybe 20th? (that's a wild guess but Amazon is hard to beat)
Wow - really? I'd say that Facebook is far, far, far worse than Amazon. And Google is certainly worse as well. Look at the data collection on Android phones, for example.
At least Amazon, from day 1 of the Echo/Alexa launch, noted clearly (not in small print, but right there in pictures on the product detail page) that the user's voice was going up to the cloud. And provided a microphone-off switch right on top of the device. Where's the mic-off button on your (typical) Android or iOS phone?
https://twitter.com/matthewstoller/status/111599165366219571...
> https://en.m.wikipedia.org/wiki/Sorry,_Wrong_Number
It's everything after the product thinks you said "Alexa".
If they can't do it, maybe their product shouldn't exist yet.
Unlike location history, where I understand the use case, and maybe search history. But all of those should be disabled by default.
I unplugged it for one flyin.
The housekeeper plugged it back in while I was out.
I unplugged it again.
Why in the world would I want any of these smart speakers?
A lot of people say Facebook ads suggest products or services they discussed in phone calls.
If you have an Echo or Kindle Fire TV, I guess it could easily be reproduced if Amazon really is listening to and analyzing your voice.
Today: "Hey, wiretap, do you have a recipe for pancakes?"
https://twitter.com/andreacoravos/status/999761670540025856?...
Guys I think we lost to the tech overlords.
Do they have informed consent?
The modern Western conceptions of privacy are relatively anomalous from a historical perspective. There's nothing to stop them from changing again.
https://youtu.be/LLCF7vPanrY
It shows every nuke set off from 1945 to 1998.