I like the forced UX of typing something, though "continue" might be glossed over. It would be an interesting study to determine whether typing "I know the risk" is a better safety mechanism for users than "continue" (it can be A/B tested for fewer pass-through events).
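To make the A/B test concrete, here's a rough sketch of how you'd compare pass-through rates between the two warning variants with a two-proportion z-test. The counts are made up for illustration; only the stdlib is used.

```python
import math

def two_proportion_ztest(passes_a, n_a, passes_b, n_b):
    """Z-test for a difference in pass-through rates between two variants."""
    p_a, p_b = passes_a / n_a, passes_b / n_b
    p_pool = (passes_a + passes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: users who clicked through the warning in each arm.
# Arm A shows "continue", arm B requires typing "I know the risk".
z, p = two_proportion_ztest(420, 10_000, 310, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is small, the difference in how often users blow past the warning is unlikely to be noise, which is exactly the "fewer pass-through events" signal you'd be looking for.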
Because you associate it with an error that you have no control over? I find that I'm tuned to recognise patterns of behaviour, so when the patterns look similar to other things, but aren't the same, it's quite confusing.
I just can't shrug off the thought that the manual review approach is a lost game in the long run. It's a process that requires skilled humans and can't be fully automated, while generating malicious code perfectly well can be.
I think of it from a different perspective: you combine humans and computers to do better than either could do alone.
Start all-manual. Perhaps you only do it with a subset of applications: pay an extra fee and you get "certified" with special app placement. Then you look for the people who are best at finding issues, pair them with programmers, and build tools for the things that are grunt work for them.
Build more and more tools, and you pull more and more people into the program as you build more and more intelligence into the machine.
Let the humans do the NP-hard portions. I'm sure this is what Apple has to be doing behind the scenes.
Tasks of classifying things (in this case into "approved" or "rejected") that humans can routinely do but machines find difficult are areas where ML shines.
Human reviewers today, but once the training set is large enough you can start to let computers take over, with human reviewers handling the lower-certainty cases until the certainties rise further.
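That routing idea can be sketched in a few lines: the model decides only when it is confident, and everything in the uncertain middle goes to a human queue. The "classifier" here is a toy stand-in with made-up features and weights, not a real malware model.

```python
def classify(app_features):
    """Toy stand-in for a trained model: returns P(malicious) in [0, 1]."""
    score = (0.9 * app_features["requests_all_permissions"]
             + 0.6 * app_features["obfuscated_code"]
             - 0.4 * app_features["established_developer"])
    return min(max(score, 0.0), 1.0)

def route(app_features, lo=0.1, hi=0.9):
    """Auto-decide only at the confident extremes; queue the rest for humans."""
    p = classify(app_features)
    if p >= hi:
        return "auto-reject"
    if p <= lo:
        return "auto-approve"
    return "human-review"   # the lower-certainty cases go to people

apps = [
    {"requests_all_permissions": 1, "obfuscated_code": 1, "established_developer": 0},
    {"requests_all_permissions": 0, "obfuscated_code": 0, "established_developer": 1},
    {"requests_all_permissions": 1, "obfuscated_code": 0, "established_developer": 1},
]
for app in apps:
    print(route(app))
```

As the training set grows you tighten `lo` and `hi` toward each other, shrinking the human queue without ever fully removing it.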
They already do manual review for many of their high-serving ads, and people shouldn't shy away from some human intervention in these processes.
AI and machine learning are most effective these days when they help assist people (flagging potentially malicious code, bubbling up anomalies, etc.), and it isn't that expensive to get a pair of eyeballs to double check conclusions!
It actually can be automated to quite some degree. This is basically what's called an expert system: you could build an automated system for these reviews by encoding the domain knowledge of human reviewers. It might never be 100% accurate and might require human intervention from time to time, but a high level of automation of this process can be achieved.
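In the spirit of that idea, here's a minimal expert-system sketch: review rules written down by human experts, evaluated mechanically, with ambiguous cases escalated. The rule names, thresholds, and app fields are all illustrative, not any real store's policy.

```python
# Each rule is (human-readable reason, predicate over the app submission).
RULES = [
    ("requests unrelated OAuth scopes",
     lambda app: "gmail.readonly" in app["scopes"] and app["category"] == "game"),
    ("redirect URI host is a bare IP",
     lambda app: app["redirect_uri"].split("//")[1].split("/")[0].replace(".", "").isdigit()),
    ("developer email unverified",
     lambda app: not app["dev_email_verified"]),
]

def review(app):
    """Apply every expert rule; escalate single hits, reject multiple hits."""
    fired = [name for name, rule in RULES if rule(app)]
    if len(fired) >= 2:
        return "rejected", fired
    if fired:
        return "needs-human-review", fired   # the occasional human intervention
    return "approved", fired

verdict, reasons = review({
    "scopes": ["gmail.readonly"],
    "category": "game",
    "redirect_uri": "http://203.0.113.7/cb",
    "dev_email_verified": True,
})
print(verdict, reasons)
```

The appeal of this shape is that the knowledge lives in a legible rule list the human reviewers themselves can audit and extend, rather than in an opaque model.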
Current machine learning techniques rely on being trained with manually generated data. Google's virtual assistant learned to understand voice because Google set up Google 411 and had millions of participants train it. Google developed OCR models by setting up Google reCAPTCHA and having millions of people train it.
Manual prompts to users are a great way to develop training data, and being able to distribute them at scale to millions of users means you can build a training set in very short order.
On the contrary, the lack of human review is why both the Play Store and the Chrome Web Store (especially this one) are dumpster fires with rampant malware.
Automated tasks are not good at outsmarting humans. When you want to review a human's work for security, you need humans somewhere along the process.
Automation can help those humans do their jobs, but it's simply not a solution here.
But they still don't let you create app-specific passwords/tokens without enabling 2FA. How they think enabling "less secure apps" is better is beyond me. Trying to force an office full of luddites into 2FA does not go down well.
No point in OAuth apps if Google has access to it all anyway. I'd rather pay for my email service than use Google, whose source of revenue is directly in conflict with my interest in privacy and security.
I highly recommend protonmail.com. It has all the bells and whistles, and its major feature is user privacy and security.
Google has become the judge, the jury and the executioner of the internet. Recently a malicious user embedded an image from a site that is on Google's Safe Browsing list in a forum that is itself embedded on a third party site. This nuked a popular third party site where the forum is embedded: it is now flashing red (malicious software detected) in Chrome.
I was wondering how HN would spin this into Evil-Google. It's just tiring at this point. This is a perfectly valid security safeguard that protects their users.
londons_explore | 8 years ago:
If an app takes days to make, requiring 5 minutes extra review effort to get it whitelisted seems fine.
pkamb | 8 years ago:
User types "continue"
kyrra | 8 years ago:
https://www.theverge.com/2017/5/3/15534768/google-docs-phish...
https://news.ycombinator.com/item?id=14258918