deviationblue's comments

deviationblue | 7 years ago | on: Bullshit-sensitivity predicts prosocial behavior

It's telling that the profound statements are inherently prosaic. So are they really 'profound', or are they categorized that way because they can be tied to some kind of natural, observable, or relatable phenomenon? I guess that doesn't matter, since we're tying this to prosocial behavior; the idea here is to identify what can induce the feeling of 'birds of a feather' (at least, that's my take on it).

The bullshit statements were easy to pick out because they were too abstract or required lateral thinking.

deviationblue | 7 years ago | on: Why we downplay Fermi’s paradox

This cannot be overstated. Also, we are treating this planet like we've got options elsewhere. This is setting people up for a rude awakening, whenever that may be down the line.

deviationblue | 7 years ago | on: The End of Employees

I am not sure how viable this hiring model is for companies that want to develop bleeding-edge IP. Speaking only about development jobs: contractors are nomadic; they're not tied or married to your success or failure. From a developer's POV, that's not a bad thing. You can just come in, do your thing, and move on to the next great adventure. From the employer's side, each time you hire, you're training someone new to become familiar with your code base and your company's intricacies and peculiarities, which takes time, effort, and learning from failure, and that time may not be available. The end result is that you may end up with a mishmash that doesn't work well together, because the people who made it are gone or have moved on. Also, developing is mentally exhausting. I'm not sure what dev, unless they're really desperate, would want to indulge this and expend that kind of mental effort for less pay or fewer benefits. Or maybe the company only hires contractors for the less innovative stuff, like CRUD jobs.

deviationblue | 7 years ago | on: Learning Dexterity

Well, if you're thinking of sentient AI beings, then no: they're still a long way off. Unless we can give any meaning to why a robot should do something, for example, have and use this kind of dexterity, it's all mechanical tricks. Cool tricks, nonetheless.

deviationblue | 7 years ago | on: IBM's Watson recommended 'unsafe and incorrect' cancer treatments

It's not dangerous and unethical if humans are there to pass final judgment on the answers. I was saying we can get to a point where we might not even need that. This technology is in development, and it has a lot of potential it can, and possibly will, reach. Stories like these are good for caution, but that doesn't mean we shouldn't use these tools at all.

deviationblue | 7 years ago | on: IBM's Watson recommended 'unsafe and incorrect' cancer treatments

I suggest holding off on the knee-jerk reaction until you've read the whole post. I never said these tools will never be worthy of the trust and performance we attribute to humans, but at their current point there are some tasks they're not good at, because either the tool is limited by design or hardware, or we just haven't found the right solution yet.

deviationblue | 7 years ago | on: IBM's Watson recommended 'unsafe and incorrect' cancer treatments

We're putting a lot of faith in tools that can best be described as immature. I don't think it's out of the question to get A.I. (not speaking of Watson, which is A.I.-adjacent) to the point where it can perform flawlessly at human-intelligence tasks. The point is to have these systems perform better than their human counterparts (at least, I think that should be the aim for something like Watson for Health). The people who built these things are still learning themselves. As we make more progress in technology and AI, I don't doubt that we could break that barrier. There may be an upper limit, because maybe the human brain is incapable of solving some problems, but even there I think it's not difficult to imagine workarounds. Simply put, I don't think the answer is to assume these problems could never be solved.

Additionally, human-assisted A.I. is not the solution; it's a non-answer to the problem of creating systems that can think and perform at human levels of intelligence. It's okay to admit that we don't yet have the ability to make these things, but it's disingenuous to believe that human involvement in helping computers get to the right answer is itself the right answer. Though yes, we need it right now to move things along where they otherwise might stand still.

deviationblue | 7 years ago | on: YC’s 2018 Summer Reading List

Interesting choice, Wizard of Oz. One question I've always wondered about: what do you do about narcissists who think someone else is the narcissist? Also, a funny thing I've seen happen at work: there was someone whom one (but influential) person didn't like because of office politics, and they wanted that person dismissed in social circles around the office. Easy job for them: label them a narcissist. Honestly, everyone is a little selfish, so it's not hard to pick and choose behavior from any one person and make it fit into that narcissism box.