inverse_pi's comments

inverse_pi | 8 years ago | on: Tesla Was Kicked Off Fatal Crash Probe by NTSB

Tesla's Autopilot system is no different from a self-driving car with a safety driver and should be subject to the same regulation. In a self-driving car (Waymo, Cruise, etc.), the driver turns on autonomous mode when they think it's safe to do so. All drivers are thoroughly trained, for their own safety and that of other people on the road. Companies must obtain permits to put cars on the road and report annually on the safety of the system. I'm not comfortable driving on the road knowing that Tesla's drivers are not trained and that the company is not subject to the same laws as self-driving systems, even though its drivers can turn on Autopilot ANYTIME they want and the car can do WHATEVER.

inverse_pi | 8 years ago | on: Internal Facebook posts of employees discussing leaked memo

I'm opening a can of worms by saying what I'm about to say, but here goes nothing: why are people criticizing individual companies for "growing at all costs"? Every company was once a start-up fighting to survive; every large company was once medium-sized and had to fight the Googles and Amazons of its day. NOT trying to grow at all costs is not the Nash equilibrium: if they don't, they'll lose. So at what point should a company stop trying to grow at all costs? When it has won every battle, like Google? So why are we looking into the past to criticize Facebook, Uber, and potentially many more startups that are "growing at all costs"? Of course, "all costs" shouldn't be taken too literally.

inverse_pi | 8 years ago | on: Uber’s Self-Driving Cars Were Struggling Before Arizona Crash

In science, a two-orders-of-magnitude difference is more indicative of a distribution mismatch between the test sets. The logarithmic shape of the learning curve suggests that, unless one approach is a complete random walk, two learning algorithms must, after a period of training, converge toward their maximum capacity. As a scientist, I refuse to be clouded by my prior judgment of the companies behind the approaches and must question the nature of the metrics and the various definitions that were used.
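A toy sketch of that convergence argument, using my own illustrative model (an exponential decay toward an error floor; none of these numbers come from the article): even when two systems start equally badly and learn at very different speeds, after enough training the gap between them shrinks to a small factor, nowhere near 100x. A persistent two-orders-of-magnitude gap therefore points at the metrics or test distributions, not the learning itself.

```python
import math

def saturating_error(t, floor, initial, tau):
    """Toy learning curve: error decays exponentially from `initial`
    toward an irreducible `floor` with time constant `tau`."""
    return floor + (initial - floor) * math.exp(-t / tau)

# Two hypothetical systems with different floors and learning speeds.
for t in [0, 10, 50, 200]:
    a = saturating_error(t, floor=0.01, initial=1.0, tau=20)
    b = saturating_error(t, floor=0.02, initial=1.0, tau=40)
    print(f"t={t:3d}  A={a:.4f}  B={b:.4f}  ratio={b / a:.1f}x")
```

By t=200 both curves sit near their floors and the ratio is only a few x, illustrating why a sustained 100x gap is suspicious under this model.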

inverse_pi | 8 years ago | on: Evolution Is the New Deep Learning

Why does the existence of such problems disprove the existence of a mathematical foundation? A well-founded mathematical theory would prove/predict/explain why such problems don't "fit" the "structure of NNs", with precise lower/upper bounds. Anything that works, and especially everything that doesn't work, must have an explanation. God doesn't play dice.

inverse_pi | 8 years ago | on: Evolution Is the New Deep Learning

My thesis was on Genetic Algorithms. I stopped and started working on Deep Learning mainly because, like you said, GAs don't really have a strong mathematical foundation. Ironically, no one can really explain mathematically why CNNs work either. I've heard a lot of hand-wavy arguments about local search, local sensitivity, etc., but no one has really proved anything meaningful. There are some papers showing that certain types of architectures are invariant under certain affine transformations, but all of them sound like attempts to convince ourselves rather than a firm mathematical framework to guide our research. Maybe that's why nature-inspired algorithms are getting attention: the community is throwing stuff at the wall to see what sticks. It's funny to me, because Genetic Algorithms were once frowned upon by the majority of the community. I guess the moral lesson is: stop chasing what's trendy.

inverse_pi | 8 years ago | on: Police Say Uber Is Likely Not at Fault for Self-Driving Car Fatality in Arizona

Here's my hypothesis: when it's physically impossible to stop (which sounds like it was the case here, with the car moving at 38 mph and a split second to react), the robot would plan to swerve around the obstacle instead of trying to stop. But the car can't turn 90 degrees instantly either, so swerving could also be dynamically infeasible; maybe it just has a higher probability of avoiding the collision.
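A back-of-the-envelope sketch of why swerving can succeed where braking can't. All the assumptions here are mine, not from the article: roughly 0.7g (7 m/s^2) for both hard braking and peak lateral acceleration, and 1.5 m of lateral clearance needed to miss the obstacle.

```python
import math

MPH_TO_MS = 0.44704
v = 38 * MPH_TO_MS          # ~17 m/s, the speed reported for the car
a_brake = 7.0               # m/s^2, assumed hard-braking deceleration
a_lat = 7.0                 # m/s^2, assumed peak lateral acceleration
offset = 1.5                # m, assumed lateral clearance to miss the obstacle

# Distance needed to brake to a full stop: v^2 / (2a)
d_stop = v**2 / (2 * a_brake)

# Distance traveled while building enough lateral offset:
# offset = 0.5 * a_lat * t^2, and the car covers v * t meters meanwhile.
t_swerve = math.sqrt(2 * offset / a_lat)
d_swerve = v * t_swerve

print(f"brake to a stop in {d_stop:.1f} m")
print(f"swerve clear in    {d_swerve:.1f} m")
```

Under these assumptions the swerve clears the obstacle in roughly half the distance a full stop requires, which is consistent with a planner preferring to steer around when stopping is infeasible.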

inverse_pi | 8 years ago | on: Introducing Uber Health, Removing Transportation as a Barrier to Care

What you probably don't know is that this is a problem with almost ALL tech companies. Early Facebook engineers could look up ANYONE's personal information; same with Google. There was an article a couple of weeks back accusing Lyft employees of doing the same thing. This problem is much bigger than one pre-IPO transportation company.

inverse_pi | 8 years ago | on: Uber and Waymo Reach Settlement

If you look at the settlement number (0.34% of Uber at the $72B valuation; yes, $72B, not the $45B of the recent down round, which would put it around $150M), then you'll see that Google did not expect to win this case.
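The arithmetic behind the two figures, using the 0.34% stake and the $72B and $45B valuations the comment cites (my own calculation; the comment's "$150M" is a rounded version of the down-round number):

```python
stake = 0.0034  # 0.34% equity stake reported for the settlement

at_72b = stake * 72e9   # valuation used to headline the settlement
at_45b = stake * 45e9   # the recent down-round valuation

print(f"0.34% at $72B: ${at_72b / 1e6:.0f}M")  # $245M
print(f"0.34% at $45B: ${at_45b / 1e6:.0f}M")  # $153M
```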

inverse_pi | 8 years ago | on: Google A.I. researchers develop alternative architecture for neural networks

The idea might be cute, but the performance is not there yet. Specifically, they were able to achieve state-of-the-art performance on MNIST but got 10.6% test error on CIFAR-10, which is comparable to the state of the art of 4 years ago (and if you're in the field, 4 years is like a century). It's important to stress that there's ABSOLUTELY NO theory backing any of this, so everything we're doing, including this idea of capsules and dynamic routing, is brute-force trial and error. Even though the idea is cute, there's still A LOT to be proven for this method. So when I see all these articles, I feel a little uneasy.