top | item 17469032

robius | 7 years ago

What I find unreasonable is doing all this without knowing what the model is doing. It's blind with no way to steer and correct it.

That is what feed forward networks and back propagation do for us. So why do we keep using them?

Then there's the statistics of it all... what are we actually modeling? 'The real world', you say? Think again.

Data has to be changed and manipulated into i.i.d. form, or the algorithms won't work. How does an independent set of random variables give us a model of the actual dataset which is a very limited representation of the real world? It doesn't. It's modeling something else.

Okay, why don't we take dependence into account? Surely that would represent the real world better. Good question! (Shirley has nothing to do with it.)

It's because there is no formal definition of dependence in statistics. Let that sink in for a minute.

So the math needs work, statistics needs a revolution, and then we can begin to change AI enough for it to finally start making sense. Focus on explainable algorithms and on the actual ability to validate that what models generate makes sense, is not unlawfully biased, and has no outliers that will cause harm.

There appears to be only one company who has something like this. But few actually care.

nerdponx|7 years ago

> It's because there is no formal definition of dependence in statistics. Let that sink in for a minute.

What? Statistical dependence (of random variables) is defined clearly and precisely.
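For reference, a standard statement of that definition (sketched here for two random variables; the event version is analogous):

```latex
% X and Y are independent iff, for all measurable sets A and B,
P(X \in A,\; Y \in B) = P(X \in A)\,P(Y \in B).
% Equivalently, the joint distribution factors as a product:
% P_{(X,Y)} = P_X \otimes P_Y.
% "Dependent" simply means not independent.
```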

> Data has to be changed and manipulated into i.i.d. form, or the algorithms won't work.

Neural networks don't use the iid assumption.

I downvoted you because it seems like you don't really know what you're talking about and you're currently the top post in the thread. Please don't spread misinformation.

srean|7 years ago

Strongly agreed. It seems robius really is clueless when talking about modeling independence or the lack of it.

avaku|7 years ago

Agree. I can't downvote, so I just agree :)

etaioinshrdlu|7 years ago

I would try to remember... just because you don't like it doesn't mean it doesn't work. Deep learning is creating an awful lot of actual value right now, and I think we're just getting started.

kornish|7 years ago

Which company, and what do they have?

robius|7 years ago

The company is a small startup with an amazing breakthrough called Optimizing Mind.

They have magical ways of 'explaining' black box models.

But it's not what DARPA is pushing (where the box remains black); rather the opposite: illuminating what's inside the box, making it a transparent, open box. So much so that you can edit the models they make by hand, since they make sense (to mere humans). That has rather immense implications.

Here's their crappy website: https://optimizingmind.com

wadkar|7 years ago

> So the math needs work

Finally! I thought I was alone (and stupid) for thinking like this.

Is there any literature or any meta-work that discusses the notion of probability itself? What is expectation? What is dependence?

soVeryTired|7 years ago

> What is expectation?

There is a formal mathematical definition:

Let (\Omega, \mathcal{F}, P) be a probability space, and let X: \Omega -> S be a random variable taking values in some measurable space (S, \mathcal{S}).

Then the expectation is \int_\Omega X(\omega)\,dP.

In computer science terms, do an experiment with every possible random seed and average the outcome (set \Omega to be the set of all seeds, and set P to be the uniform measure on them).
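That seed-averaging picture can be sketched in a few lines of Python (the function and the die experiment here are illustrative, not from any library):

```python
import random

def expectation(experiment, seeds):
    """Estimate E[X] by running the experiment once per seed and averaging.

    `experiment` takes a seeded random.Random instance and returns a number;
    `seeds` plays the role of \\Omega, with the uniform measure over it.
    """
    total = 0.0
    for s in seeds:
        rng = random.Random(s)  # one "possible world" per seed
        total += experiment(rng)
    return total / len(seeds)

# Example: X = outcome of a fair six-sided die.
est = expectation(lambda rng: rng.randint(1, 6), range(10_000))
# est lands close to the true expectation, 3.5
```

With a finite seed set this is of course only a Monte Carlo approximation of the integral above, but it makes the measure-theoretic definition concrete.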

nerdponx|7 years ago

Any probability textbook would answer that.

I would be surprised if Khan Academy didn't cover at least expectation.

meikos|7 years ago

what do you mean by the notion of probability itself?

probability was mastered far before computers were a thing

ssivark|7 years ago

Er... maybe not that much reworking?