item 12288417

bazqux2 | 9 years ago

There is usually something wrong with the ontologies approach, as it rarely works. There are roughly two decades of evidence for this for anyone who cares to look, and five decades if you loosen the definition to include the family of logic and constraint programming (see AI Winter). There is nothing new about these ideas. It always looks and feels like it's going to work, which is why humanity has persisted with it for so long and will likely continue to do so for some time to come.

There is a whole generation of better techniques that have come out of machine learning that totally eclipse ontologies, and I know Palantir isn't using them. Their corporate culture isn't set up to foster that kind of applied research.

No one is advocating for fully automated approaches. I don't know where that notion came from.

My view is that Palantir is a consulting company pretending to be a tooling company, and their consultants are not worth the money they charge. Just one of many Silicon Valley-based frauds.


dredmorbius | 9 years ago

Is this ontologies within the field of AI not working, or more generally?

Do you have references to any specific discussions on this?

Curious as I'm doing some work of my own (well outside AI) in which developing ontologies strikes me as useful, though I'd prefer not falling into any well-worn traps.

(My use is largely coming up with useful descriptive models of otherwise hairy concepts.)

nickpsecurity | 9 years ago

What bazqux2 said is accurate. I'll go further and say that the kinds of work Palantir is involved in are mostly probabilistic, especially intelligence work. So, using models that require certainty or straight logic in areas rife with uncertainty and degrees of truth seems set up to fail outside easy inferences. One can encode the logical stuff in probability models, but it's harder to do the reverse. Hence, their underlying tech should be probabilistic, fuzzy logic, or something similar to get the best results instead of just some results.
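To illustrate the asymmetry (a minimal sketch of my own, not anything from Palantir's actual stack): a hard logical rule is just a probabilistic rule with conditional probability 1.0, so probability subsumes logic, while a soft "A suggests B" relation has no faithful Boolean encoding. The function name and numbers below are purely illustrative.

```python
# Law of total probability: P(B) = P(A)P(B|A) + P(not A)P(B|not A).
# A strict logical implication is the special case where the
# conditionals are 1.0 and 0.0.

def chain(p_a, p_b_given_a, p_b_given_not_a=0.0):
    """P(B) given P(A) and the two conditional probabilities."""
    return p_a * p_b_given_a + (1.0 - p_a) * p_b_given_not_a

# Strict logic: "if A then B", A known true -> B certainly true.
print(chain(1.0, 1.0))                 # 1.0

# Intelligence-style uncertainty: A is 70% likely, and A only
# implies B 80% of the time. Boolean logic cannot express this.
print(round(chain(0.7, 0.8, 0.1), 2))  # 0.59
```

Going the other way, collapsing the 0.59 down to a true/false value, throws away exactly the degrees-of-truth information that intelligence work runs on.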

As far as ontologies in general go, they have a mixed track record. They take a lot of work to create. Then they have to be mapped to real-world inputs and outputs. One way they got applied is in so-called business rules engines or business process management, which is like a subset of past ontology approaches. Here's a company that uses the real thing for enterprise software, with the Mercury language for the execution part:

http://www.missioncriticalit.com/development.html

Also, Franz Inc, of Allegro Common LISP, covers many of the same use cases as Palantir with their ontological tooling.

http://allegrograph.com/solutions-by-use/

So, there are definitely companies that have been using it for long periods of time for real-world use cases. Palantir just seemed to be mixing it with hype and secrecy to maximize their sale price later. ;)

bazqux2 | 9 years ago

I meant that they are generally not useful. Sometimes they are. It depends on the purpose, what you want to build, and who it's for.

Given that you're building a descriptive model, it would depend on whether you're working with facts or with probabilities. If it's facts, then ontologies should work fine; for probabilities, I'd recommend Bayesian techniques.
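By "Bayesian techniques" I mean updating a belief as evidence comes in, along these lines (a generic Bayes-rule sketch with made-up numbers, not any particular tool's API):

```python
# Bayes' rule: P(H|E) = P(H) * P(E|H) / P(E), where
# P(E) = P(H)P(E|H) + P(not H)P(E|not H).

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = prior * p_e_given_h + (1.0 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A fact-style claim (prior certainty 1.0) stays crisp under evidence.
print(bayes_update(1.0, 0.9, 0.2))          # 1.0

# A genuinely uncertain claim gets revised by the same evidence.
print(round(bayes_update(0.5, 0.9, 0.2), 3))  # 0.818
```

The fact case degenerates to ordinary logic, which is why the probabilistic route covers both kinds of descriptive model while the ontology route only covers the first.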

The inputs for these are usually small. From the sounds of it, you're generating the input yourself, so you should be safe.

argonaut | 9 years ago

Again, Palantir is not an AI company. They are a data visualization and analytics company. So all your perfectly fine points about ontologies and AI winter are not relevant.

bazqux2 | 9 years ago

It is an ontology company - see their website. This is how they derive their analytics and visualizations. So my points are relevant.