top | item 8510586

mqsiuser | 11 years ago

My professor's company (Ontoprise) went bankrupt in 2012. Not sure whether it has been dissolved entirely by now.

AI failed (again). I never understood where the "Intelligence" lies if the only thing you can do is infer: if A --> B and B --> C, then also A --> C ("we don't do anything else, since that wouldn't be logic" -- and then bloating it and calling it a "Reasoner").
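For what it's worth, the inference rule being mocked here is just transitive closure, which fits in a few lines of plain Python. A minimal sketch (the facts and names are made up for illustration, no actual reasoner library involved):

```python
def transitive_closure(pairs):
    """Naive fixpoint: keep adding (a, c) whenever (a, b) and (b, c) exist."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# A --> B and B --> C, therefore A --> C:
facts = {("A", "B"), ("B", "C")}
print(transitive_closure(facts))  # includes the inferred ("A", "C")
```

Real OWL reasoners of course do more than this (subsumption, consistency checking, etc.), but this is the flavour of rule the comment is pointing at.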

If you can't spin off quickly from academic ideas (like Google search did), it just becomes ongoing research binding masses of people to the wrong things (to pursue). Don't tell me they chose to... they were still influenced, only finding out later that it wasn't worthwhile.

Academia thought it was the next web, but it wasn't. Web 2.0 turned out to be the next web instead, leaving the semantic web in the dust.

"When I see the semantic web (of trust) be done (properly), this is basically when I can retire" (Tim Berners-Lee ~2004).

Just my (honest) thoughts (as s/w who spent a significant amount of time on RDF/OWL et al. at university).

bemused|11 years ago

no idea why this comment gets downvoted -> I came to the same conclusion, having done quite some research at university on this topic as well. The community does a great job of getting huge funding from the government/EU, but the results are mostly pathetic from a CS pov. E.g. I came across papers/PhD theses where people were fantasizing about all the great things you could do if automatically merging ontologies were feasible, without the slightest understanding of computational complexity or of the semantics of natural language in general.

that said, I still see the advantage of semantic-web-style technology in cathedral-style environments, e.g. corporate knowledge DBs or Wikidata, IF you can afford the bloat. Most of the time it's much more straightforward to just use your own schema and call it a day (like the KDE folks did this year, finally giving up on getting their RDF database to perform reasonably well and going back to a relational model for desktop search).

jerven|11 years ago

Not sure KDE went back to a relational model. The main change, as I see it, was going from a single central database (Virtuoso) that held a copy of everything to keeping the data where it is as much as possible and only storing copies where needed for performance.

Virtuoso, while an impressive DB, is not the most stable or resource-friendly datastore for desktop use.

This decentralised data storage actually makes a lot of sense, and I hope to work on something similar for life-science data, except that the unifying API will be SPARQL instead of a C++ API. (A single C++ API does make a lot of sense for the KDE project, but it does not for life-science data.)