top | item 42914456

michaelgiba | 1 year ago

Gemini has had this for a month or two, also named "Deep Research" https://blog.google/products/gemini/google-gemini-deep-resea...

Meta question: what's with all of the naming overlap in the AI world? Triton (Nvidia, OpenAI) and Gro{k,q} (X.ai, groq, OpenAI) all come to mind

shihab | 1 year ago

From the creator of Triton (OpenAI):

"PS: The name Triton was coined in mid-2019 when I released my PhD paper on the subject. I chose not to rename the project when the "TensorRT Inference Server" was rebranded as "Triton Inference Server" a year later since it's the only thing that ties my helpful PhD advisors to the project."

samplatt | 1 year ago

>Meta question

I think you have to prefix the query with "@Meta AI", hope this helps

stonogo | 1 year ago

It's a sort of unofficial trade association in which they coalesce on specific redefinitions of terms to serve their sales and PR efforts. First they came for "intelligence," then "open source," then "reason," and it will continue. Any word the PR wants but the product can't live up to gets redefined -- "grok" is a perfect example, since in the original sci-fi novel it meant "total understanding." The mythological Triton ruled the deeps, so the "deep learning" sales copy immediately co-opted it.

albert_e | 1 year ago

Also, "accuracy" as a measure of a model's performance used to mean something objective in the traditional ML world.

Now, with LLMs, it's whatever human evaluators feel about the LLM's output?

svara | 1 year ago

> Gemini has had this for a month or two,

Would have loved to try it when they released it, but I'm apparently in the wrong country. I think it's not available outside the US (?). OpenAI and DeepSeek have no such issues. It's a bummer, really: I'm happy to pay for this, but they don't want me to.

hmottestad | 1 year ago

OpenAI Deep Research isn’t available in Norway, at least, or basically the rest of Europe :(

kavalerov | 1 year ago

I'm afraid Gemini's version is not really very "deep" - it surfaces a lot of information, but at a quite superficial level. OAI's version seems to take that one step further toward proper depth.

In our experience it's pretty hard to force an LLM to do something in proper depth, and OAI's deep research definitely feels like one of the first examples from a big lab of how this can be done. What we typically see is that it's not even the "agent" part that is hard, but how to force the model not to "forget" to go deep...

james_promoted | 1 year ago

I've always thought the Triton situation was intentional, since the name isn't generic and the companies are stepping on each other's toes here (Nvidia's Triton simplifies owning your inference; OpenAI's Triton erodes the need for familiarity with CUDA). I couldn't figure out who publicly used the name first, though.

chabes | 1 year ago

> what's with all of the naming overlap in the AI world? Triton (Nvidia, OpenAI) and Gro{k,q} (X.ai, groq, OpenAI) all come to mind

They seem to be OK with outsourcing any and all creativity to a language model, so it's not surprising that they can't come up with unique names themselves.