top | item 40475983

nicklecompte | 1 year ago

There has been a lot of excitement recently about how using lower-precision floats only slightly degrades LLM performance. I am wondering if Google took those results at face value in order to offer a low-cost, mass-use transformer LLM, but didn’t actually test it, since according to the benchmarks (lol) the lower precision shouldn’t matter very much.
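To make the "lower precision" point concrete, here is a minimal sketch of the effect being discussed, using NumPy's float16 as a stand-in for whatever quantization format Google actually uses (the specific format, and the random weight/activation vectors, are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight and activation vectors, as a stand-in for one
# layer's dot product inside a transformer.
w = rng.standard_normal(4096).astype(np.float32)
x = rng.standard_normal(4096).astype(np.float32)

exact = np.dot(w, x)

# Round-trip through float16 to simulate storing weights/activations
# at lower precision, then compute the same dot product.
quantized = np.dot(w.astype(np.float16).astype(np.float32),
                   x.astype(np.float16).astype(np.float32))

rel_err = abs(exact - quantized) / abs(exact)
print(rel_err)  # small but nonzero
```

The per-operation error really is tiny, which is what the benchmarks measure; the open question is whether those small errors compound into visible failures across billions of operations and long generations, which a headline benchmark score won't necessarily reveal.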

But there is a more general problem: Big Tech is high on their own supply when it comes to LLMs, and AI generally. Microsoft and Google didn’t fact-check their AI even in high-profile public demos; that strongly suggests they sincerely believed it could answer “simple” factual questions with high reliability. Another example: I don’t think Sundar Pichai was lying when he said Gemini taught itself Sanskrit, I think he was given bad info and didn’t question it because motivated reasoning gives him no incentive to be skeptical.

flyingspaceship | 1 year ago

Well yeah, imagine how much money there is to be made in information when you can cut literally everyone else involved out, take all of the information, sell it with ads, and only give people a link at the bottom, if that is even needed at all.