top | item 45919703


Marshferm | 3 months ago

I take great offense at being called a bot, especially considering any glance can spot my numerous typos. And the weakness of your search capability: that quote is from a discussion of Ev's following the publication of this paper:

https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/

And by the way, that's not a blanket statement; it's an empirical statement that wipes away quite a bit of LLM relevance. I'd say it destroys the approach.

Do the research. An apology is in order.


CrackerNews | 3 months ago

That exact quote does not appear in the paper. You cannot attack me for your own lack of due diligence.

This paper does little to dismiss LLMs. LLMs could use a different medium than text, and that would not take away from their underlying mathematical models based on neuroscience. LLMs understand language representations only implicitly, through statistical analysis, and that may instead show a commonality with how the human brain thinks, as described in this paper.

I will not apologize for how you keep pushing an agenda despite how poorly supported it is. I have tried to be intellectually honest about the state of the industry and its flaws. I would implore you to instead do the research about LLMs so you can better refine your critique of them.

Marshferm | 3 months ago

Your intellectual insecurity doesn't mean I owe due diligence for existing information, nor does it give you any standing to demand an apology, especially since we evaluate software for special effects in high-budget streaming. And none of our research indicates that LLMs, RL, or frontier approaches will work in spatially specific ways. It's a wash; we can see it.

https://docs.google.com/document/d/1cXtU97SCjxaHCrf8UVeQGYaj...