top | item 45921273

CrackerNews | 3 months ago

That exact quote does not appear in the paper. You cannot attack me for your lack of due diligence.

This paper does little to dismiss LLMs. LLMs could use a medium other than text, and that would not take away from their underlying mathematical models, which draw on neuroscience. LLMs understand language representations only implicitly, through statistical analysis, and that may instead point to a commonality with how the human brain thinks, as described in this paper.

I will not apologize when you keep pushing an agenda despite how poorly supported it is. I have tried to be intellectually honest about the state of the industry and its flaws. I would implore you to do the research on LLMs yourself so you can better refine your critique of them.

Marshferm | 3 months ago

Your intellectual insecurity doesn’t mean I offer due diligence for existing information, nor does it give you any protocol to shift apologies especially since we evaluate software for special effects in high budget streaming. And none of our research indicates LLMs RL or frontier approaches will work in spatially specific ways. It’s a wash, we can see it.

https://docs.google.com/document/d/1cXtU97SCjxaHCrf8UVeQGYaj...

CrackerNews | 3 months ago

You appear to be using random words and phrases to intentionally obfuscate the lack of substance in your responses.

There is a baseline expectation of how quotes and citations are supposed to work in Western intellectual circles. The fact that you do not know these conventions and refuse to accept them means you are either unfamiliar with Western academia, an intellectually dishonest Internet troll, or an LLM bot.

Spatial reasoning and world models are a research topic because elements of them were found in video and agentic models, and investors want to refine both further.

I do not have the time to read through this entire Google doc, but from what I have skimmed, the most substantial critiques come from academia being honest about the current state of AI and its limitations. That is fine.

However, the opening paragraphs aren't impressive. Language is arbitrary, yes, but it must also be intelligible to other humans. It is like a canvas on which to pattern-match and construct all sorts of inductive reasoning. There is little here to explain why pattern-matching math would be inherently incapable of pattern-matching written language. This reads like a basic understanding of postmodernist philosophy offered as proof that math fails when applied to a socially constructed reality. Yet philosophy and the other social sciences do not surrender as if their fields were fundamentally flawed. They make do and continue matching patterns to make observations about social reality.

The burden is ultimately on you to prove that the limitations of current AI/LLMs cannot be overcome, or that something makes world models or spatial reasoning impossible. Simply having a mountain of text to read is not an argument. There has to be some summary or point that serves as the thrust of your position. As they say, brevity is the soul of wit.