gmaster1440 | 1 year ago

The paper argues against using LLMs for military strategy, claiming "no textbook contains the right answers" and strategy can't be learned from text alone (the "Virtual Clausewitz" Problem). But this seems to underestimate LLMs' demonstrated ability to reason through novel situations. Rather than just pattern-matching historical examples, modern LLMs can synthesize insights across domains, identify non-obvious patterns, and generate novel strategic approaches. The real question isn't whether perfect answers exist in training data, but whether LLMs can engage in effective strategic reasoning—which increasingly appears to be the case, especially with reasoning models like o1.

ben_w | 1 year ago

LLMs can combine cross-domain insights, but the insights I've seen them produce, in the models I've used, are around the level of a second-year university student.

I would concur with what the abstract says: incredibly valuable (IMO the breadth of easily discoverable knowledge is a huge plus all by itself), but don't put them in charge.

gmaster1440 | 1 year ago

The "second year university student" analogy is interesting, but might not fully capture what's unique about LLMs in strategic analysis. Unlike students, LLMs can simultaneously process and synthesize insights from thousands of historical conflicts, military doctrines, and real-time data points without human cognitive limitations or biases.

The paper actually makes a stronger case for using LLMs to enhance rather than replace human strategists - imagine a military commander with instant access to an aide that has deeply analyzed every military campaign in history and can spot relevant patterns. The question isn't about putting LLMs "in charge," but whether we're fully leveraging their unique capabilities for strategic innovation while maintaining human oversight.

beardedwizard | 1 year ago

A language model isn't a model of strategic conflict or reasoning, though its training data may contain text related to these concepts. I'm unclear why you would use the LLM to reason (and it seems the paper agrees) when there are better models for reasoning about the problem domain; the main value of the LLM is its ability to consume unstructured data to populate those other models.

nyrikki | 1 year ago

You are using a different definition of "strategic" than the DoD does; what you are describing is closer to tactical decisions.

They are typically talking about org-wide scope and long-term direction.

They aren't talking about the planning that hides under the label "strategic planning" in the business world.

LLMs are powerful, but they are by definition past-focused, and they are still in-context learners.

As they covered, hallucinations, adverse actions, unexplainable models, etc. are problematic.

The "novel strategic approaches" you describe would in this domain be tactics, not strategy, which is focused on the unknowable or the unknown-but-knowable.

They are talking about issues well past methods like circumscription and the ability to determine whether a problem can be answered as true or false in a reasonable amount of time.

Here is a recent primer on the complexity of circumscription, as it is a bit of an obscure concept.

https://www.arxiv.org/abs/2407.20822
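
For anyone who hasn't run into it: circumscription is a non-monotonic reasoning technique that prefers models in which designated "abnormality" predicates hold for as few things as possible. A minimal propositional sketch in Python (my own toy encoding, not from the linked paper):

    from itertools import product

    # Toy circumscription: "birds fly unless abnormal."
    # We minimize the extension of the abnormality atom 'ab',
    # letting 'flies' vary freely.
    ATOMS = ("bird", "ab", "flies")

    def satisfies(m):
        # Theory: bird holds, and (bird and not ab) implies flies.
        return m["bird"] and ((not (m["bird"] and not m["ab"])) or m["flies"])

    models = []
    for vals in product([True, False], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if satisfies(m):
            models.append(m)

    # Keep only models whose 'ab' extension is minimal: no other
    # model makes strictly fewer atoms abnormal.
    minimal = [m for m in models
               if not any(n["ab"] < m["ab"] for n in models)]

    print(minimal)  # [{'bird': True, 'ab': False, 'flies': True}]

Even this brute-force toy enumerates all 2^n truth assignments; the linked paper is about how expensive that minimization step is in general.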

Remember, finding an effective choice function is hard for non-trivial problems no matter what your domain is; setting a durable shared direction that can be communicated in the face of an unknowable future, and that can't be gamed or predicted by an adversary, is harder still.
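
To make the "can't be gamed" point concrete, here is a toy sketch (my own example, not from the thread) in the style of matching pennies: a deterministic choice function loses every round to an adversary who knows the rule, while a randomized one can't be exploited beyond chance:

    import random

    STRATEGIES = ["heads", "tails"]

    def deterministic_choice(options):
        # A fixed, knowable rule: always pick the first option.
        return options[0]

    def randomized_choice(options):
        # Mix uniformly: unpredictable even when the rule is public.
        return random.choice(options)

    def play(our_rule, rounds=10_000):
        losses = 0
        for _ in range(rounds):
            ours = our_rule(STRATEGIES)
            # The adversary knows our rule and simulates it; we lose
            # whenever their prediction matches our actual choice.
            prediction = our_rule(STRATEGIES)
            losses += (ours == prediction)
        return losses / rounds

    print("deterministic loss rate:", play(deterministic_choice))  # 1.0
    print("randomized loss rate:  ", play(randomized_choice))      # ~0.5

Randomizing only dodges prediction, of course; it doesn't solve the harder problem of the direction being durable and shared.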

Researching what mission command is may help in understanding the nuances that get lost with overloaded terms.

The distinction between strategy and stratagem is also important in this domain.

paganel | 1 year ago

> but are by definition past-focused,

To add to that, and because the GP mentioned a "virtual" Clausewitz: human/IRL strategy itself has in many cases been too focused on said past, and because of that it has caused defeats for the adopters of those "past-focused" strategies. Look at the Clausewitzian concept of "decisive victory", which German WW1 strategists adopted and which ended up causing defeat for their country.

Good strategy is an art, the same as war; no LLM nor any other computer code will ever be able to replicate or improve on it.