gmaster1440|1 year ago

The "second year university student" analogy is interesting, but might not fully capture what's unique about LLMs in strategic analysis. Unlike students, LLMs can simultaneously process and synthesize insights from thousands of historical conflicts, military doctrines, and real-time data points without human cognitive limitations or biases.

The paper actually makes a stronger case for using LLMs to enhance rather than replace human strategists - imagine a military commander with instant access to an aide that has deeply analyzed every military campaign in history and can spot relevant patterns. The question isn't about putting LLMs "in charge," but whether we're fully leveraging their unique capabilities for strategic innovation while maintaining human oversight.

ben_w|1 year ago

> Unlike students, LLMs can simultaneously process and synthesize insights from thousands of historical conflicts, military doctrines, and real-time data points without human cognitive limitations or biases.

Yes, indeed. Unfortunately (or fortunately, depending on who you ask), despite this the actual quality of the output is merely "ok" rather than "fantastic".

If you need an answer immediately on any topic where "second year university student" is good enough, these are amazing tools. I don't have that skill level in, say, Chinese, where I can't tell 你好 (hello) from 泥壕 (mud hole/trench)*, but ChatGPT can at least manage mediocre jokes that Google Translate turns back into English:

问: 什么东西越洗越脏? 答: 水! (Q: What gets dirtier the more it washes? A: Water!)
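
If anyone wants to reproduce that kind of round trip, here's a minimal sketch using the official openai Python client. Note it's not my exact workflow (I used Google Translate for the return trip; this uses a second LLM call), and the model name and prompts are illustrative assumptions:

    # Round-trip sketch: have an LLM produce a Chinese joke, then
    # back-translate it to English with a second call. Assumes the
    # `openai` package (v1+) and OPENAI_API_KEY in the environment;
    # the model name below is an assumption.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    joke = ask("Tell a short pun-style joke in Chinese.")
    back = ask(f"Translate this joke into English: {joke}")
    print(joke)
    print(back)

The back-translation is just a cheap sanity check, not a substitute for actual Chinese skill, which is rather the point.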

But! My experience with LLM translation is much the same as with LLM code generation or GenAI images: anyone with actual skill in whatever field you're asking for support in can easily do better than the AI.

It's a fantastic help for anything you would otherwise hand to an intern, and that covers a lot, but it's not the right tool for every job.

* I assume this is grammatically gibberish in Chinese; I'm relying on Google Translate here: https://translate.google.com/?sl=zh-TW&tl=en&text=泥%20壕%20%2...

psunavy03|1 year ago

But the aide won't have deeply analyzed every military campaign in history; it will only spout off answers from books about those campaigns. It will have little to no insight into how to apply principles and lessons learned from similar campaigns to the current problem. Wars are not won by lines on maps. They're not won by cool gear. They're won by psychologically beating down the enemy until they're ready to surrender or open peace negotiations. Can LLMs get in an enemy's head?

ben_w|1 year ago

> Can LLMs get in an enemy's head?

That may be much easier for an LLM than all the other things you listed.

Read their socials, write a script that grabs the voices and faces of their loved ones from videos they've shared, synthesise a video call… And yes, they can write the scripts even if they don't have the power to clone voices and faces themselves.

I have no idea what's coming. But this is going to be a wild decade even if nothing new gets invented.

fragmede|1 year ago

Only if the enemy has provided a large corpus of writing and other data to train the LLM on.

JohnMakin|1 year ago

The person you are responding to seems to be promoting a concept that is frequently spouted here and elsewhere but, to me, lacks sufficient (or any) evidence: that AI models, particularly LLMs, are capable both of reasoning (or what we consider reasoning) about problems and of generating novel insights they haven't been trained on.

numpad0|1 year ago

> Unlike students, LLMs can simultaneously process and synthesize insights from thousands of historical

They can't. LLMs gloss over anything multivariate and prioritize the flow of words over hard facts. That makes sense given that LLMs are language models, not thinking engines, but it doesn't make them useful for serious (above "second year") intellectual tasks.

They don't have any such unique capabilities, other than that they come free of charge.

ben_w|1 year ago

Kinda. Yes they have flaws, absolutely they do.

But it's not a mere coincidence that history contains the substring "story" (nor that in German, both "history" and "story" are "Geschichte") — these are tales of the past, narratives constructed based on evidence (usually), but still narratives.

Language models may well be superhuman at teasing apart the biases that are woven into the minds writing the narratives… At least in principle, though unfortunately RLHF means they're also likely to sycophantically add whatever set of biases they estimate the user has.