top | item 47204773

emil-lp | 17 hours ago

[flagged]

small_model|16 hours ago

Well, it's the dominant and most successfully implemented AI. Would a comp sci course teach every failed computer architecture, or focus on the ones that are in wide use today?

gield|12 hours ago

Your analogy to computer architectures doesn't make sense unless you're comparing GPT-like LLMs to different LLM architectures like Mamba or RWKV. It indeed wouldn't make sense to teach Mamba or RWKV in an introductory AI or LLM course.

AI is much broader than LLMs alone. Computer vision, RL, classical ML, recommender systems, speech recognition, ... are still part of AI, just not very visible to the average consumer.

utopiah|12 hours ago

> most successful implemented AI

According to what? Spent money? Number of users? Outcomes and if so which ones?

suddenlybananas|16 hours ago

I think comp sci courses focus on fundamentals rather than what's popular. Besides, other kinds of AI are not "failures"; they have plenty of uses.

smokel|16 hours ago

Don't trip over words. The course covers quite a range of material that is useful outside of LLMs. It's an introduction.

axseem|17 hours ago

It really depends on the target audience, because a lot of people have no idea that what they're using is called an LLM, or that there are various types of generative AI.

gignico|17 hours ago

I think the problem is the underrepresentation of other branches of AI research: knowledge representation, automated reasoning, planning, etc.

These are important topics with important industrial applications, whose only downsides are that they aren't suitable for implementing friendly chatbots or for raising the stock prices of Silicon Valley companies.
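For readers who haven't seen these branches: classical planning, for instance, is search over world states under action preconditions and effects. A toy sketch (a hypothetical STRIPS-style example I made up for illustration, not from any particular course or library):

```python
from collections import deque

# Minimal state-space planner: breadth-first search over actions.
# States are frozensets of facts; each action is a tuple of
# (name, preconditions, facts added, facts deleted).

def plan(initial, goal, actions):
    """Return a shortest action sequence reaching a state that
    contains all goal facts, or None if no plan exists."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Toy domain: a robot picks up a key, walks to a door, unlocks it.
actions = [
    ("pick_up_key", frozenset({"at_key"}), frozenset({"has_key"}),
     frozenset()),
    ("go_to_door", frozenset(), frozenset({"at_door"}),
     frozenset({"at_key"})),
    ("unlock_door", frozenset({"has_key", "at_door"}),
     frozenset({"door_open"}), frozenset()),
]

print(plan({"at_key"}, {"door_open"}, actions))
# → ['pick_up_key', 'go_to_door', 'unlock_door']
```

Real planners (e.g. those built on PDDL) use heuristic search rather than plain BFS, but the core idea of reasoning from action models to goal states is the same.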

cubefox|11 hours ago

This is a perfectly reasonable take. It's quite outrageous that this was flagged.

jccx70|16 hours ago

[deleted]