ctbergstrom | 1 year ago

Yes!

First, thank you for the link about CoT misrepresentation. I've written a fair bit about this on Bluesky and elsewhere, but I don't think much, if any, of that has made it into the course yet. We should add this to lesson 6, "They're Not Doing That!"

Your point about humanities courses is just right and encapsulates what we are trying to do. If someone takes the course and engages in the dialectical process and decides we are much too skeptical, great! If they decide we aren't skeptical enough, also great. As we say in the instructor guide:

"We view this as a course in the humanities, because it is a course about what it means to be human in a world where LLMs are becoming ubiquitous, and it is a course about how to live and thrive in such a world. This is not a how-to course for using generative AI. It's a when-to course, and perhaps more importantly a why-not-to course.

"We think that the way to teach these lessons is through a dialectical approach.

"Students have a first-hand appreciation for the power of AI chatbots; they use them daily.

"Students also carry a lot of anxiety. Many students feel conflicted about using AI in their schoolwork. Their teachers have probably scolded them about doing so, or prohibited it entirely. Some students have an intuition that these machines don't have the integrity of human writers.

"Our aim is to provide a framework in which students can explore the benefits and the harms of ChatGPT and other LLM assistants. We want to help them grapple with the contradictions inherent in this new technology, and allow them to forge their own understanding of what it means to be a student, a thinker, and a scholar in a generative AI world."

globalnode | 1 year ago

I'll give it a read. I must admit, the more I learn about the inner workings of LLMs, the more I see them as simply the sum of their parts and nothing more. The rest is just anthropomorphism and marketing.

maccaw | 1 year ago

Funny, I feel the same way about humans.

mr_toad | 1 year ago

Current LLMs are not the be-all and end-all of LLMs, and chain-of-thought frontier models are not the be-all and end-all of AI.

I’d be wary of confidently claiming what AI can and can’t do, at the risk of looking foolish in a decade, or a year, or at the pace things are moving, even a month.

ctbergstrom | 1 year ago

That's entirely true. We've tried hard to stick with general principles that we don't think will readily be overturned. But doubtless we've been too assertive for some people's taste, and doubtless we'll be wrong in places. Hence the choice to develop not a static book but rather a living document that will evolve over time. The field is developing too fast for anything else.

With respect to what the future brings, we do try to address a bit of that in Lesson 16: https://thebullshitmachines.com/lesson-16-the-first-step-fal...

kykeonaut | 1 year ago

The post seems to be talking about the current capabilities of large language models. We can certainly talk about what they can or cannot do as of today, as that is largely evidence-based.

dullcrisp | 1 year ago

They saw you coming in lesson 16.

beezlewax | 1 year ago

That shouldn't give them any more merit than their current iteration deserves.

You could say the same thing about spaceships or self-driving cars.