top | item 44166560

quanto | 9 months ago

> Today, engineers working on AI systems also need to think deeply and critically about the relationship between language and culture and the history and philosophy of technology. When they fail to do so, their systems literally start to break down.

Perhaps so. But not in the (quasi-)academic sense the author has in mind. It's not an engineer's lack of academic knowledge of history and philosophy that makes an AI system fail.

> Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.

This is the lede buried deep inside the article. When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills that benefit the most.

treyd | 9 months ago

> When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills that benefit the most.

I think that "knowing how to ask good questions" that you then solve has always been a valuable skill.

lwo32k | 9 months ago

The kids are not even trying to do either. They are already gravitating towards multidisciplinary teams, because unlike past generations they are dealing with a rate of change at a totally different level. In such an environment, people see their own limitations much faster, no matter the quality of their training, and they end up having to rely on others.

The big challenge is getting very different people, with ever-growing and diverging skillsets and interests, to coordinate, stay in sync, and row in one direction.

ReptileMan | 9 months ago

>The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.

And they will spend 12 hours trying to figure out which is the fake python library and the citation that the LLM has hallucinated from the real one. Vibe coding is just WYSIWYG on steroids, for good and for bad. WYSIWYG didn't go anywhere.

mastazi | 9 months ago

> which is the fake python library and the citation that the LLM has hallucinated from the real one. Vibe coding is just WYSIWYG on steroids

Maybe you haven't used AI coding tools in a while. The latest ones can run build tools, write and run unit tests, and run linters, and will try to fix any errors that arise during those steps. Of course it's possible that a library has been hallucinated, but this will just trigger an error during the build job and the AI agent will go back and fix it. Same thing for failing unit tests.

Just last week I saw Copilot fixing a failing unit test, then running the test, then making some more changes and repeating the process until the test was passing. At some point during this process, it asked me if it could install a VS Code extension so that it could run the test by itself; I agreed, and it went on from there until the issue was resolved. This was with the bottom-tier free version of Copilot.
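The loop described above can be sketched in a few lines. This is not Copilot's actual implementation; `run_tests` and `propose_fix` are hypothetical stand-ins for the test runner and the model call:

```python
def fix_until_green(code, run_tests, propose_fix, max_rounds=5):
    """Re-run the tests and apply model-proposed fixes until they pass."""
    for _ in range(max_rounds):
        ok, error_log = run_tests(code)
        if ok:
            return code  # tests pass; stop iterating
        code = propose_fix(code, error_log)  # feed the failure back to the model
    return None  # give up after max_rounds attempts
```

The key design point is the feedback edge: the error log goes back into the next model call, which is what lets the agent recover from a hallucinated import the same way it recovers from any other build error.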

Of course there are limits to what AI tools can do, but they are evolving all the time, and at some point in the not-too-distant future they will be good enough in most cases.

Regarding hallucinated citations, I imagine that the problem can be solved by allowing the LLM to access and verify citations, then the agent can fix its own hallucinations just like most coding agents already do.
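The citation case reduces to the same pattern. A minimal sketch, assuming a hypothetical `resolve` lookup against some bibliographic source (no existing tool's API is implied here):

```python
def flag_unverified(citations, resolve):
    """Return the citations the resolver could not confirm.

    `resolve` is a hypothetical lookup that returns a matching record,
    or None when the citation cannot be found.
    """
    return [c for c in citations if resolve(c) is None]
```

Anything flagged goes back to the model to be corrected or dropped, mirroring the build-error loop for code.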

vineyardmike | 9 months ago

> WYSIWYG didn't go anywhere

Like MS Word?

These are pretty easy problems to solve, tbh. LLM tools already exist that can work around “hallucinating libraries” effectively, not that this is a real concern. It’s not magic, but these tired skeptic takes are just not based on reality.

It’s much more likely that LLMs will be used to supercharge visualizations with custom UIs and widgets, or in conjunction with things like MS excel for data analysis. Non-engineers won’t be vibe-coding a database anytime soon, but they could be making a PWA that marketing can use to add a filter on photos or help guide a biologist towards a python script to sort pictures of specimens based on an OpenCV model.
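The biologist example is the kind of thing a non-engineer can plausibly be guided to. A sketch using only the standard library, where `classify` is a hypothetical stand-in for whatever vision model (OpenCV-based or otherwise) does the actual labeling:

```python
import shutil
from pathlib import Path

def sort_specimens(photo_dir, out_dir, classify):
    """Copy each photo into a folder named after its predicted label."""
    for photo in Path(photo_dir).glob("*.jpg"):
        label = classify(photo)  # e.g. "beetle" or "moth" from a vision model
        dest = Path(out_dir) / label
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(photo, dest / photo.name)
```

The point is that all the hard parts live behind `classify`; the glue code around it is exactly what LLMs are already reliable at producing.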

bonoboTP | 9 months ago

It's really embarrassing to stake a claim on these. It's like when Gary Marcus said that an image generator won't be able to draw a horse riding an astronaut and will instead make the astronaut ride the horse, because that is a more frequent pattern in the training set. Or when an urban legend (academic legend?) started that state-of-the-art classifiers misclassify cows on sandy beaches and only recognize them on grass. (That did happen in some cases with small datasets and shortcut learning, but there was no SOTA image classifier with such glaring and straightforward errors; still, it trickled down to popular consciousness and grade-school teaching materials on AI limitations.) Now it's about hallucinating nonexistent libraries, but reasoning models, RAG, large contexts, and web search make this much less of an issue. The limitations everyone points at, which get distilled into a soundbite everyone repeats, usually don't turn out to be fundamental limitations at all.

UncleMeat | 9 months ago

I am generally pretty skeptical of AI coding. At my job I’m resisting it pretty constantly. However, my wife is a historian with minimal programming skills, and these tools have allowed her to build some things that she’d never have been able to build without them (without spending a thoroughly unreasonable amount of time learning). Sometimes she comes to me with some code that is clearly just totally wrong, but most of the time she’s able to stumble her way to something useful for exploring her documents and data. Then we just do a code review before anything goes into a publication to check that it is actually doing what she intends.

sdoering | 9 months ago

The field of history had its "oh, this is a thing" moment quite a few decades ago with Hayden White. It is a good example of the underlying issue.

Hayden White's assertion that "history is fiction" was (and still is) a complex one, not intended to dismiss the factual accuracy of historical narratives (as it is more often than not portrayed).

Instead, it highlights the interpretive nature of historical writing and the way historians shape their accounts of the past through literary and rhetorical techniques. White argues that historians, like novelists, use narrative structures and stylistic devices to construct meaning from historical events.

HexPhantom | 9 months ago

The failures we see in AI systems usually come from neglecting real-world complexity.

amelius | 9 months ago

Basically a reformulation of Joel Spolsky's "commoditize your complement".