quanto | 9 months ago
Perhaps so. But not in the (quasi-)academic sense that the author is thinking. It's not the lack of an engineer's academic knowledge in history and philosophy that makes an AI system fail.
> Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.
This is the lede buried deep inside the article. When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills that benefit the most.
treyd | 9 months ago
I think that "knowing how to ask good questions" that you then solve has always been a valuable skill.
lwo32k | 9 months ago
The big challenge is getting very different people with ever growing different skillsets and interests to coordinate, stay in sync and row in one direction.
ReptileMan | 9 months ago
And they will spend 12 hours trying to figure out which Python library is fake and which citation the LLM hallucinated versus the real ones. Vibe coding is just WYSIWYG on steroids, for good and for bad. WYSIWYG didn't go anywhere.
mastazi | 9 months ago
Maybe you haven't used AI coding tools in a while. The latest ones can run build tools, write and run unit tests, and run linters, and they will try to fix any errors that arise during those steps. It's still possible for a library to be hallucinated, but that will just trigger an error during the build step, and the AI agent will go back and fix it. Same thing for failing unit tests.
Just last week I saw Copilot fixing a failing unit test, then running the test, then making some more changes and repeating the process until the test was running successfully. At some point during this process, it asked me if it could install a VS Code extension so that it could run the test by itself, I agreed then it went from there until the issue was resolved. This was with the bottom-tier free version of Copilot.
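The loop described above (run the tests, hand the failure back to the model, apply its patch, retry) can be sketched roughly like this. `run_tests` and `propose_fix` are hypothetical stand-ins for the real test runner and the model call, not Copilot's actual internals:

```python
def fix_until_green(code, run_tests, propose_fix, max_iters=5):
    """Run the test suite, feed the failure report to the model,
    apply its proposed patch, and repeat until the suite passes
    or the iteration budget is exhausted."""
    for _ in range(max_iters):
        passed, report = run_tests(code)
        if passed:
            return code
        code = propose_fix(code, report)  # the LLM edit step
    raise RuntimeError("tests still failing after %d attempts" % max_iters)
```

The interesting design choice is the iteration budget: without it, a model that keeps proposing the same broken patch would loop forever.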
Of course there are limits to what AI tools can do, but they are evolving all the time, and at some point in the not-too-distant future they will be good enough for most use cases.
Regarding hallucinated citations, I imagine the problem can be solved by allowing the LLM to access and verify citations; the agent can then fix its own hallucinations, just like most coding agents already do.
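A minimal sketch of that idea: `resolve` below is a hypothetical stand-in for a real bibliographic lookup (a DOI or Crossref query, say), and the agent would retry or discard anything that lands in the suspect pile, mirroring the test-fix loop for code:

```python
def verify_citations(citations, resolve):
    """Partition model-produced citations into verified and suspect,
    where `resolve` is a stand-in for a real bibliographic lookup
    that returns True only for citations it can find."""
    verified, suspect = [], []
    for cite in citations:
        (verified if resolve(cite) else suspect).append(cite)
    return verified, suspect
```

The split matters because the agent needs a machine-checkable signal, analogous to a failing build, before it can correct itself.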
vineyardmike | 9 months ago
Like MS Word?
These are pretty easy problems to solve, tbh. LLM tools already exist that can work around “hallucinated libraries” effectively, not that this is a real concern. It’s not magic, but these tired skeptic takes are just not based in reality.
It’s much more likely that LLMs will be used to supercharge visualizations with custom UIs and widgets, or in conjunction with tools like MS Excel for data analysis. Non-engineers won’t be vibe-coding a database anytime soon, but they could build a PWA that marketing can use to add filters to photos, or be guided toward a Python script that sorts pictures of specimens using an OpenCV model for a biologist.
sdoering|9 months ago
Hayden White's assertion that "history is fiction" was (and still is) a complex one, not intended to dismiss the factual accuracy of historical narratives (as it is more often than not portrayed).
Instead, it highlights the interpretive nature of historical writing and the way historians shape their accounts of the past through literary and rhetorical techniques. White argues that historians, like novelists, use narrative structures and stylistic devices to construct meaning from historical events.