I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.
Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and its mistakes show me exactly where.
These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.
(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)
There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.
Unfortunately, this only works with APIs that aren't already super popular.
> Sometimes it comes up with a better approach than I had thought of.
IMO this has always been the killer use case for AI—from Google Maps to Grammarly.
I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.
But if I'd accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.
The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.
I used this to great success just this morning. I told the AI to write me some unit tests. It flailed and failed badly at that task. But how it failed was instructive, and uncovered a bug in the code I wanted to test.
That's closer to simply observing the mean. For an analogy, it's like waiting to pave a path until people tread the grass in a specific pattern. (Some courtyard designers used to do just that: wait to see where people were walking first.)
Making things easy for ChatGPT means making things close to ordinary, average, or mainstream. Not creative, but it can still be valuable.
I've played with a similar idea for writing technical papers. I'll give an LLM my draft and ask it to explain back to me what a section means, or otherwise quiz it about things in the draft.
I've found that LLMs can be kind of dumb about understanding things, and are particularly bad at reading between the lines for anything subtle. In this aspect, I find they make good proxies for inattentive anonymous reviewers, and so will try to revise my text until even the LLM can grasp the key points that I'm trying to make.
Many many python image-processing libraries have an `imread()` function. I didn't know about this when designing our own bespoke image-lib at work, and went with an esoteric `image_get()` that I never bothered to refactor.
When I ask ChatGPT for help writing one-off scripts using the internal library I often forget to give it more context than just `import mylib` at the top, and it almost always defaults to `mylib.imread()`.
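Bowing to the convention the model keeps guessing can be as cheap as a one-line alias. A hypothetical sketch (the `image_get` stub stands in for the real bespoke function, which isn't shown here):

```python
def image_get(path):
    """The bespoke, esoteric name (stub returning a fake 4x4 pixel buffer)."""
    return [[0] * 4 for _ in range(4)]

# The name nearly every Python image library (and thus every LLM) expects:
imread = image_get

print(imread("photo.png") == image_get("photo.png"))  # True
```

The old name keeps working, and LLM-generated scripts that assume the `imread()` convention stop failing.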
This is similar to an old HCI design technique called Wizard of Oz by the way, where a human operator pretends to be the app that doesn’t exist yet. It’s great for discovering new features.
> and being unintuitive is the only one that this will fix
That's also how I'm approaching it. If all the condensed common wisdom poured into the model's parameters says that this is how my API is supposed to work to be intuitive, how on earth do I think it should work differently? There needs to be a good reason (like composability, for example). I break expectations otherwise.
> Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.
“Sometimes” being a very important qualifier to that statement.
Claude 4 naturally doesn’t write code with any kind of long-term maintenance in mind, especially if it’s trying to make things look like what the less experienced developers wrote in the same repo.
Please don’t assume just because it looks smart that it is. That will bite you hard.
Even with well-intentioned rules, terrible things happen. It took me weeks to see some of it.
In a similar vein, some of my colleagues have been feeding their scientific paper methods sections to LLMs and asking them to implement the method in code, using the LLM's degree of success/failure as a vague indicator of the clarity of the method description.
> I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code
If anyone is stuck in this situation, give me a holler. My Gmail username is the same as my HN username. I've always been the one to hunt down my coworkers' bugs, and I think I'm the only person on the planet who finds it enjoyable to track down ChatGPT's oversights and sometimes seemingly malicious intent.
I'll charge you, don't get me wrong, but I'll save you time, money, and frustration. And future bug reports and security issues.
In essence, an LLM is a crystallisation of a large corpus of human opinion, and you're using it to focus-group your API, since it represents a reasonable third-party perspective?
This was a big problem starting out writing MCP servers for me.
Having an LLM demo your tool, then taking what it does wrong or uses incorrectly and adjusting the API works very very well. Updating the docs to instruct the LLM on how to use your tool does not work well.
Great point. Also, it may not be the best possible API designer in the world, but it sure sounds like a good way to forecast what an _average_ developer would expect this API to look like.
> These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.
This is also similar to which areas TD-Gammon excelled at in Backgammon.
Which is all pretty amusing, if you compare it to how people usually tended to characterise computers and AI, especially in fiction.
> Any person who has used a computer in the past ten years knows that doing meaningless tasks is just part of the experience. Millions of people create accounts, confirm emails, dismiss notifications, solve captchas, reject cookies, and accept terms and conditions—not because they particularly want to or even need to. They do it because that’s what the computer told them to do. Like it or not, we are already serving the machines. (...)
> You might’ve heard a story of Soundslice [adding a feature because ChatGPT kept telling people it exists](https://www.holovaty.com/writing/chatgpt-fake-feature/). We see the same at Instant: for example, we used `tx.update` for both inserting and updating entities, but LLMs kept writing `tx.create` instead. Guess what: we now have `tx.create`, too.
> Is it good or is it bad? It definitely feels strange. In a sense, it’s helpful: LLMs here have seen millions of other APIs and are suggesting the most obvious thing, something every developer would think of first, too.
> It’s also a unique testing device: if developers use your API wrong, they blame themselves, read the documentation, and fix their code. In the end, you might never learn that they even had the problem. But with ChatGPT, you yourself can experience “newbie’s POV” at any time.
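Instant's actual API is JavaScript; the aliasing trick the quote describes can be sketched in Python (all names hypothetical, not Instant's real code):

```python
class Tx:
    """Sketch of an upsert-style API where `create` is an alias for `update`."""

    def __init__(self):
        self.entities = {}

    def update(self, entity_id, attrs):
        # Upsert semantics: insert when the id is new, merge otherwise.
        self.entities.setdefault(entity_id, {}).update(attrs)

    # The method LLMs kept writing anyway; cheapest fix is to make it exist:
    create = update


tx = Tx()
tx.create("user-1", {"name": "Ada"})     # works, even though it's "really" update
tx.update("user-1", {"email": "a@b.c"})  # same entity, attrs merged
```

Because `create` is the same function object as `update`, there's no behavior to keep in sync; the alias just legitimizes what the models (and presumably many developers) already expect.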
From my perspective that’s fascinatingly upside down thinking that leads to you asking to lose your job.
AI is going to get the hang of coding to fill in the spaces (i.e. the part you’re doing) long before it’s able to intelligently design an API. Correct API design requires a lot of contextual information and forward planning for things that don’t exist today.
Right now it’s throwing spaghetti at the wall and you’re drawing around it.
> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.
A detailed counterargument to this position can be found here[0]. In short, what is colloquially described as "LLM hallucinations" do not serve any plausible role in software design other than to introduce an opportunity for software engineers to stop and think about the problem being solved.
The music notation tool space is balkanized in a variety of ways. One of the key splits is between standard music notation and tablature, which is used for guitar and a few other instruments. People are generally on one side or another, and the notation is not even fully compatible - tablature covers information that standard notation doesn't, and vice versa. This covers fingering, articulations, "step on fuzz pedal now," that sort of thing.
The users are different, the music that is notated is different, and for the most part if you are on one side, you don't feel the need to cross over. Multiple efforts have been made (MusicXML, etc.) to unify these two worlds into a superset of information. But the camps are still different.
So what ChatGPT did is actually very interesting. It hallucinated a world in which tab readers would want to use Soundslice. But, largely, my guess is they probably don't....today. In a future world, they might? Especially if Soundslice then enables additional features that make tab readers get more out of the result.
I don't fully understand your comment, but Soundslice has had first-class support for tablature for more than 10 years now. There's an excellent built-in tab editor, plus importers for various formats. It's just the ASCII tab support that's new.
I think folks have taken the wrong lesson from this.
It’s not that they added a new feature because there was demand.
They added a new feature because technology hallucinated a feature that didn’t exist.
The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.
That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign as it was this time.
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again
This would be a world without generative AI available to the public, at the moment. Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists, which are both completely irrational, since many people are finding practical value in its current imperfect state.
The current state of LLM is useful for what it's useful for, warnings of hallucinations are present on every official public interface, and its limitations are quickly understood with any real use.
Nearly everyone in AI research is working on this problem, directly or indirectly.
You sound like all the naysayers when Wikipedia was new. Did you know anybody can go onto Wikipedia and edit a page to add a lie‽ How can you possibly trust what you read on there‽ Do you think Wikipedia should issue groveling apologies every time it happens?
Meanwhile, sensible people have concluded that, even though it isn’t perfect, Wikipedia is still very, very useful – despite the possibility of being misled occasionally.
Yeah my main thought was that ChatGPT is now automating what salespeople always do at the companies I've worked at, which is to home in on what a prospective customer wants, confidently tell them we have it (or will have it next quarter), and then come to us and tell us we need to have it ready for a POV.
Exactly! It is definitely a weird new way of discovering a market need or opportunity. Yet it actually makes a lot of sense this would happen since one of the main strengths of LLMs is to 'see' patterns in large masses of data, and often, those patterns would not have yet been noticed by humans.
And in this case, OP didn't have to take ChatGPT's word for the existence of the pattern, it showed up on their (digital) doorstep in the form of people taking action based on ChatGPT's incorrect information.
So, pattern noticed and surfaced by an LLM as a hallucination, people take action on the "info", nonzero market demand validated, vendor adds feature.
Unless the phantom feature is very costly to implement, seems like the right response.
This is an interesting example of an AI system effecting a change in the physical world.
Some people express concerns about AGI creating swarms of robots to conquer the earth and make humans do its bidding. I think market forces are a much more straightforward tool that AI systems will use to shape the world.
What this immediately makes me realize is how many people are currently trying to figure out how to intentionally get AI chatbots to send people to their site, the way ChatGPT was sending people to this guy's site. SEO for AI. There will be billions in it.
I know nothing about this. I imagine people are already working on it, wonder what they've figured out.
(Alternatively, in the future can I pay OpenAI to get ChatGPT to be more likely to recommend my product than my competitors?)
Anyone who has worked at a B2B startup with a rogue sales team won't be surprised at all by quickly pivoting the backlog in response to a hallucinated missing feature.
Rogue? In the B2B space it is standard practice to sell from powerpoints, then quickly develop not just features but whole products if some slideshow got enough traction to elicit a quote. And it's not just startups. Some very big players in this space do this routinely.
I find it amusing that it's easier to ship a new feature than to get OpenAI to patch ChatGPT to stop pretending that feature exists (not sure how they would even do that, beyond blocking all mentions of SoundSlice entirely.)
If you gave a junior level developer just one or two files of your code, without any ability to look at other code, and asked them to implement a feature, none of them would make ANY reasonable assumptions about what is available?
This seems similar, and like a decent indicator that most people (aka the average developer) would expect X to exist in your API.
> ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.
I find it interesting that any user would attribute this issue to Soundslice. As a user, I would be annoyed that GPT is lying and wouldn't think twice about Soundslice looking bad in the process
While AI hallucination problems are widely known to the technical crowd, that's not really the case with the general population. Perhaps that applies to the majority of the user base even. I've certainly known folks who place inordinate amount of trust in AI output, and I could see them misplacing the blame when a "promised" feature doesn't work right.
A frighteningly large fraction of non-technical population doesn't know that LLMs hallucinate all the time and takes everything they say totally uncritically. And AI companies do almost nothing to discourage that interpretation, either.
We (others at company, not me) hit this problem, and not with chatgpt but with our own AI chatbot that was doing RAG on our docs. It was occasionally hallucinating a flag that didn't exist. So it was considered as product feedback. Maybe that exact flag wasn't needed, but something was missing and so the LLM hallucinated what it saw as an intuitive option.
I had a smaller version of this when coding on a flight (with no WiFi! The horror!) over the Pacific. Llama hallucinated array-element operations and list-comprehension in C#. I liked the shape of the code otherwise, so, since I was using custom classes, I just went ahead and implemented both features.
I also went back to just sleeping on those flights and using connected models for most of my code generation needs.
I've come across something related when building the indexing tool for my vintage ad archive using OpenAI vision. No matter how I tried to prompt engineer the entity extraction into the defined structure I was looking for, OpenAI simply has its own ideas. Some of those ideas are actually good! For example it was extracting celebrity names, I hadn't thought of that. For other things, it would simply not follow my instructions. So I decided to just mostly match what it chooses to give me. And I have a secondary mapping on my end to get to the final structure.
Here's the thing: I don't think ChatGPT per se was the impetus to develop this new feature. The impetus was learning that your customers desire it. ChatGPT is operating as a kind of "market research" tool here, albeit in a really unusual, inverted way. That said, if someone could develop a market research tool that worked this way, i.e. users went to it instead of you having to use it to go to users, I can see it making quite a packet.
They only want ASCII tablature parsing because that's what ChatGPT produces. If ChatGPT produced standard music notation, users would not care about ASCII tablature. ChatGPT has created this "market".
People forget that while technology grows, society also grows to support that.
I already strongly suspect that LLMs are just going to magnify the dominance of python as LLMs can remove the most friction from its use. Then will come the second order effects where libraries are explicitly written to be LLM friendly, further removing friction.
LLMs write code best in python -> python gets used more -> python gets optimized for LLMs -> LLMs write code best in python
LLMs removing friction from using coding languages would, at first glance, seem to erode Python's advantage rather than solidify it further. As a specific example LLMs can not only spit out HTML+JS+CSS but the user can interact with the output directly in browser/"app".
In a nice world it should be the other way around. LLMs are better at producing typed code thanks to the added context and diagnostics the types add, while at the same time greatly lowering their initial learning barrier.
We don't live in a nice world, so you'll probably end up right.
A significant number of new signups at my tiny niche SaaS now come from ChatGPT, yet I have no idea what prompts people are using to get it to recommend my product. I can’t get it to recommend my product when trying some obvious prompts on my own, on other people’s accounts (though it does work on my account because it sees my chat history of course).
Pretty good example of how a super-intelligent AI can control human behavior, even if it doesn't "escape" its data center or controllers.
If the super-intelligent AI understands human incentives and is in control of a very popular service, it can subtly influence people to its agenda by using the power of mass usage. Like how a search engine can influence a population's view of an issue by changing the rankings of news sources that it prefers.
There are a few things which could be done in the case of a situation like that:
1. I might consider a thing like that like any other feature request. If not already added to the feature request tracker, it could be done. It might be accepted or rejected, or more discussion may be wanted, and/or other changes made, etc, like any other feature request.
2. I might add a FAQ entry specifying that it does not have such a feature, and that ChatGPT is wrong. This does not necessarily mean it will never be added, if there is a good reason to do so. If there is a good reason not to include it, that will be mentioned too, along with other programs that can be used instead if this one doesn't work.
Also note that in the article, the second ChatGPT screenshot has a note on the bottom saying that ChatGPT can make mistakes (which, in this case, it does). Their program might also be made to detect ChatGPT screenshots and to display a special error message in that case.
Along these lines, a useful tool might be a BDD framework like Cucumber that instead of relying on written scenarios has an LLM try to "use" your UX or API a significant number of times, with some randomization, in order to expose user behavior that you (or an LLM) wouldn't have thought of when writing unit tests.
More than once GPT-3.5 'hallucinated' an essential and logical function in an API that by all reason should have existed, but for whatever reason had not been included (yet).
I tried asking chat bots about a car problem with a tailgate. They all told me to look for a manual tailgate release. When I responded asking if that model actually had a manual release, they all responded with no, and then some more info suggesting I look for the manual release. None even got close to a useful answer.
Figuring out the paths that users (or LLMs) actually want to take—not based on your original design or model of what paths they should want, but based on the paths that they actually do want and do trod down. Aka, meeting demand.
The comments are kind of concerning. First, ChatGPT did not discover unmet demand in the market. It tried to predict what a user would want and hallucinated a feature that could meet that demand. Both the demand and the feature were hallucinations. Big problem.
The user is not going to understand this. The user may not even need that feature at all to accomplish whatever it is they're doing. Alternatives may exist. The consequences will be severe if companies don't take this seriously.
Been using LLMs to code a bit lately. It's decent with boilerplate. It's pretty good at working out patterns[1]. It does like to ping pong on some edits though - edit this way, no back that way, no this way again. I did have one build an entire iOS app, it made changes to the UI exactly as I described, and it populated sample data for all the different bits and bobs. But it did an abysmal job at organizing the bits and bobs. Need running time for each of the audio files in a list? Guess we need to add a dictionary mapping the audio file ID to length! (For the super juniors out there: this piece of data should be attached to whatever represents the individual audio file, typically a class or struct named 'AudioFile'.)
It really likes to cogitate on code from several versions ago. And it often insists repeatedly on edits unrelated to the current task.
I feel like I'm spending more time educating the LLM. If I can resist the urge to lean on the LLM beyond its capabilities, I think I can be productive with it. If I'm going to stop teaching the thing, the least it can do is monitor my changes and not try to make suggestions from the first draft of code from five days ago, alas ...
1 - e.g. a 500-line text file representing values that will be converted to enums, with varying adherence to some naming scheme - I start typing, and after correcting the first two, it suggests the next few. I accept its suggestions until it makes a mistake because the data changed, start manual edits again ... I repeated this process for about 30 lines and it successfully learned how I wanted the remainder of the file edited.
Adding a feature because ChatGPT incorrectly thinks it exists is essentially design by committee—except this committee is neither your users nor shareholders.
On the other hand, adding a feature because you believe it is a feature your product should have, a feature that fits your vision and strategy, is a pretty sound approach that works regardless of what made you think of that feature in the first place.
I recall that early on a coworker was saying that ChatGPT hallucinated a simpler API than the one we offered, albeit with some easy to fix errors and extra assumptions that could've been nicer defaults in the API. I'm not sure if this ever got implemented though, as he was from a different team.
I am currently working on the bug where ChatGPT expects that if a ball has been placed on a box, and the box is pushed forward, nothing happens to the ball. This one is a doozy.
It's worth noting that behind this hallucination there were real people with ASCII tabs in need of a solution. If the result is a product-led growth channel at some scale, that's a big roadmap green light for me!
That's agentic AI, right? Run the LLM in a loop and give it a tool to publish to arxiv. If it cites a paper that doesn't exist, make it write and upload that one too, recursively. Should work for lawyers, too.
Oh. This happened to me when asking a LLM about a database server feature. It enthusiastically hallucinated that they have it when the correct answer was 'no dice'.
Maybe I'll turn it into a feature request then ...
I wonder if we ever get to the point I remember reading about in a novel ( AI initially based on emails ), where human population is gently nudged towards individuals that in aggregate benefit AI goals.
Sounds like you are referring to book 1 in a series, the book called "Avogadro Corp: The Singularity Is Closer than It Appears" by William Hertling. I read 3-4 of those books, they were entertaining.
If nothing else, I at least get vindication from hallucinations. "Yes, I agree, ChatGPT, that (OpenSSL manpage / ffmpeg flag / Python string function) should exist."
Had something similar happen to us with our dev-tools saas.
Non-devs started coming to the product because GPT told them about it. We had to change parts of the onboarding and integration to accommodate non-devs who were having a harder time reading the documentation and understanding what to do.
Chatbot advertising has to be one of the most powerful forms of marketing yet. People are basically all the way through the sales pipeline when they land on your page.
This reminds me of how software integrators or implementers worked a couple of decades back. They were IT contractors who implemented a popular software product such as IBM MQ or SAP at a client site and maintained it. They sometimes incorrectly claimed that some feature existed, and after finding that it didn't, they would file a ticket with the software vendor asking for it as a patch release.
Funny this article is trending today because I had a similar thought over the weekend - if I'm in Ruby and the LLM hallucinates a tool call...why not metaprogram it on the fly and then invoke it?
If that's too scary, the failed tool call could trigger another AI to go draft up a PR with that proposed tool, since hey, it's cheap and might be useful.
We've done varying forms of this to differing degrees of success at work.
Dynamic, on-the-fly generation & execution is definitely fascinating to watch in a sandbox, but is far to scary (from a compliance/security/sanity perspective) without spending a lot more time on guardrails.
We do however take note of hallucinated tool calls and have had it suggest an implementation we start with and have several such tools in production now.
It's also useful to spin up any completed agents and interrogate them about what tools they might have found useful during execution (or really any number of other post-process questionnaire you can think of).
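The comment upthread was about Ruby metaprogramming; the same idea can be sketched in Python, with `__getattr__` standing in for Ruby's `method_missing` (tool names and the response shape are hypothetical):

```python
class ToolBox:
    """Sketch: log hallucinated tool calls instead of crashing the agent."""

    def __init__(self):
        self.known = {"search_docs": lambda q: f"results for {q!r}"}
        self.hallucinated = []  # effectively a feature-request log

    def __getattr__(self, name):
        if name in self.known:
            return self.known[name]

        def missing(*args, **kwargs):
            # Record the hallucinated call so a human (or a second AI
            # drafting a PR) can review it later, rather than failing hard.
            self.hallucinated.append(name)
            return {"error": f"tool {name!r} does not exist (yet)"}

        return missing


tools = ToolBox()
tools.search_docs("ascii tabs")          # real tool, runs normally
tools.transpose_key("ascii tabs", "Em")  # hallucinated: logged, not fatal
print(tools.hallucinated)                # ['transpose_key']
```

This is the tame end of the spectrum: capture the hallucination as a suggestion. Actually generating and executing the missing tool on the fly is the scary end the sandbox comment describes.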
> What made ChatGPT think that this feature is supported?
It was a plausible answer, and the core of what these models do is generate plausible responses to (or continuations of) the prompt they’re given. They’re not databases or oracles.
With errors like this, if you ask a followup question it’ll typically agree that the feature isn’t supported, because the text of that question combined with its training essentially prompts it to reach that conclusion.
Re the follow-up question, it’s almost certainly the direction that advertising in general is going to take.
I'd guess the answer is that GPT-4o is an outdated model that's not as anchored in reality as newer models. It's pretty rare for me to see Sonnet or even o3 just outright tell me plausible but wrong things.
"Repeats" may be the term you're looking for. That would be interesting, however in some pieces it could make the overall document MUCH longer. It would be similar to loop unrolling.
Wow! What if we all did this? What is the closure of the feature set that ChatGPT can imagine for your product. Is it one that is easy for ChatGPT to use? Is it one that is sound and complete for your use cases? Is it the best that you can build had you had clear requirements upfront?
I think this is the best way to build features. Build something that people want! If people didn't want it, ChatGPT wouldn't recommend it. You got a free ride on the back of a multibillion-dollar monster; I can't see what's wrong with that.
Beyond the blog: Going to be an interesting world where these kinds of suggestions become paid results and nobody has a hope of discovering your competitive service exists. At least in that world you'd hope the advertiser actually has the feature already!
The problem with LLMs is that in 99% of cases, they work fine, but in 1% of cases, they can be a huge liability, like sending people to wrong domains or, worse, phishing domains.
Oh my, people complaining about getting free traffic from ChatGPT... While most businesses are worried about all their inbound traffic drying up as search engine use declines.
Pretty goofy but I wonder if LLM code editors could start tallying which methods are hallucinated most often by library. A bad LSP setup would create a lot of noise though.
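A rough sketch of that tally, using the stdlib `json` module as a stand-in for the library being audited (the list of LLM-referenced names is made up for illustration):

```python
from collections import Counter
import json  # stand-in for "the library under audit"

# Attribute names an LLM's generated code referenced on the library:
llm_referenced = ["loads", "dumps", "parse", "stringify", "loads"]

# Count only the ones the library doesn't actually export.
tally = Counter(name for name in llm_referenced if not hasattr(json, name))
print(tally.most_common())  # [('parse', 1), ('stringify', 1)]
```

An editor could maintain one such counter per library and surface the top offenders, either to the library authors (as de facto feature requests) or to its own prompt as "these don't exist" warnings.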
slightly off topic: but on the topic of AI coding agents making up apis and features that don’t exist, I’ve had good success with Q telling it to “check the sources to make sure the apis actually exist”. sometimes it will even request to read/decompile (java) sources, and do grep and find commands to find out what methods the api actually contains
Sometimes you plan for features that aren’t actually there. I found using mailsAI helped me focus on what’s really available, which made managing expectations easier. It’s a simple way to keep things clear.
> Should we really be developing features in response to misinformation?
Creating the feature means it's no longer misinformation.
The bigger issue isn't that ChatGPT produces misinformation - it's that it takes less effort to update reality to match ChatGPT than it takes to update ChatGPT to match reality. Expect to see even more of this as we march toward accepting ChatGPT's reality over other sources.
This seems like such a negative framing. LLMs are (~approximately) predictors of what's either logical or at least probable. For areas where what's probable is wrong and also harmful, I don't think anybody is motivated to "update reality" as some kind of general rule.
What the hell, we elect world leaders based on misinformation, why not add s/w features for the same reason?
In our new post truth, anti-realism reality, pounding one's head against a brick wall is often instructive in the way the brain damage actually produces great results!
ChatGPT routinely hallucinates API calls, flat-out making them up from whole cloth. "Apple Intelligence" creates variants of existing API calls, usually by adding nonexistent arguments.
Both of them will hallucinate API calls that are frequently added by programmers through extensions.
I am a bit conflicted about this story, because this was a case when the hallucination is useful.
Amateur musicians often lack just one or two features in the program they use, and the devs won't respond to their pleas.
Adding support for guitar tabs has made OP's product almost certainly more versatile and useful for a larger set of people. Which, IMHO, is a good thing.
But I also get the resentment of "a darn stupid robot made me do it". We don't take kindly to being bossed around by robots.
Either the user is a non-paying user and it doesn't matter what they think, or the user is a paying customer and you will be happy to make and sell them the feature they want.
This feels like a dangerously slippery slope. Once you start building features based on ChatGPT hallucinations, where do you draw the line? What happens when you build the endpoint in response to the hallucination, and then the LLM starts hallucinating new params / headers for the new endpoint?
- Do you keep bolting on new updates to match these hallucinations, potentially breaking existing behavior?
- Or do you resign yourself to following whatever spec the AI gods invent next?
- And what if different LLMs hallucinate conflicting behavior for the same endpoint?
I don’t have a great solution, but a few options come to mind:
1. Implement the hallucinated endpoint and return a 200 OK or 202 Accepted, but include an X-Warning header like "X-Warning: The endpoint you used was built in response to ChatGPT hallucinations. Always double-check an LLM's advice on building against 3rd-party APIs with the API docs themselves. Refer to https://api.example.com/docs for our docs. We reserve the right to change our approach to building against LLM hallucinations in the future." Most consumers won’t notice the header, but it’s a low-friction way to correct false assumptions while still supporting the request.
2. Fail loudly: Respond with 404 Not Found or 501 Not Implemented, and include a JSON body explaining that the endpoint never existed and may have been incorrectly inferred by an LLM. This is less friendly but more likely to get the developer’s attention.
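Both options can be sketched in a few lines. This is illustrative Python only; the endpoint paths and warning text are invented, not any real service's API:

```python
# Invented example paths; not any real service's API.
DESIGNED = {"/v1/scores"}
HALLUCINATED = {"/v1/transcriptions"}  # exists only because LLMs kept inventing it

WARNING = ("This endpoint was built in response to LLM hallucinations; "
           "verify against the real docs at https://api.example.com/docs.")

def handle(path):
    """Return (status, headers, body) for a request path; a WSGI-flavored sketch."""
    if path in DESIGNED:
        return 200, {}, '{"ok": true}'
    if path in HALLUCINATED:
        # Option 1: serve it, but attach a low-friction warning header.
        return 200, {"X-Warning": WARNING}, '{"ok": true}'
    # Option 2: fail loudly with a body explaining the endpoint never existed.
    return 404, {}, '{"error": "no such endpoint; it may have been inferred by an LLM"}'
```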
Normally I'd say that good API versioning would prevent this, but it feels like that all goes out the window unless an LLM user thinks to double-check what the LLM tells them against actual docs. And if that had happened, it seems like they wouldn't have built against a hallucinated endpoint in the first place.
It’s frustrating that teams now have to reshape their product roadmap around misinformation from language models. It feels like there’s real potential here for long-term erosion of product boundaries and spec integrity.
EDIT: for the down-voters, if you've got actual qualms with the technical aspects of the above, I'd love to hear them and am open to learning if / how I'm wrong. I want to be a better engineer!
To me it seems like you're looking at this from a very narrow technical perspective rather than a human- and business-oriented one. In this case ChatGPT is effectively providing them free marketing for a feature that does not yet exist, but that could exist and would be useful. It makes business sense for them to build it, and it would also help people. That doesn't mean they need to build exactly what ChatGPT envisioned—as mentioned in the post, they updated their copy to explain to users how it works; they don't have to follow what ChatGPT imagines exactly. Nor do they need to slavishly update what they've built if ChatGPT's imaginings change.
Also, it's not like ChatGPT or users are directly querying their API. They're submitting images through the Soundslice website. The images just aren't of the sort that was previously expected.
> We ended up deciding: what the heck, we might as well meet the market demand.
this is my general philosophy and, in my case, this is why I deploy things on blockchains
so many people keep wondering whether there will ever be some mythical, unfalsifiable-to-define "mainstream" use case, ignoring that crypto natives just … exist, and have problems they will pay (a lot) to solve.
To the author's burning question about whether any other company has done this: I would say yes. I've discovered services recommended by ChatGPT and other LLMs that didn't do what was described of them, and they subsequently tweaked things once they figured out there was new demand.
If you build on LLMs you can have unknown features. I was going to add an automatic translation feature to my natural language network scanner at http://www.securday.com but apparently ChatGPT 4.1 does automatic translation, so I didn't have to add it.
Plenty of people have English as a second language. Having an LLM help them rewrite their writing to make it better conform to a language they are not fluent in feels entirely appropriate to me.
I don't care if they used an LLM provided they put their best effort in to confirm that it's clearly communicating the message they are intending to communicate.
Does this extend to the heuristic TFA refers to? Where they end up (voluntarily or not) referring to what LLMs hallucinate as a kind of “normative expectation,” then use that to guide their own original work and to minimize the degree to which they’re unintentionally surprising their audience? In this case it feels a little icky and demanding because the ASCII tablature feature feels itself like an artifact of ChatGPT’s limitations. But like some of the commenters upthread, I like the idea of using it for “if you came into my project cold, how would you expect it to work?”
Having wrangled some open-source work that’s the kind of genius that only its mother could love… there’s a place for idiosyncratic interface design (UI-wise and API-wise), but there’s also a whole group of people who are great at that design sensibility. That category of people doesn’t always overlap with people who are great at the underlying engineering. Similarly, as academic writing tends to demonstrate, people with interesting and important ideas aren’t always people with a tremendous facility for writing to be read.
(And then there are people like me who have neither—I agree that you should roll your eyes at anything I ask an LLM to squirt out! :)
But GP’s technique, like TFA’s, sounds to me like something closer to that of a person with something meaningful to say, who now has a patient close-reader alongside them while they hone drafts. It’s not like you’d take half of your test reader’s suggestions, but some of them might be good in a way that didn’t occur to you in the moment, right?
kragen|7 months ago
suzzer99|7 months ago
bryanlarsen|7 months ago
slowmovintarget|7 months ago
That's closer to simply observing the mean. For an analogy, it's like waiting to pave a path until people tread the grass in a specific pattern. (Some courtyard designers used to do just that. Wait to see where people were walking first.)
Making things easy for ChatGPT means making things close to ordinary, average, or mainstream. Not creative, but it can still be valuable.
a_e_k|7 months ago
I've found that LLMs can be kind of dumb about understanding things, and are particularly bad at reading between the lines for anything subtle. In this aspect, I find they make good proxies for inattentive anonymous reviewers, and so will try to revise my text until even the LLM can grasp the key points that I'm trying to make.
momojo|7 months ago
Many many python image-processing libraries have an `imread()` function. I didn't know about this when designing our own bespoke image-lib at work, and went with an esoteric `image_get()` that I never bothered to refactor.
When I ask ChatGPT for help writing one-off scripts using the internal library I often forget to give it more context than just `import mylib` at the top, and it almost always defaults to `mylib.imread()`.
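The fix can be as small as an alias. A minimal sketch, assuming a bespoke library like the one described (the `image_get` body is a stand-in, not the real decoder):

```python
def image_get(path):
    """Bespoke loader; the body here is a stand-in for the real decoder."""
    return [[0, 0], [0, 0]]  # pretend we decoded `path` into pixel rows

# Alias matching the ecosystem-wide convention (cv2.imread, imageio's imread, ...)
# so both human intuition and LLM habit resolve to the same function.
imread = image_get
```

Both names now point at the same function, so scripts generated with either spelling work unchanged.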
escapecharacter|7 months ago
https://en.m.wikipedia.org/wiki/Wizard_of_Oz_experiment
groestl|7 months ago
That's also how I'm approaching it. If all the condensed common wisdom poured into the model's parameters says that this is how my API is supposed to work to be intuitive, how on earth do I think it should work differently? There needs to be a good reason (like composability, for example). I break expectations otherwise.
ldeian|7 months ago
“Sometimes” being a very important qualifier to that statement.
Claude 4 naturally doesn’t write code with any kind of long-term maintenance in mind, especially if it’s trying to make things look like what the less experienced developers wrote in the same repo.
Please don’t assume just because it looks smart that it is. That will bite you hard.
Even with well-intentioned rules, terrible things happen. It took me weeks to see some of it.
rcthompson|7 months ago
dotancohen|7 months ago
I'll charge you, don't get me wrong, but I'll save you time, money, and frustration. And future bug reports and security issues.
djtango|7 months ago
layer8|7 months ago
data-ottawa|7 months ago
Having an LLM demo your tool, then taking what it does wrong or uses incorrectly and adjusting the API works very very well. Updating the docs to instruct the LLM on how to use your tool does not work well.
golergka|7 months ago
eru|7 months ago
This is also similar to which areas TD-Gammon excelled at in Backgammon.
Which is all pretty amusing, if you compare it to how people usually tended to characterise computers and AI, especially in fiction.
kragen|7 months ago
> Any person who has used a computer in the past ten years knows that doing meaningless tasks is just part of the experience. Millions of people create accounts, confirm emails, dismiss notifications, solve captchas, reject cookies, and accept terms and conditions—not because they particularly want to or even need to. They do it because that’s what the computer told them to do. Like it or not, we are already serving the machines. (...)
> You might’ve heard a story of Soundslice [adding a feature because ChatGPT kept telling people it exists](https://www.holovaty.com/writing/chatgpt-fake-feature/). We see the same at Instant: for example, we used `tx.update` for both inserting and updating entities, but LLMs kept writing `tx.create` instead. Guess what: we now have `tx.create`, too.
> Is it good or is it bad? It definitely feels strange. In a sense, it’s helpful: LLMs here have seen millions of other APIs and are suggesting the most obvious thing, something every developer would think of first, too.
> It’s also a unique testing device: if developers use your API wrong, they blame themselves, read the documentation, and fix their code. In the end, you might never learn that they even had the problem. But with ChatGPT, you yourself can experience “newbie’s POV” at any time.
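The `tx.create` fix described in the quote amounts to exposing the same upsert under the name LLMs keep guessing. A minimal sketch (the class and method names are illustrative, not Instant's actual API):

```python
class Tx:
    """Illustrative upsert-style transaction helper, not Instant's real API."""

    def __init__(self):
        self.store = {}

    def update(self, entity_id, attrs):
        """Insert-or-update: the original method name."""
        self.store.setdefault(entity_id, {}).update(attrs)

    # LLMs kept writing `create`, so ship it as an alias of the same behavior.
    create = update
```

Code generated with either `tx.update` or `tx.create` now works identically.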
codingwagie|7 months ago
unknown|7 months ago
[deleted]
skygazer|7 months ago
djsavvy|7 months ago
visarga|7 months ago
unknown|7 months ago
[deleted]
afavour|7 months ago
AI is going to get the hang of coding to fill in the spaces (i.e. the part you’re doing) long before it’s able to intelligently design an API. Correct API design requires a lot of contextual information and forward planning for things that don’t exist today.
Right now it’s throwing spaghetti at the wall and you’re drawing around it.
beefnugs|7 months ago
Insanity driven development: altering your api to accept 7 levels of "broken and different" structures so as to bend to the will of the llms
JimDabell|7 months ago
> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.
— https://www.threads.com/@jimdabell/post/DLek0rbSmEM
I guess it’s true for product features as well.
jjcm|7 months ago
> Maybe hallucinations of vibe coders are just a suggestion those API calls should have existed in the first place.
> Hallucination-driven-development is in.
https://x.com/pwnies/status/1922759748014772488?s=46&t=bwJTI...
AdieuToLogic|7 months ago
>> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.
A detailed counterargument to this position can be found here[0]. In short, what is colloquially described as "LLM hallucinations" do not serve any plausible role in software design other than to introduce an opportunity for software engineers to stop and think about the problem being solved.
See also Clark's third law[1].
0 - https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphiz...
1 - https://en.wikipedia.org/wiki/Clarke%27s_three_laws
shermantanktop|7 months ago
The users are different, the music that is notated is different, and for the most part if you are on one side, you don't feel the need to cross over. Multiple efforts have been made (MusicXML, etc.) to unify these two worlds into a superset of information. But the camps are still different.
So what ChatGPT did is actually very interesting. It hallucinated a world in which tab readers would want to use Soundslice. But, largely, my guess is they probably don't... today. In a future world, they might? Especially if Soundslice then enables additional features that help tab readers get more out of the result.
adrianh|7 months ago
gortok|7 months ago
It’s not that they added a new feature because there was demand.
They added a new feature because technology hallucinated a feature that didn’t exist.
The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.
That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign as it was this time.
nomel|7 months ago
This would be a world without generative AI available to the public, at the moment. Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists, which are both completely irrational, since many people are finding practical value in its current imperfect state.
The current state of LLM is useful for what it's useful for, warnings of hallucinations are present on every official public interface, and its limitations are quickly understood with any real use.
Nearly everyone in AI research is working on this problem, directly or indirectly.
bravesoul2|7 months ago
lexandstuff|7 months ago
rbits|7 months ago
unknown|7 months ago
[deleted]
aaron695|7 months ago
[deleted]
JimDabell|7 months ago
Meanwhile, sensible people have concluded that, even though it isn’t perfect, Wikipedia is still very, very useful – despite the possibility of being misled occasionally.
ahstilde|7 months ago
viccis|7 months ago
toss1|7 months ago
And in this case, OP didn't have to take ChatGPT's word for the existence of the pattern, it showed up on their (digital) doorstep in the form of people taking action based on ChatGPT's incorrect information.
So, pattern noticed and surfaced by an LLM as a hallucination, people take action on the "info", nonzero market demand validated, vendor adds feature.
Unless the phantom feature is very costly to implement, seems like the right response.
bredren|7 months ago
bravesoul2|7 months ago
deweller|7 months ago
Some people express concerns about AGI creating swarms of robots to conquer the earth and make humans do its bidding. I think market forces are a much more straightforward tool that AI systems will use to shape the world.
ACCount36|7 months ago
One of the most dangerous systems an AI can reach and exploit is a human being.
jrochkind1|7 months ago
I know nothing about this. I imagine people are already working on it, wonder what they've figured out.
(Alternatively, in the future can I pay OpenAI to get ChatGPT to be more likely to recommend my product than my competitors?)
londons_explore|7 months ago
So winning AI SEO is not so different than regular SEO.
unknown|7 months ago
[deleted]
oasisbob|7 months ago
toomanyrichies|7 months ago
1. https://en.wikipedia.org/wiki/Rogue
2. https://en.wikipedia.org/wiki/Rouge_(cosmetics)
PeterStuer|7 months ago
NooneAtAll3|7 months ago
simonw|7 months ago
PeterStuer|7 months ago
hnlmorg|7 months ago
Your solution is the equivalent of asking Google to completely delist you because one page you don't want ended up in Google's search results.
mudkipdev|7 months ago
LinXitoW|7 months ago
This seems similar, and like a decent indicator that most people (aka the average developer) would expect X to exist in your API.
felixarba|7 months ago
I find it interesting that any user would attribute this issue to Soundslice. As a user, I would be annoyed that GPT is lying and wouldn't think twice about Soundslice looking bad in the process
romanhn|7 months ago
Sharlin|7 months ago
pphysch|7 months ago
OTOH it's free(?) advertising, as long as that first impression isn't too negative.
adamgordonbell|7 months ago
chaboud|7 months ago
I also went back to just sleeping on those flights and using connected models for most of my code generation needs.
andybak|7 months ago
jivings|7 months ago
We get ~50% of traffic from ChatGPT now; unfortunately, a large share of the features it says we have are made up.
I really don't want to get into a state of ChatGPT-driven development, as I imagine that would be never-ending!
[1]: https://x.com/JamesIvings/status/1929755402885124154
rorylaitila|7 months ago
colechristensen|7 months ago
Example:
https://llama-cpp-agent.readthedocs.io/en/latest/structured-...
alex-moon|7 months ago
copirate|7 months ago
Workaccount2|7 months ago
I already strongly suspect that LLMs are just going to magnify the dominance of python as LLMs can remove the most friction from its use. Then will come the second order effects where libraries are explicitly written to be LLM friendly, further removing friction.
LLMs write code best in python -> python gets used more -> python gets optimized for LLMs -> LLMs write code best in python
zamadatix|7 months ago
jjani|7 months ago
We don't live in a nice world, so you'll probably end up right.
_1tem|7 months ago
wrsh07|7 months ago
Some users might share it. ChatGPT has so many users it's somewhat mind boggling
jpadkins|7 months ago
If the super-intelligent AI understands human incentives and is in control of a very popular service, it can subtly influence people to its agenda by using the power of mass usage. Like how a search engine can influence a population's view of an issue by changing the rankings of news sources that it prefers.
zzo38computer|7 months ago
1. I might consider a thing like that like any other feature request. If not already added to the feature request tracker, it could be done. It might be accepted or rejected, or more discussion may be wanted, and/or other changes made, etc, like any other feature request.
2. I might add a FAQ entry specifying that it does not have such a feature, and that ChatGPT is wrong. This does not necessarily mean it will never be added, if there is a good reason to do so. If there is a good reason not to include it, that will be mentioned too. Other programs that can be used instead might also be mentioned, in case this one doesn't work.
Also note that in the article, the second ChatGPT screenshot has a note on the bottom saying that ChatGPT can make mistakes (which, in this case, it does). Their program might also be made to detect ChatGPT screenshots and to display a special error message in that case.
cactegra|7 months ago
[deleted]
insane_dreamer|7 months ago
insapio|7 months ago
> Correct feature almost exists
> Creator profile: analytical, perceptive, responsive;
> Feature within product scope, creator ability
> Induce demand
> await "That doesn't work" => "Thanks!"
> update memory
PeterStuer|7 months ago
philk10|7 months ago
nosioptar|7 months ago
jonathaneunice|7 months ago
Figuring out the paths that users (or LLMs) actually want to take—not based on your original design or model of what paths they should want, but based on the paths that they actually do want and do trod down. Aka, meeting demand.
amradio1989|7 months ago
The user is not going to understand this. The user may not even need that feature at all to accomplish whatever it is they're doing. Alternatives may exist. The consequences will be severe if companies don't take this seriously.
jagged-chisel|7 months ago
It really likes to cogitate on code from several versions ago. And it often insists repeatedly on edits unrelated to the current task.
I feel like I'm spending more time educating the LLM. If I can resist the urge to lean on the LLM beyond its capabilities, I think I can be productive with it. If I'm going to stop teaching the thing, the least it can do is monitor my changes and not try to make suggestions from the first draft of code from five days ago, alas ...
1 - e.g. a 500-line text file representing values that will be converted to enums, with varying adherence to some naming scheme - I start typing, and after correcting the first two, it suggests the next few. I accept its suggestions until it makes a mistake because the data changed, start manual edits again ... I repeated this process for about 30 lines and it successfully learned how I wanted the remainder of the file edited.
colechristensen|7 months ago
strogonoff|7 months ago
On the other hand, adding a feature because you believe it is a feature your product should have, a feature that fits your vision and strategy, is a pretty sound approach that works regardless of what made you think of that feature in the first place.
dietr1ch|7 months ago
I recall that early on a coworker was saying that ChatGPT hallucinated a simpler API than the one we offered, albeit with some easy to fix errors and extra assumptions that could've been nicer defaults in the API. I'm not sure if this ever got implemented though, as he was from a different team.
oytis|7 months ago
ecshafer|7 months ago
p0nce|7 months ago
sim7c00|7 months ago
what a wonderful incident / bug report my god.
totally sorry for the trouble and amazing find and fix honestly.
sorry i am more amazed than sorry :D. thanks for sharing this !!
sim7c00|7 months ago
so i am happy you implemented this, and will now look at using your service. thx chatgpt, and you.
mrcwinn|7 months ago
dr_dshiv|7 months ago
bux93|7 months ago
nottorp|7 months ago
Maybe I'll turn it into a feature request then ...
iugtmkbdfil834|7 months ago
linsomniac|7 months ago
jbaber|7 months ago
pinter69|7 months ago
Had something similar happen to us with our dev-tools SaaS. Non-devs started coming to the product because GPT told them about it. We had to change parts of the onboarding and integration to accommodate non-devs who were having a harder time reading the documentation and understanding what to do.
swalsh|7 months ago
nicbou|7 months ago
spogbiper|7 months ago
mikewarot|7 months ago
It'll all be fine in a few years. :-;
zkmon|7 months ago
kunzhi|7 months ago
If that's too scary, the failed tool call could trigger another AI to go draft up a PR with that proposed tool, since hey, it's cheap and might be useful.
garfij|7 months ago
Dynamic, on-the-fly generation & execution is definitely fascinating to watch in a sandbox, but is far too scary (from a compliance/security/sanity perspective) without spending a lot more time on guardrails.
We do however take note of hallucinated tool calls and have had it suggest an implementation we start with and have several such tools in production now.
It's also useful to spin up any completed agents and interrogate them about what tools they might have found useful during execution (or really any number of other post-process questionnaire you can think of).
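A lightweight version of that note-taking can live in the tool dispatcher itself. A hedged sketch with invented tool names, not any particular framework's API:

```python
from collections import Counter

# Invented example tool registry.
KNOWN_TOOLS = {"search_docs": lambda args: {"status": "ok"}}

# Demand signal: hallucinated tool names, reviewed by humans later
# as candidate features to actually build.
hallucinated_tools = Counter()

def dispatch(name, args):
    """Route an LLM tool call; record unknown (hallucinated) tools instead of
    silently dropping them."""
    if name in KNOWN_TOOLS:
        return KNOWN_TOOLS[name](args)
    hallucinated_tools[name] += 1
    return {"status": "error", "message": f"unknown tool {name!r}"}
```

The counter gives you exactly the artifact described above: a ranked list of tools the model kept wishing existed.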
thih9|7 months ago
antonvs|7 months ago
It was a plausible answer, and the core of what these models do is generate plausible responses to (or continuations of) the prompt they’re given. They’re not databases or oracles.
With errors like this, if you ask a followup question it’ll typically agree that the feature isn’t supported, because the text of that question combined with its training essentially prompts it to reach that conclusion.
Re the follow-up question, it’s almost certainly the direction that advertising in general is going to take.
poulpy123|7 months ago
swalsh|7 months ago
tosh|7 months ago
amelius|7 months ago
adrianh|7 months ago
https://www.soundslice.com/help/en/player/advanced/17/expand...
That's available for any music in Soundslice, not just music that was created via our scanning feature.
shhsshs|7 months ago
mehulashah|7 months ago
pentagrama|7 months ago
ruperthair|7 months ago
anovikov|7 months ago
pkilgore|7 months ago
ternaus|7 months ago
excalibur|7 months ago
"Would you still have added this feature if ChatGPT hadn't bullied you into it?" Absolutely not.
I feel like this resolves several longstanding time travel paradox tropes.
burnt-resistor|7 months ago
mbf|7 months ago
guluarte|7 months ago
jongjong|7 months ago
lpzimm|7 months ago
Archit_lal_|7 months ago
moomin|7 months ago
iachilo|7 months ago
jedbrooke|7 months ago
sambapa|7 months ago
Ashkee|7 months ago
unknown|7 months ago
[deleted]
kelseyfrog|7 months ago
mnw21cam|7 months ago
pmontra|7 months ago
If a feature has enough customers to pay for itself, develop it.
xp84|7 months ago
petesergeant|7 months ago
Neat
> My feelings on this are conflicted
Doubt
northisup|7 months ago
lofaszvanitt|7 months ago
scinadier|7 months ago
johnea|7 months ago
giancarlostoro|7 months ago
jxjnskkzxxhx|7 months ago
ChrisMarshallNY|7 months ago
josefritzishere|7 months ago
inglor_cz|7 months ago
nottorp|7 months ago
This is generally how you work with LLMs.
marcosdumay|7 months ago
kookamamie|7 months ago
davidmurphy|7 months ago
jedwards1211|7 months ago
careful_ai|7 months ago
[deleted]
aaron695|7 months ago
"We’ve got a steady stream of new users" and it seems like a simple feature to implement.
This is the exact chaos AI brings that's wonderful. Forcing us to evolve in ways we didn't think of.
I can think of a dozen reasons why this might be bad, but I see no reason why they have more weight than the positive here.
Take the positive side of this unknown and run with it.
We have decades more of AI coming up, Debbie Downers will be left behind in the ditch.
wafflebot|7 months ago
[deleted]
b0a04gl|7 months ago
[deleted]
123sereusername|7 months ago
[deleted]
homeless_engi|7 months ago
[deleted]
unknown|7 months ago
[deleted]
sandwiched|7 months ago
[deleted]
Applejinx|7 months ago
No, because you'll be held responsible for the misinformation being accurate: users will say it is YOUR fault when they learn stuff wrong.
carlosjobim|7 months ago
toomanyrichies|7 months ago
tempestn|7 months ago
SunkBellySamuel|7 months ago
yieldcrv|7 months ago
unknown|7 months ago
[deleted]
zitterbewegung|7 months ago
dingnuts|7 months ago
tomhow|7 months ago
We detached this subthread from https://news.ycombinator.com/item?id=44492212 and marked it off topic.
simonw|7 months ago
avalys|7 months ago
alwa|7 months ago