top | item 41315138

I'm tired of fixing customers' AI generated code

541 points | BitWiseVibe | 1 year ago | medium.com

330 comments


gumby|1 year ago

> Helping a customer solve challenges is often super rewarding, but only when I can remove roadblocks for customers who can do most of the work themselves.

One thing I loved about doing technical enterprise sales is that I’d meet people doing something I knew little or nothing about and who didn’t really understand what we offered but had a problem they could explain and our offering could help with.

They’d have deep technical knowledge of their domain and we had the same in ours, and there was just enough shared knowledge at the interface between the two that we could have fun and useful discussions. Lots of mutual respect. I’ve always enjoyed working with smart people even when I don’t really understand what they do.

Of course there were also idiots, but generally they weren’t interested in paying what we charged, so that was OK.

> Helping a customer solve challenges is often super rewarding, but only when I can remove roadblocks for customers who can do most of the work themselves.

So I feel a lot of sympathy for the author — that would be terribly soul sucking.

I guess generative grammars have increased the number of "I have a great idea for a technical business, I just need a technical co-founder" people who think that an idea is 90% of it and have no idea what technical work actually is.

alex-moon|1 year ago

This is honestly something I'm grateful for a lot of the time. I'm presently running a tech start-up in a highly technical domain (housebuilding, in a word) which also happens to be pretty hostile to businesses. People look at a planning application like "Why are there hundreds of documents here?" and it's because yeah, it is hard - there are huge numbers of variables to take into account, and the real "art" of urban design is solving for all of them at once. Then you send it to planning and basically no-one is happy, why haven't you done this and what are you going to do about that. You have to be pretty creative to survive.

Before that, I worked in a digital print organisation with a factory site. This factory did huge volumes on a daily basis. It was full of machines. They had built up a tech base over years, decades, and it was hyper-optimised - woe betide any dev who walked into the factory thinking they could see an inefficiency that could be refactored out. It happened multiple times - quite a few devs, myself included, learned this lesson the hard way - on rare occasion thousands of lines of code had to be thrown out because the devs hadn't run it past the factory first.

It's an experience I'd recommend to any dev - build software for people who are not just "users" of technology but builders themselves. It's not as "sexy" as building consumer-facing tech, but it is so much more rewarding.

cl3misch|1 year ago

Your second quote is the same as the first one. Did you copy the same one twice by accident?

mananaysiempre|1 year ago

Please also consider this when a localization contractor advertises lower costs by having human editors go over machine translations, then.

alexeiz|1 year ago

I had a related episode at work when my coworker asked me why his seemingly trivial 10-line piece of code was misbehaving inexplicably. It turned out he had two variables, `file_name` and `filename`, and used one in place of the other. When I asked how he had ended up with such code, he said he'd used Copilot to create it. Using code from a generative AI without understanding what it does is never a good idea.
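A minimal reconstruction of that kind of bug (all names here are invented for illustration): two near-identical variables, one normalized and one raw, with the wrong one read later. A linter would flag the cleaned copy as assigned but never used.

```python
# Hypothetical sketch of the mix-up: `filename` holds the cleaned value,
# but the later line reads the raw `file_name` instead.
def build_header(file_name):
    filename = file_name.strip()        # normalized copy... never used
    return f"Report for {file_name}"    # bug: should reference `filename`

print(build_header("  data.csv  "))     # keeps the padded, raw name
```

The code runs without error, which is exactly why the bug is so easy to miss without tooling.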

delusional|1 year ago

We hired a new guy at work. In one of his first tasks he had chosen to write some bash, and it was pure nonsense. I mean it contained things like:

if [ -z "${Var}+x" ]

Where I can see what the author was trying to do, but the code is just wrong.

I don't mind people not knowing stuff, especially when it's essentially Bash trivia. But what broke my heart was when I pointed out the problem and linked to the documentation, but received the response "I don't know what it means, I just used Copilot", followed by him just removing the code.

What a waste of a learning opportunity.
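For the record, the idiom the snippet was presumably reaching for is the POSIX "is this variable set?" test, with the `+x` inside the braces:

```shell
# ${Var+x} expands to "x" when Var is set (even if empty) and to nothing
# when Var is unset, so -z on that expansion tests for "unset".
unset Var
[ -z "${Var+x}" ] && echo "Var is unset"

Var=""
[ -n "${Var+x}" ] && echo "Var is set (though empty)"
```

As originally written, `"${Var}+x"` always contains the literal `+x`, so the `-z` test can never succeed.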

Tainnor|1 year ago

And any decent IDE will highlight a variable that is declared but unused. We already have "artificial intelligence" in the form of IDEs, linters, compilers, etc. but some people apparently think we should just throw it all away now that we have LLMs.

frumper|1 year ago

I knew a guy that made a good living as a freelance web developer decades ago. He would pretty much just copy and paste code from tutorials or stack overflow and had no real idea how anything worked. Using code without understanding it is never a good idea, it doesn’t need to be from AI for that to be true.

yawnxyz|1 year ago

Claude gave me something similar, except these were both used, and somehow global variables, and it got confused about when to use which one.

Asking it to refactor / fix it made it worse because it'd get confused and merge them into a single variable — the problem was they had slightly different uses, which broke everything.

I had to step through the code line by line to fix it.

Using Claude is still faster for me, as it'd probably have taken me a week to write the code in the first place.

BUT there are probably a lot of traps like this hidden everywhere, and they will rear their ugly heads at some point. Wish there was a good test-generation tool to go with the code-generation tool...

Glyptodon|1 year ago

At least for me, stupid bugs like this turn out to be some of the most time-wasting to debug, no AI involved. Like accidentally quoting something somewhere, or adding an 's' to a variable by accident, and I may not even correctly process what the error message is reporting at first. I always feel a bit silly afterward.

ben_w|1 year ago

> Using code from a generative AI without understanding what it does is never a good idea.

True, but the anecdote doesn't prove the point.

It's easy to miss that kind of difference even if you wrote the code yourself.

morgango|1 year ago

Interestingly, a great application for GenAI is to copy and paste code and ask it, "Why is this not working?". It works even better if you give it the specific error you are getting (and it is a well understood system).

bckr|1 year ago

> Using code from a generative AI without understanding what it does is never a good idea.

Yes.

AI as a faster way to type: Great!

AI as a way to discover capabilities: OK.

Faster way to think and solve problems: Actively harmful.

berniedurfee|1 year ago

I burned a couple hours debugging some generated code only to finally realize copilot was referencing a variable as ‘variableO1’.

Artificial Incompetence indeed!

planb|1 year ago

In my experience this is exactly the kind of mistake an AI would not make.

mooreds|1 year ago

> Using code from a generative AI without understanding what it does is never a good idea.

Hear hear!

I feel like genAI is turning devs from authors to editors. Anyone who thinks the latter is lesser than the former has not performed both functions. Editing properly, to elevate the meaning of the author, is a worthy and difficult endeavor.

EVa5I7bHFq9mnYK|1 year ago

Sounds like JavaScript "code". A normal language with a proper type system would not allow that.

FanaHOVA|1 year ago

Copilot wouldn't make a typo. He just made that up and / or broke the code himself.

TillE|1 year ago

I'm always a little surprised at how many people out there want to develop software yet haven't put in the effort to gain even the most basic computer nerd programming chops. You see this all the time in the more newbie-friendly game engine communities.

Maybe you don't want to pursue a career in software, but anyone can spend a week learning Python or JavaScript. I suspect/hope a lot of these people are just kids who haven't gotten there yet.

hamandcheese|1 year ago

I think you overestimate the amount of skill a newb can quickly gain on their own. I taught myself to code, but it took a whole summer (aka free time that adults don't get) and I had access to my dad (who was a software engineer himself) to answer lots of questions.

viccis|1 year ago

One of my favorite intern stories was a kid who was a compsci senior at a very good university and who was assigned to write some Python code for my team. He had the very immature "Python is a baby's language" attitude. He wasn't working on my stuff, so I didn't really keep track of what he was doing, but a few weeks later I looked at what he had written. Almost all of his Python functions had one parameter with an asterisk, and he did a len() check on it, printing an error and returning an integer if it wasn't the right number of function arguments. Turns out this guy learned this behavior from Perl, used an asterisk because why not, he always does in C, and was just manually unpacking every function argument and using a C-style error-return handling process.

Still the most insane thing I've seen, but I know there are a lot of kids out of college who got used to gen AI for code writing who put out a lot of this kind of code. Also, coincidentally, we haven't hired any US college interns in about 3 years or so.
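The pattern described above might look something like this (a reconstructed guess; the function names are invented):

```python
# Perl/C-flavored version: variadic signature, manual arity check,
# integer sentinel on error that the caller has to remember to test.
def add_c_style(*args):
    if len(args) != 2:
        return -1
    a, b = args
    return a + b

# Idiomatic version: wrong arity simply raises TypeError on its own.
def add(a, b):
    return a + b

print(add_c_style(1, 2, 3))  # -1, an "error" the caller can silently ignore
print(add(1, 2))             # 3
```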

mrbombastic|1 year ago

A week is not anywhere close to enough to learn programming in any meaningful way for someone with no experience.

jprete|1 year ago

Coding requires a willingness to understand and manipulate systems made of unbreakable rules. Most people don't want to deal with such an uncompromising method of communication.

nsonha|1 year ago

> You see this all the time in the more newbie-friendly game engine

Games tend to attract young people (read: beginners), but at the same time game programming's barrier to entry is pretty high, with maths & physics, low-level graphics programming, memory management, low-level languages and dependencies, OOP... It's almost obvious that this should be the case; every kid interested in coding I've talked to wants to do something with games.

tensor|1 year ago

I'm not surprised at all. Honestly the "I don't need to learn that" mentality is common in tech even in people who call themselves senior developers. It's especially noticeable in the hostility of many towards the sorts of information you learn in a good computer science degree.

How many arguments have we heard here along the lines of "why teach algorithms, universities should be teaching _insert_fad_technology_of_the_day_"? Big O and time complexity are a special favourite for people to pick on and accuse of being useless or the like. You see it in arguments around SQL vs document databases, in people not being willing to recognize that their for loops are in fact performing joins, and in people unwilling to recognize that yes, they have a data schema even if they don't write it down.

So I'm not surprised at all that people would use AI as a substitute for learning. Those same people have likely gotten by with stackoverflow copypasta before gen AI came about.

soared|1 year ago

Game dev is a fun hobby some people like to mess around with, just like any other hobby. Doesn’t mean they’ll be experts or know what they’re doing, but they’ll try and probably ask some basic questions online.

I mess around in Godot and GameMaker and can write some shitty code there, but I've never written a line of code for work. I just like messing around for fun.

randomdata|1 year ago

Seems like the natural progression from end goal to breaking it down into the smaller and smaller pieces required to see the goal through, as people have always done.

Before LLMs you'd probably have to reach for learning Python or Javascript sooner, at least if StackOverflow didn't have the right code for you to copy/paste, but I expect anyone who sticks with it will get there eventually either way.

triyambakam|1 year ago

I think people often don't know where to begin.

rurp|1 year ago

I agree with the part that someone who wants to build something technical should gain at least some related knowledge, but a week is underselling the effort needed to learn how to code by a lot. After one week of teaching myself Python I couldn't code my way out of a paper bag, and I'm someone who enjoyed it enough to stick with it. The average person would need at least 10x that amount of time to be able to start building something interesting.

creesch|1 year ago

I am not; I see it happening even within companies. They figure that for some junior tech-related roles they don't need to hire people with the education and can just train them in house. Often not development itself, but things like automated testing in a QA role.

The result is people with no technical background, no real interest in it either, and no basic framework to start from, learning to use a specific set of tools with only a very basic understanding of programming.

dylan604|1 year ago

There's a common phrase that founders should code, but not all founders are coders. So when the startup is small and the founders want to contribute by testing PoCs, the chatbots get used by those founders who can't code. Lucky for me, the PoC is just that, and the real implementation can proceed without shimming the PoC in directly.

I cringe every time they mention using the bots, but luckily it has been controllable.

_xiaz|1 year ago

Anyone who already programs for a couple of years can spend a week learning $lang. Learning programming for the first time takes a long while and a lot of effort. I'd say a couple of months if you're bright and motivated. Possibly a year or two if you're not.

stavros|1 year ago

> I'm always a little surprised at how many people out there want to develop software yet haven't put in the effort to gain even the most basic computer nerd programming chops.

If you're surprised by reality, that says something about your mental model, not about reality. People don't want to "learn programming", they want to "make a thing". Some people learn programming while making a thing, but why learn about how the sausage is made when all you want is to eat it?

userbinator|1 year ago

Cryptocurrency trading tools? The susceptibility of people to get-rich-quick scams and the desire not to do even the minimum of work are surely correlated.

Stop poisoning the well and then complaining that you have to drink from it.

jonplackett|1 year ago

I would suggest adding a help section with advice for people using ChatGPT that sets expectations and also gives a pre-written prompt for them to use.

Something like.

Some of you may use ChatGPT/Copilot to help with your coding. Please understand this service is designed for professional programmers and we cannot provide extensive support for basic coding issues, or code your app for you.

However if you do want to use ChatGPT here is a useful starting prompt to help prevent hallucinations - though they still could happen.

Prompt:

I need you to generate code in ____ language using only the endpoints described below

[describe the endpoints and include all your docs]

Do not use any additional variables or endpoints. Only use these exactly as described. Do not create any new endpoints or assume any other variables exist. This is very important.

Then give it some examples in curl / fetch / axios / Python / etc.

Maybe also add some instructions to separate out code into multiple files / endpoints. ChatGPT loves to make one humungous file that’s really hard to debug.

ChatGPT works fairly well if you know how to use it correctly. I would definitely not trust it with my crypto, though, but I guess if some people wanna do that, that's up to them. May as well try and help them!
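For the "give it some examples" step, even a tiny runnable snippet helps anchor the model. Everything below (base URL, endpoint, parameter name) is a made-up placeholder to show the shape, not a real API:

```python
import urllib.request

# Hypothetical docs example; substitute the real base URL and endpoints.
BASE_URL = "https://api.example.com/v1"

def price_request(symbol: str) -> urllib.request.Request:
    """Build a GET request for the (invented) /price endpoint."""
    return urllib.request.Request(f"{BASE_URL}/price?symbol={symbol}")

print(price_request("BTC").full_url)
```

Pairing a snippet like this with the "use only these endpoints" instruction gives the model a concrete template to imitate rather than invent.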

delifue|1 year ago

Someone mentioned "hallucination-based API design" on Twitter (I cannot find it now). It's designing an API around LLM hallucinations: if there is a commonly hallucinated API call, just add that API. This will make the API more "friendly", since it resembles common similar APIs.

Considering that LLMs can hallucinate in different ways unpredictably, I'm not sure whether it will work in practice.

Too|1 year ago

Internet Explorer tried that 20 years ago: allowing all kinds of errors in the HTML to just slip through and attempting to make something good out of them, forcing everyone else to be bug-for-bug compatible.

Also commonly referred to as the robustness principle; “be conservative in what you do, be liberal in what you accept from others“. An approach that is now considered an anti-pattern, for such future compatibility reasons.

Flop7331|1 year ago

Let's all give ourselves extra prosthetic fingers and mutilate our ears while we're at it.

RegW|1 year ago

I suppose it would be relying on being trained on code that followed good practice. If this is true then we might suppose that this API isn't following good practice. However, a gigantic feedback loop is appearing on the horizon.

The AI of tomorrow will be trained on the output of AI today.

(Somewhere in my memory, I hear an ex-boss saying "Well that's good - isn't it?")

netcan|1 year ago

Idk if "hallucination-based API design" specifically is The Way.

There might be other ways of achieving the same goal. Also, LLM hallucination is changing/improving quite rapidly.

That said, "Designed for LLM" is probably a very productive pursuit. Puts you in the right place to understand the problems of the day.

sim7c00|1 year ago

I doubt LLM hallucinations will produce good secure code.

In my opinion, using code from LLMs, which might see your program come to life a bit quicker, will only increase the time needed for debugging and testing, as there might be bugs and problems in there ranging from trivial things (unused variables) to very subtle and hard-to-find logic issues which require a deeper knowledge of the libraries and frameworks cobbled together by the LLM.

Additionally, it erodes a lot of the knowledge of these things in the long run, so people will find it more and more challenging to do this testing and debugging phase properly.

Raicuparta|1 year ago

You're going at it all wrong. You just need another LLM on the other end of the API too.

paxys|1 year ago

I can guarantee that all these users are following some "hustle university" course peddled by a Twitter influencer. Crypto and AI are the two favorite words of all these get rich quick scams.

b112|1 year ago

You can be a Google dev, and make half a million a year! For only $29.95, we'll show you how to empower yourself with the wonders of AI!

pvillano|1 year ago

I won't say all investors are entitled and overconfident, inspired by grifters, emboldened by survivorship bias, and motivated by greed. That would be rude

amai|1 year ago

I'm tired of fixing my colleagues' AI-generated code. They churn out a lot of code in a short amount of time, but then we lose the saved time again because during pull request review they often cannot explain what this code is actually doing. Maybe I should use an AI for code review, too?

throwuxiytayq|1 year ago

Why are these people employed? Isn’t that a bit like a fake employee who outsources his work behind your back? You can’t work with the guy because he literally doesn’t even know or understand the code he’s pushing to the repo

bruce511|1 year ago

A tiered approach to sales can help here.

Cheap version offers minimal support. (Although you still have to sift "bug reports" into my problem/ your problem.)

Standard version allows for more support (but still rate limited.)

Developer version charges for time spent, in advance.

This helps because people only expect free support if you don't explicitly offer something else. If you offer paid support, then it's reasonable and expected that there are limits on free support.

Pikamander2|1 year ago

> Often this takes the form of trying to access an endpoint that does not exist, or read a property off the API response that does not exist. After probing a bit more, my suspicions are usually confirmed — ChatGPT hallucinated that endpoint or property

In some cases, you might be able to use this to your advantage to improve the product.

When working with third-party APIs, I've often run into situations where my code could be simplified greatly if the API had an extra endpoint or parameter to filter certain data, only to be disappointed when it turns out to have nothing of the sort.

It's possible that ChatGPT is "thinking" the same thing here; that most APIs have an X endpoint to make a task easier, so surely yours does too?

Over time I've sent in a few support tickets with ideas for new endpoints/parameters and on one occasion had the developer add them, which was a great feeling and allowed me to write cleaner code and make fewer redundant API calls.

ben_w|1 year ago

> It's possible that ChatGPT is "thinking" the same thing here; that most APIs have an X endpoint to make a task easier, so surely yours does too?

While this is possible, I would caution with an anecdote from my ongoing side project of "can I use it to make browser games?", in which 3.5 would create a reasonable Vector2D class and then get confused and try to call .mul() and .sub() instead of the .multiply() and .subtract() that it had just created.

Sometimes it's exceptionally insightful, other times it needs to RTFM.

t_mann|1 year ago

another consideration: if a popular AI model hallucinates an endpoint for your API for one customer, chances are another customer will run into the same situation

gorbachev|1 year ago

I've been saying for a while now that there's an absolute gold mine waiting for people who want to specialize in fixing AI generated applications.

A lot of businesses are going to either think they can have generative AI create all of their apps with the help of a cousin of the wife of the accountant, or they'll unknowingly contract a "10x developer" from Upwork or the like who uses generative AI to create everything. Once they realize how well that's working out, the smart ones will start from scratch; the not-so-smart will attempt to fix it.

Develop a reputation for yourself for getting companies out of that pickle, and you can probably retire early.

paretoer|1 year ago

Maybe for a very short amount of time.

I suspect this quickly will be like specializing in the repair of cheap, Chinese made desk lamps from Walmart.

If the cheap desk lamp breaks, you don't pay someone to fix it. You buy another one, and often it will be a newer, better model. That is the value proposition.

Of course, the hand crafted, high end desk lamp will be better but if you just want some light, for many use cases, the cheap option will be good enough.

PeterStuer|1 year ago

Doesn't solve the problem that the budget they had in mind for the app was $300, while an experienced dev can see immediately that this is going to be a $20K project for v1, before change requests.

djeastm|1 year ago

>I've been saying for a while now that there's an absolute gold mine waiting for people who want to specialize in fixing AI generated applications.

The real savvy ones will use later generations of LLMs to fix the output of early ones

adverbly|1 year ago

Another concern is around reviewing it.

I can't tell in a pull request what someone wrote themselves, or to what level of detail they pre-reviewed the AI code which is now part of the pull request before letting it get to me. I can tell you I don't want to be fixing someone else's AI-generated bugs, though... Especially given that AI writes less DRY, more verbose code, and increases code churn in general.

hoosieree|1 year ago

Just add more AI, that'll solve everything.

[edit] unfortunately I think I need to point out the above is sarcasm. Because there really are people using AI to review AI-generated code, and they do not see the problem with this approach.

pacoWebConsult|1 year ago

Seems to me like you have an opportunity to develop a couple of SDKs in your customers' favorite languages (probably Python and TypeScript) and a simple "Get Started" template that could alleviate a lot of these requests. Show them the base case, let them figure out how to work with your API via an SDK instead of directly with the API, and let the advanced users build their own SDKs in whatever language they prefer. Since it's, as OP claims, a simple HTTP API, the SDK could be generated with OpenAPI client-generation tools.

maxidorius|1 year ago

And then the customers will open support requests for code generated by an AI that misuses that very SDK. It doesn't look like OP's issue is with the code per se, only with the lack of skills of their customers, regardless of the code they write...

jdance|1 year ago

Seems like a good way to have a bunch of new products to also support

bdcravens|1 year ago

This seems like a support issue, not an AI issue. AI is how the code was written, but the issue would be the same if it was amateurs writing bad code. If all you want to do is support your API, a support article outlining the issues you see over and over would be something to point your customers to. Warrant your API against errors, but point out that anything more is billable work. If you're not interested, partner with someone to do that work. You could even offer support contracts that include some amount of customization.

omoikane|1 year ago

This post seems to be saying that AI opened up a new avenue for people to demand free work.

If someone asked, "I wrote some rough designs and interfaces, can you write me an app for free?", the author could easily detect it as a request for free work. But because the first part is hidden behind some ChatGPT-generated code and the second part is disguised as a request for help, the author was tricked into doing this free work, until they detected the pattern and wrote a blog post about it.

prisenco|1 year ago

That's like saying "seems like the problem is the internet is filled with low-quality content" in response to AI bots when, while not wrong, the new problem is that we've created a way to accelerate the creation of that low-quality content by many orders of magnitude.

So what was a difficult problem can quickly become insurmountable.

SpicyLemonZest|1 year ago

It's a support issue in a sense, but in many contexts people want to offer a better support experience than "anything more is billable work". A reputation for being helpful and customer-friendly is valuable, especially in a business where you're selling to other programmers, and you can't buy that reputation after the fact with money from support contracts.

m463|1 year ago

> amateurs writing bad code

in volume, this turns into support writing the code.

I think of how the South Park movie folks sent so much questionable content to the censors that the compromise in the end let through lots of the puppet sex.

fzeroracer|1 year ago

The difference is scale. I don't know how many times people need to say this, but LLM tools enable people to spam low quality code at a rate that is far faster than ever.

There's been multiple stories and posts here on HN about issues with AI generated PRs for open source repos because of people using them to game numbers or clout. This is a similar problem where you have a bunch of people using your API and then effectively trying to use you for free engineering work.

anonzzzies|1 year ago

I have a business fixing broken code/systems (especially if it is stressful and last minute); if you are tired/annoyed of something in the software market, just up your fees. For us not much changed; a lot of badly (throw the spec over the wall) outsourced software was fairly bad since forever; AI generated code is similar. I guess this will grow even faster though, as normally solid developers might take on a lot more work and get sloppy with AI.

lagniappe|1 year ago

There's room for all of us in this industry. What someone is unwilling to do is just an opportunity for someone else to pick up the yoke.

nkrisc|1 year ago

I have the sense that most of these people won’t be willing to pay anything to have their code fixed.

aunty_helen|1 year ago

This could actually be an ingenious way of solving the problem. If someone has a support issue and can't solve it themselves, yet requires coding help, forward them to a freelancer they can hire for $20/hr from Upwork who knows this API well, etc.

shireboy|1 year ago

I can empathize, but am also wondering if some of these are feature requests in disguise. For “how to call api” docs, sample code, and even client libraries can be generated from your OpenAPI specs. Link to the docs in every reply. The more complex asks could be translated “build this endpoint and charge me for it”. If all else fails, set up partnership with devs who want to fix/build customers crap and figure out some copy to direct the more complex asks to them.

jollyllama|1 year ago

Agreed, client libs and sample applications have fallen by the wayside and could provide a solution here. It makes it very obvious to the customer when they download and run something that works, and then their changes break it, that the issue is with their changes.

tazu|1 year ago

Hilariously, the target market for the author's API seems to be the same as the top post on HN today[0]: "traders".

I think amateur "trading" attracts a specific brand of idiot that sits high and to the left on the Dunning-Kruger curve. While taking money from idiots is a viable (even admirable) business strategy, you may want to fully automate customer service to retain your sanity.

[0]: https://news.ycombinator.com/item?id=41308599

bofadeez|1 year ago

It's people who generally don't have any other skills and reject all evidence for Efficient Market Hypothesis. They legitimately think what they're doing is not gambling. No amount of empirical evidence can convince them they have no risk-adjusted alpha

creesch|1 year ago

Yeah the customer demographic here likely does worsen the situation. Although I am sure that this is happening elsewhere as well.

lacoolj|1 year ago

lol this is just like trying to help people in programming discords that are literally using AI on screen to write and rewrite the entire app as they go. then they run into an issue, ask for help, and don't understand when you say "there's a memory leak 50 lines down" or "you have to define that variable first".

AI is a great tool to help someone start an idea. When it goes past that, please don't ask us for help until you know what the code is doing that you just generated.

mikewarot|1 year ago

I'm on the other side of this when it comes to the C programming language. I've avoided it for decades, preferring Pascal, or even Visual Basic.

The single best thing to help someone in my place is clear and coherent documentation with working examples. It's how I learned to use Turbo Pascal so long ago, and generally the quickest way to get up to speed. It's also my biggest gripe with Free Pascal... their old help processing infrastructure binds them to horrible automatically generated naming of parameters as documentation, and nothing more.

Fortunately, CoPilot doesn't get impatient, and I know this, so I can just keep pounding away at things until I get something close to what I want, in spite of myself. ;-)

cratermoon|1 year ago

There's already a big market for taking out AI garbage, and I expect it to grow as the AI bubble bursts. The best thing a consultant can do today is learn the common issues and failure modes of AI-generated code.

Providing an API service to customers means you will get terrible client code. Sometimes it's dumb stuff like not respecting rate limits and responding to errors by just trying again. One option, if you don't want to fix it yourself, is to partner with a consultant who will take it on, send your customers to them. Bill (or set your API access rates) appropriately.

Sometimes you have to fire customers. Really bad ones that cost more than they bring in are prime candidates for firing, or for introducing to your "Enterprise Support Tier".

mediumsmart|1 year ago

I always make GPT fix the code it created. How else would I learn?

protocolture|1 year ago

Yeah, so every tech revolution does this, right?

ATMs were meant to kill banking jobs, but there are more jobs in banking than ever.

The Cloud was meant to automate away tech people, but all it did was create new tech jobs. A lot of which is cleaning up after idiots who think they can outsource everything to the cloud and get burned.

LLMs are no different. The "Ideas Man" can now get from 0 to 30% without approaching a person with experience. Cleaning up after him is going to be lucrative. There are already stories about businesses rehiring graphic designers they fired, because someone needs to finish and fine tune the results of the LLM.

busterarm|1 year ago

> ATMs were meant to kill banking jobs, but there are more jobs in banking than ever.

ATMs only handle basic teller functions, and since COVID I had to change banks twice in NYC because I couldn't find actual tellers or banks with reasonable open hours. BoA had these virtual-teller-only branches, and the video systems were always broken (and they were the only option on Saturday). This was in Midtown Manhattan, and my only option was basically a single branch on the other side of town, 8-4 M-F.

I'm now happily with a credit union, and since moving to the South things are generally better, because customers there won't tolerate not being able to deal with an actual person.

dylan604|1 year ago

I seriously hope those rehires are coming back with a refined rate as well.

creesch|1 year ago

> ATMs were meant to kill banking jobs, but there are more jobs in banking than ever.

US banks, which are surprisingly behind the times as far as automation goes. Here, banks used a lot of automation to reduce the number of manual jobs needed, to the degree that many offices are now closing since everything can be done online.

And no, there is no need to visit banks here, as I get the impression there is in the US. We don't even have physical checks anymore.

reportgunner|1 year ago

Sadly everyone thinks they are The "Ideas Man".

afro88|1 year ago

> Often this takes the form of trying to access an endpoint that does not exist, or read a property off the API response that does not exist. After probing a bit more, my suspicions are usually confirmed — ChatGPT hallucinated that endpoint or property

This is an opportunity. Add another tier to your pricing structure that provides an AI assistant for coding the API connection. A really simple Llama 3.1 setup that RAGs your docs would do. Or perhaps your docs already fit in the context, if the API is as simple as it sounds.

andai|1 year ago

As the author says, the errors are very easy to fix. Easy enough for GPT! He should set up a support chatbot. Only seems fair? ;)

I'm half joking, but in most cases I found that GPT was able to fix its own code. So this would reduce support burden by like 90%.

I hate chatbot support as much as the next guy, but the alternative (hiring a programmer to work as a customer support agent on AI generated code all day?) sounds not only cruel, but just bad engineering.

dimal|1 year ago

I’ve found that it’s often able to fix its own code when I’m able to understand the problem and state it clearly. On its own, it tends to just go in circles and proudly declare that it’s solved the problem, even though it hasn’t. It needs a knowledgeable person to guide it.

tdignan|1 year ago

OP should set up an AI chatbot to triage his customer support. It probably wouldn't be that hard to send that code right back to GPT and get a fix suggestion to the customer instantly. Stick your documentation for each endpoint in a vector database, and use RAG to give them a quick fix. If that doesn't work, let them escalate to level-2 support for a fee.
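
For what it's worth, the retrieval half of that is almost trivial to prototype. A minimal sketch (the doc sections, endpoint names, and keyword-overlap scoring are all made up for illustration; a real setup would use embeddings and a vector database instead of word counting):

```python
from collections import Counter

# Hypothetical doc sections; a real setup would store embeddings in a vector DB.
DOC_SECTIONS = {
    "POST /v1/quotes": "Returns the latest quote for a symbol. Body: {symbol}. Auth: Bearer token.",
    "POST /v1/orders": "Places an order. Body: {symbol, side, quantity}. Auth: Bearer token.",
    "Rate limits": "60 requests per minute per API key. 429 responses include a Retry-After header.",
}

def tokenize(text: str) -> Counter:
    # Lowercase, strip trailing punctuation, count word occurrences.
    return Counter(word.strip(".,:{}?!").lower() for word in text.split())

def retrieve(question: str, k: int = 1) -> list:
    """Rank doc sections by crude keyword overlap with the support question."""
    q = tokenize(question)
    scored = sorted(
        DOC_SECTIONS,
        key=lambda name: sum((tokenize(name + " " + DOC_SECTIONS[name]) & q).values()),
        reverse=True,
    )
    return scored[:k]

# The top-ranked sections would be prepended to the LLM prompt along with the
# customer's broken code, so the model answers from real docs instead of guessing.
print(retrieve("Why am I getting 429 errors when I retry my requests?"))
```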

thomasahle|1 year ago

These kinds of support requests are also a big issue for open source projects.

Hanging out at the DSPy discord, we get a lot of people who need help with "their" code. The code often uses hallucinated API calls, but they insist it "should work" and our library simply is buggy.

nbzso|1 year ago

This is only the beginning. Imagine this when AI bot chains and "agents" conveniently replace junior devs at scale. Someone will "hack" together an API/LangChain/insert-LLM-framework solution.

The next decade of support business is here. Fix my "hallucination" market. Thank you, Microsoft. You did it again.

yawaramin|1 year ago

The customer support issue with selling SaaS is not just about AI, I think. It's widespread across the industry. There are customers who have almost no idea how their software works, and expect you to guide them through their own specific setups. One issue we regularly have is that people complain that our TLS certificate keeps changing and want us to notify them in advance of any change. We try to tell them that two-to-three-month TLS certificate lifetimes are a best practice and that they should ensure their trust store knows about standard root certificates. Then it turns out their software only supports one root cert at a time and they want us to jump on a call and guide them through changing it in their software.

danielmarkbruce|1 year ago

If you are building an API and have decent docs, it's a totally OK trade-off to say "I'll lose some customers this way, but I'm not providing support". And just be upfront about it. Some stores have a no-return policy with no exceptions. They lose some customers; it's OK.

OJFord|1 year ago

I think this is the point to establish a 'community', and in particular to only 'offer' community support (or charge more for spending your own time on it).

Some people will enthusiastically fix other customers' generated crap for free, let them.

julianeon|1 year ago

I'm wondering if he could just adjust the API to give an error message that an LLM (or a person) could understand and then correct. It might be worth putting some effort into beefing that up, to cut down the support requests.
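
Something like this could work with very little effort. A sketch (the endpoint names and docs URL are hypothetical) of a 404 body that an LLM, or a human, can self-correct from:

```python
import difflib
import json

# Hypothetical endpoint list; substitute the API's real routes.
KNOWN_ENDPOINTS = ["/v1/quotes", "/v1/orders", "/v1/positions"]

def not_found_body(requested_path: str) -> str:
    """Build a 404 body that both humans and LLMs can act on directly."""
    suggestions = difflib.get_close_matches(requested_path, KNOWN_ENDPOINTS, n=1, cutoff=0.5)
    error = {
        "error": f"Unknown endpoint: {requested_path}. This endpoint does not exist.",
        "hint": f"Did you mean {suggestions[0]}?" if suggestions else None,
        "available_endpoints": KNOWN_ENDPOINTS,
        "docs": "https://example.com/docs",  # placeholder URL
    }
    return json.dumps(error)

# A commonly hallucinated singular form of a real plural endpoint.
print(not_found_body("/v1/quote"))
```

Paste that response back into ChatGPT and it will usually correct itself without a support ticket.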

markatkinson|1 year ago

I'm tired of fixing my own AI generated code.

matrix_overload|1 year ago

Well, if you are annoyed by a particular maintenance task related to your business, find a way to automate it!

In this case, you could create examples for your API in common programming languages, publish them on the product website, and even implement an automatic test that would verify that your last commit didn't break them. So, most of the non-programmer inquiries can be answered with a simple link to the examples page for the language they want.

As a bonus point, you will get some organic search traffic when people search for <what your API is doing> in <language name>.

mrbombastic|1 year ago

Funnily enough, an LLM is pretty good at categorizing and prioritizing support requests.

imrejonk|1 year ago

> The worst is when a request starts out simple — I help them fix one hallucination — but then that customer wants to build more complex logic, and somehow I’ve set the expectation that I will provide unlimited free support forever. I’ve gotten a number of angry messages from customers who essentially want me to build their whole app for free.

Annoying, but a good way to weed out bad customers. Life’s too short to deal with people who feel this entitled.

jrochkind1|1 year ago

I am no particular fan of AI (and actually haven't ever used Copilot or similar myself), but the real problem here is not AI but:

> I’ve gotten a number of angry messages from customers who essentially want me to build their whole app for free.

I guess AI maybe encourages more people to think they can build something without knowing what they are doing. I can believe that for sure. There was plenty of that before AI too. Must be even worse now.

j-a-a-p|1 year ago

> My API is just a few well documented endpoints. If you can figure out how to send a POST request using any programming language, you should have no problem using it. But that seems to be too high a bar for the new generation of prompt-engineer coders.

Nice. The time has come that we need to design APIs with the LLM in mind. Or, to rephrase: regularly test that your API works with the popular LLM tools.

darepublic|1 year ago

I often think about code when I'm driving. I've thought about a way to brainstorm code ideas out loud and have the llm generate a code file with my idea. I'm talking about a fairly granular level, where I'm specifying individual if blocks and the like. Then when I have my hands available I can correct the code and run my idea

zzz95|1 year ago

Every problem hides opportunities! I can see a future where documentation will be replaced by a plugin to an AI code service. Instead of providing users with documentation on how to use the package, devs will be training an LLM on how to assist the user in generating the interface code. An elaborate ChatGPT prompt, for instance.

jowdones|1 year ago

Retail "traders" are the textbook definition of mentally challenged obnoxiousnes. Go meet them on forums like EliteTrader.com and you will soon realize who you are dealing with.

It's your fault really. You don't build custom software for guys having the intellectual capacity and budget of a tractor driver unless you enjoy pain.

michaelcampbell|1 year ago

This whole thing reminds me of the dotcom boom where people who learned a week or 2's worth of HTML were suddenly "programmers", and were being hired because they could put on their own pants more often than not.

paxys|1 year ago

> The worst is when a request starts out simple — I help them fix one hallucination — but then that customer wants to build more complex logic, and somehow I’ve set the expectation that I will provide unlimited free support forever. I’ve gotten a number of angry messages from customers who essentially want me to build their whole app for free.

Welcome to the life of any consultant/support agent/open source maintainer ever. AI isn't the problem here, managing expectations is.

shadowgovt|1 year ago

I wonder if there's any consistent pattern to the API hallucinations.
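
One cheap way to find out would be tallying the paths behind your 404s. A sketch with a made-up log format (real access logs would need real parsing):

```python
from collections import Counter

# Hypothetical access-log lines: "METHOD PATH STATUS".
log_lines = [
    "POST /v1/quote 404",
    "GET /v1/price 404",
    "POST /v1/quote 404",
    "POST /v1/quotes 200",
]

# Tally the paths behind 404s; recurring entries are candidate hallucinations
# worth aliasing, documenting, or naming explicitly in the error body.
misses = Counter(
    line.split()[1] for line in log_lines if line.split()[2] == "404"
)
print(misses.most_common(2))
```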

pmarreck|1 year ago

The game might change (a little) when it can actually run the code it generates, examine any errors, and potentially fix them based on that.

And when you use a language that has better checks all around.

orbit7|1 year ago

As a basic offering, have good documentation, a community support forum, and the option to report bugs.

Make the type of support you are providing paid, this could be in tiers.

Outsource support as needed.

xg15|1 year ago

We have come full circle: Silicon Valley is finally disrupting the software engineering industry...

wh-uws|1 year ago

I will take all of these customers. Please forward them to me.

atomic128|1 year ago

For real: profitable consulting businesses have been formed to help LLM programmers. BUGFIX 66, for example, and various others. They can charge substantial money to help customers cross that "last mile" and get their LLM-generated pile of code working.

cbg0|1 year ago

These "customers" were receiving free help. If they wanted to spend money on quality software they wouldn't have used an LLM.

cynicalpeace|1 year ago

AI is going to be like any other tool. If you don't know how to use it, you may end up hurting yourself.

If you know how to use it, it will make you 100x more efficient.

treprinum|1 year ago

Set firm boundaries and reject/ghost customers that want to build their app for free and are "angry". Those never lead to anything profitable.

sproosemoose|1 year ago

The crypto product in question is for a very young age group.

https://pump.fun/board

FractalHQ|1 year ago

This website is so cursed… on mobile it’s constantly layout shifting up and down by an entire screen height every ~0.3 seconds. I’m not sure how to feel.

linsomniac|1 year ago

Seems like the opportunity here is for the author to sell an AI that is trained on the API and documentation.

netcan|1 year ago

This is totally understandable, valid, etc.

OTOH, see script kiddies, WYSIWYG Stack Overflow C&P, etc.

It's just the way things are now.

sira04|1 year ago

Could you make a model from your API/docs and let them plug that into their AI stuff? That would be funny.

fuzzfactor|1 year ago

>I'm tired of fixing customers' AI generated code

Well, that didn't take long . . .

aussieguy1234|1 year ago

And they thought AI would put SWEs out of work...

If there are more people able to prototype their ideas with AI, sooner or later they will need a real engineer to maintain/improve that code.

That will lead to more jobs and higher demand for SWEs.

nsonha|1 year ago

Is there a way we can make a support bot or a chat-based document that is fine-tuned and limits its answers to only what's in the API? Getting users to use it is another issue, but one problem at a time.

bkazez|1 year ago

Charge for engineering support and hire someone to do it for you!

tpoacher|1 year ago

Same, but for academic documents.

It used to be that a 'bad' document had things 'missing', which were easy to spot and rephrase / suggest improvements.

Now a 'bad' document is 50 pages of confusing waffle instead of 5, you need to get through the headache of reading all 50 to figure out what 'sentence' the person was trying to express at the place where an autogenerated section has appeared, figure out if the condensed document is something that actually had any merit to begin with, THEN identify what's missing, and then ask them to try again.

At which point you get a NEW 50-page autogenerated document and start the process again.

F this shit.

jameslk|1 year ago

> Often, though, the customer is envisioning a more complex application, and I just have to tell them, “Sorry, you’re going to have to hire a professional developer for this.”

Welcome to enterprise sales, where building custom solutions atop of your sprawling BaaS empire is exactly what you do.

Your customers have a need and you’re basically turning down an upsell opportunity that also buys you lock in to your platform.

Another way to look at it: are they all trying to build something similar or something with similar components? Maybe it’s time to expand the features of your product to reduce the amount of work they need to do.

sholladay|1 year ago

So have an AI do it for you!

ramesh31|1 year ago

Welcome to the hell that is being a senior dev nowadays. Every junior with a Copilot license all of a sudden fancies themselves an expert programmer. It's absolutely maddening.

snakeyjake|1 year ago

AI, Rust, crypto, Medium, SaaS startups

It's a fuckin perfect hacker news bingo.

Magnificent!

lqcfcjx|1 year ago

now you probably should build an AI to fix AI generated code based on your documentation.

OutOfHere|1 year ago

[deleted]

axelthegerman|1 year ago

As long as AI is just hallucinating sh*t I don't think anyone needs to be particularly worried :)

It's got a long way to go to even stack up to bad programmers that somehow are always able to find a job somewhere.

core_dumped|1 year ago

Language models aren’t AI no matter how hard you squint. When we get actual AI (not counting on it anytime soon), it will be more than the annoyed developer that loses their job.

protocolture|1 year ago

What? OP even says he uses Copilot.

suprt411|1 year ago

Do you want to outsource this L1, L2, or even L3+ support? Let's talk.

djaouen|1 year ago

I'm not! Email in bio.

tonyoconnell|1 year ago

Why don't you use AI to provide support? I'm serious actually. This sounds like something AI can do really well.

ilaksh|1 year ago

For the problems given in the article, it will 100% work. It's very easy for Claude 3.5 or GPT-4o to look at documentation for a couple of API endpoints, compare it to submitted code, and point out invalid endpoints and properties. It can also provide correct code if the customer is asking for something possible.

It won't be flawless but if the issues are as basic as stated in this article, then it seems like a good option.

It will cost money to use the good models though. So I think the first step would be to make sure if they ask for an invalid endpoint it says that in so many words in the API response, and if they ask for an invalid property it states that also.

Then, if that doesn't give them a clue, an LLM scans incoming support requests and automatically answers them, or flags them for review. To prevent abuse, it might be a person doing it, but just pressing a button to send an auto-answer.

But maybe a relatively cheap bot in a Discord channel or something connected to llama 3.1 70b would be good enough to start, and people have to provide a ticket number from that bot to get an answer from a person.

hsbauauvhabzb|1 year ago

Spoken like someone who's never been on the receiving end of any such ‘support’.

stitched2gethr|1 year ago

I'm a bit torn. My first thought was "If the current state-of-the-art LLMs made the mistakes, it's unlikely an LLM would be able to correct them." But I'm not sure that's true if the support LLM (chatbot) is given very specific instructions so as to limit the possible answers. Still, I think that's gonna break down pretty quickly for other reasons.

Maybe the chatbot can recognize a botched request and even suggest a fix, but what then? It certainly won't be able to convert the user's next request into a working application of even moderate complexity. And do you really want a chatbot to be the first interaction your customers have?

I think this is why we haven't seen these things take off outside of very large organizations who are looking to save money in exchange for making customers ask for a human when they need one.

w_for_wumbo|1 year ago

I'd be tempted to just roll with the AI generated endpoints/hallucinations. If it's presenting me statistically probable names that make sense after absorbing the world's knowledge, I'm tempted to lean into that, instead of insisting that I have it right. Correct names don't succeed as often as useful names do.
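
A sketch of what "leaning into it" could look like: an alias table for names the models tend to invent, with a conservative fuzzy fallback (all endpoint names here are hypothetical):

```python
import difflib
from typing import Optional

# Real routes, plus aliases for names LLMs tend to invent (both hypothetical).
ENDPOINTS = {"/v1/quotes", "/v1/orders"}
ALIASES = {"/v1/quote": "/v1/quotes", "/v1/getQuote": "/v1/quotes", "/v1/order": "/v1/orders"}

def resolve(path: str) -> Optional[str]:
    """Map a requested path to a real endpoint, tolerating common hallucinations."""
    if path in ENDPOINTS:
        return path
    if path in ALIASES:
        return ALIASES[path]
    # Last resort: fuzzy-match, but only with a high cutoff to avoid surprises.
    close = difflib.get_close_matches(path, ENDPOINTS, n=1, cutoff=0.8)
    return close[0] if close else None

print(resolve("/v1/getQuote"))
```

The downside is you're now committed to supporting whatever the models invent next, so it's probably best paired with logging which aliases actually get hit.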

pocketarc|1 year ago

That's a great idea, but I think the main problem is that the generated endpoints/properties will be affected quite heavily by whatever the prompt/context was.

AI isn't necessarily saying "this is the one endpoint that will always be generated". Unless it is: if the customer-generated code always uses the same endpoints/properties, then it'd definitely make sense to also support those.