irq-1 | 6 months ago
Interesting point and one I haven't seen before. Almost like arguing that AI will work best with things it can learn quickly, rather than things that have lots of examples.
cardanome|6 months ago
Garbage in, garbage out. If you confuse it with a lot of junior-level code and a language that constantly changes best practices, the output might not be great.
On the other hand, if you have a language that was carefully designed from the start and avoids making breaking changes, with great first-party documentation and a unified code style everyone adheres to, the LLM will have an easier time.
The latter also happens to be better for humans. Honestly, the best bet is to make a good language for humans. Generative AI is still evolving rapidly, so there's no point in designing the language around its current weaknesses.
mikepurvis|6 months ago
Like instead of "the entire internet", here's a few hundred best-practice projects, some known up-to-date documentation/tutorials, and a whitelist of 3rd party modules that you're allowed to consider using.
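The curated-corpus idea above can be sketched in a few lines. This is purely illustrative: the project records, the `best_practice` flag, and the module whitelist are all hypothetical assumptions, not a real training pipeline.

```python
# Hypothetical sketch: filter a candidate corpus down to vetted projects
# whose third-party dependencies all appear on a whitelist.

ALLOWED_MODULES = {"requests", "numpy", "pytest"}  # illustrative whitelist


def accept_project(project: dict) -> bool:
    """Keep a project only if it is flagged best-practice and uses
    nothing outside the whitelist of third-party modules."""
    return project["best_practice"] and set(project["deps"]) <= ALLOWED_MODULES


corpus = [
    {"name": "proj-a", "best_practice": True, "deps": ["numpy"]},
    {"name": "proj-b", "best_practice": True, "deps": ["leftpad"]},
    {"name": "proj-c", "best_practice": False, "deps": ["requests"]},
]

curated = [p["name"] for p in corpus if accept_project(p)]
print(curated)  # ['proj-a']
```

The point is the selection criterion, not the mechanics: a small, vetted training set trades volume for signal.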
hinkley|6 months ago
Any language that is difficult for an AI to understand will have to get popular by needing far less boilerplate code for AIs to write in the first place. We may finally start designing better APIs. Or lean into it and make much worse ones that necessitate AI. Expect an AI company, especially, to give away the razor and sell you the blades.
pahbloo|6 months ago
I think the best programming languages of the future will come with their own LLMs, synthetically trained before release.
btschaegg|6 months ago
Well, one glaring issue with the assumption that LLM output quality depends mostly on a large volume of examples online is Sturgeon's law: ninety percent of everything is crap, so sheer volume guarantees nothing about quality.