I'm not sure that source code verification is such a problem. It feels like it's definitely easier to write code to solve a problem than to verify that code written by someone else is correct and fault-free.
All processes, and by extension all code, tolerate some level of error, even in our most reliable systems. Whether LLM-produced output falls within that tolerance is up to each practitioner to test and verify.
throwup238|1 year ago
I think AI has revealed that there is a lot of low-hanging fruit across many disciplines that is very tolerant of errors and isn't being addressed by our current supply of software engineers. In my own day to day that's a lot of low-impact bash scripts that automate personal things, while at work it's sales and lead gen, where it's not a big deal if a salesperson cold calls someone who couldn't use our product (other than the temporary embarrassment it causes both parties).