EarthMephit | 1 year ago
I think it's because the libraries you use are often niche or exist in a few similar versions; the LLM very commonly hallucinated solutions and would continually insist that library X did have that capability. In hardware projects you often hit a point where you can't do something, or you need to modify a library, but the LLM tries to be "helpful" and makes up a solution instead.
lostlogin|1 year ago
The number of projects I've done where my notes were the difference between hours of relearning The Way and instant success. Google doesn't work, because some niche issue is blocking the path.
ESP32, Arduino, Home Assistant, and various media server things.
chamomeal|1 year ago
And then you’ll paste in the error, and they’ll just say “ok I see the problem” and output the exact same broken code lol.
I’m guessing the problem is lack of training data. Most TS codebases are mostly just JS with a few types and zod schemas. All of the neat generic stuff happens in libraries or a few utilities
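For what it's worth, the "neat generic stuff" I mean looks something like this (a hypothetical sketch, not from any real codebase; `pluck` is my own example name):

```typescript
// A generic `pluck` utility: `keyof` and indexed access types keep it
// fully type-safe, but most application code never writes anything like
// this -- it's the kind of code that lives inside libraries.
function pluck<T, K extends keyof T>(items: T[], key: K): T[K][] {
  return items.map((item) => item[key]);
}

interface User {
  id: number;
  name: string;
}

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

// Inferred as string[]; passing a key that isn't on User is a compile error.
const names = pluck(users, "name");
```

App-level code, by contrast, mostly just annotates parameters and calls utilities like this, which is why there's so little training data showing the generics themselves being written.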
swatcoder|1 year ago
Public Arduino, RPi, Pico communities are basically peak cargo cult, with the blind leading the blind through things they don't understand. The noise is vastly louder than the signal.
There's a giant chasm between experienced or professional embedded developers, who mostly have no need to ever touch those things or visit their forums, and the confused hobbyists on those forums randomly slapping together code until something sorta works while trying to share their discoveries.
Presumably those communities and their internal knowledge will mature eventually, but it's taking a long, long time and it's still an absolute mess.
If you're genuinely interested in embedded development and IoT stuff, and are willing to put in the time to learn, put those platforms away and challenge yourself to at least learn how to directly work with production-track SoCs from Nordic or ESP or whatever. And buy some books or take some courses instead of relying on forums or LLMs. You'll find yourself rewarded for the effort.
bsder|1 year ago
It won't, because the RPis are all undocumented, closed-source toys.
It would be an interesting experiment to see which chips an LLM is better at helping out with: the RPi with its hallucinatory ecosystem, or something like the BeagleY-AI, which has thousands of pages of actual TI documentation for its chips.
It would be really nice if LLMs could cover for this and route around the way RPis keep getting used just because they were dumped under cost to bootstrap a network effect.
rcxdude|1 year ago
I'm not sure they will. There's a kind of evaporative cooling effect where once you get to a certain level of understanding you switch around your tools enough that there's not much point interacting with the community anymore.
jrmg|1 year ago
Eventually I read some actual documentation and realised it was just spouting very plausible-sounding nonsense - and confidently at that!
The same thing happened a year or so ago when I tried to get a much older ChatGPT to help me with USB protocol problems in some microcontroller code. It just hallucinated APIs and protocol features that didn't actually exist. I really expected more by now - but I now suspect it'll just never be good at niche tasks (and these two things are not particularly niche compared to some).
dagw|1 year ago
For the best of both worlds, make the LLM first 'read' the documentation, and then ask for help. It makes a huge difference in the quality and relevance of the answers you get.
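In practice that just means putting the real docs in front of the question. A minimal sketch of the idea (the function name `buildDocGroundedPrompt` and the doc snippet are my own stand-ins, not any particular tool's API):

```typescript
// Build a prompt that places the library's actual documentation before
// the question, so the model answers from the docs instead of guessing.
function buildDocGroundedPrompt(docs: string, question: string): string {
  return [
    "Answer using ONLY the documentation below.",
    'If the documentation does not cover something, say "not documented".',
    "",
    "--- DOCUMENTATION ---",
    docs,
    "--- END DOCUMENTATION ---",
    "",
    `Question: ${question}`,
  ].join("\n");
}

// Stand-in doc text; in practice you'd paste the relevant pages,
// header files, or README sections for the library you're stuck on.
const docs = "setRotation(r): rotates the display in 90-degree steps (r = 0..3).";
const prompt = buildDocGroundedPrompt(docs, "How do I rotate the display?");
```

The instruction to admit "not documented" is the important part: it gives the model an out other than inventing a plausible API.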