(no title)
benmathes | 5 months ago
I'd argue the core value isn't just better search or a faster reader; it's providing a verified, reliable source of truth. That raises a key tension: you say the AI isn't yet at your co-founder's level of accuracy, but isn't that precisely the level of confidence required to replace an engineer's manual check? How do you close that gap? You've got the data, but trust is a different threshold.
I.e. maybe you've built a tool that makes the problem faster, but the real win would be a tool that makes it safer. The killer feature might not be more speed but a confidence score on every AI-generated fact, with a clear path to the source document so an engineer can verify it. It's not about avoiding the document entirely; it's about having a better starting point and knowing exactly what to double-check.
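A minimal sketch of what that could look like, as a hypothetical record the tool might attach to each extracted value (the field names, the `ExtractedFact` class, and the 0.95 threshold are all my own illustration, not anything the product actually exposes):

```python
from dataclasses import dataclass

@dataclass
class ExtractedFact:
    """One AI-extracted datasheet value, with provenance for manual verification."""
    parameter: str      # e.g. "V_DD (recommended operating)"
    value: str          # e.g. "3.3 V"
    confidence: float   # model's calibrated confidence, 0..1
    source_doc: str     # datasheet filename or URL
    source_page: int    # page to jump to when double-checking

    def needs_review(self, threshold: float = 0.95) -> bool:
        # Flag anything below the threshold for an engineer's double-check.
        return self.confidence < threshold

fact = ExtractedFact("V_DD (recommended)", "3.3 V", 0.88, "mcu_datasheet.pdf", 42)
print(fact.needs_review())  # True: 0.88 is below the 0.95 review threshold
```

The point of the `source_doc`/`source_page` pair is exactly the "clear path to the source" above: low-confidence facts link straight to the page an engineer needs to check.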
ctstoner | 5 months ago
To close the gap, we've built our own Q/A datasets and are training custom AIs to search and read datasheets (a skill new engineers have to learn early on). We're concentrating on teaching the AI to distinguish key information from noise in an electrical-engineering context (e.g. 'Voltage' in the Absolute Maximum Ratings section vs. the Recommended Operating Conditions section) and where information is likely to be found in a datasheet or app note.
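The Abs-Max vs. Recommended distinction above is a nice concrete case. A toy sketch of what one such Q/A training pair might look like (the structure, field names, and the `section_for` routing rule are entirely my own guess at the approach, not the actual dataset format):

```python
# Hypothetical Q/A examples teaching the model that "maximum rating"
# questions and "what should I design for" questions live in different
# sections of a datasheet, even though both mention "voltage".
qa_examples = [
    {
        "question": "What is the absolute maximum supply voltage?",
        "section": "Absolute Maximum Ratings",
        "answer": "6.0 V",
        "note": "stress rating only, not an operating condition",
    },
    {
        "question": "What supply voltage should the design use?",
        "section": "Recommended Operating Conditions",
        "answer": "2.7 V to 5.5 V",
    },
]

def section_for(question: str) -> str:
    # Toy routing rule standing in for the learned behavior:
    # "maximum"-style questions go to Absolute Maximum Ratings.
    if "maximum" in question.lower():
        return "Absolute Maximum Ratings"
    return "Recommended Operating Conditions"

print(section_for("What is the absolute maximum supply voltage?"))
```

In practice the routing would be learned from the dataset rather than keyword-matched; the snippet just illustrates the kind of signal/noise distinction being taught.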