top | item 42475201

jonathan-adly | 1 year ago

Here is a nice use-case. Put this in a pharmacy - have people hit a button, and ask questions about over-the-counter medications.

Really - in any physical place where people are easily overwhelmed, having something like that would be really nice.

With some work - you can probably even run RAG on the questions and answer esoteric things like where the food court is in an airport or where the ATM is in a hotel.
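The RAG idea above is just retrieval over a small venue-specific corpus. Here is a minimal sketch of the retrieval step, using bag-of-words cosine similarity as a stand-in for real embeddings; the FAQ entries and function names are hypothetical, and a production system would embed with a model and feed the retrieved text to an LLM:

```python
import math
import re
from collections import Counter

# Hypothetical FAQ snippets a venue might index.
FAQ = [
    "The food court is on the mezzanine level, past security.",
    "The ATM is in the lobby, next to the front desk.",
]

def bow(text: str) -> Counter:
    """Bag-of-words counts; a real system would use embedding vectors."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, corpus: list[str]) -> str:
    """Return the corpus entry most similar to the question."""
    qv = bow(question)
    return max(corpus, key=lambda doc: cosine(qv, bow(doc)))

print(retrieve("Where is the ATM?", FAQ))
```

The retrieved snippet would then be stuffed into the LLM prompt as grounding context, so the model answers from the venue's own facts rather than from its training data.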


swatcoder|1 year ago

> Put this in a pharmacy - have people hit a button, and ask questions about over-the-counter medications.

Even if you trust OpenAI's models more than your trained, certified, and insured pharmacist -- the pharmacists, their regulators, and their insurers sure won't!

They've got a century of sunk costs to consider (and maybe even some valid concern over the answers a model might give on their behalf...)

Don't expect anything like that in a traditional, regulated medical setting any time soon.

dymk|1 year ago

The last few doctor’s appointments I’ve had, the clinician used a service to record and summarize the visit. It used some sort of speech-to-text and an LLM to do so. It’s already in medical settings.

pixelsort|1 year ago

Thanks for digging that out. Yes, that makes sense to me as someone who made a fully local speech-to-speech prototype with Electron, including VAD (voice activity detection) and AEC (acoustic echo cancellation). It was responsive but taxing. I had to use a mix of specialty models over onnx/wasm in the renderer and llama.cpp in the main process. One day, multimodal models will just do it all.
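A local speech-to-speech prototype like the one described typically chains VAD → ASR → LLM → TTS. Below is a minimal sketch of that staging, with an energy-based VAD (real prototypes use a model such as Silero VAD) and callables standing in for the actual ASR, LLM, and TTS models; all names here are illustrative, not the commenter's actual code:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    samples: List[float]  # one frame of PCM audio samples

def energy_vad(frame: Frame, threshold: float = 0.01) -> bool:
    """Crude energy-based VAD: flag a frame as speech if its mean energy
    exceeds a threshold. Model-based VADs are far more robust."""
    energy = sum(s * s for s in frame.samples) / len(frame.samples)
    return energy > threshold

def speech_to_speech(
    frames: List[Frame],
    asr: Callable[[List[Frame]], str],   # speech -> text
    llm: Callable[[str], str],           # text -> reply text
    tts: Callable[[str], List[float]],   # reply text -> audio samples
) -> List[float]:
    """Gate audio with VAD, then run ASR -> LLM -> TTS on voiced frames."""
    voiced = [f for f in frames if energy_vad(f)]
    if not voiced:
        return []  # nothing spoken, nothing to answer
    text = asr(voiced)
    reply = llm(text)
    return tts(reply)
```

In an Electron split like the one described, the VAD/AEC stages would run in the renderer (onnx/wasm, close to the audio capture) while the LLM call crosses into the main process (llama.cpp) over IPC.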