I would love to have a list of tasks for each of my patients on my phone. This would make being an intern far easier. In fact, I'm currently validating and building a solution for this exact problem, which I will talk about below, but first it's important to understand what an intern (at least here in Australia) does during the actual ward round where most of their tasks for the rest of the day are created.
An average intern during a ward round has to do the following things for each patient:
1) Handwrite notes into the patient's bedside notes as the senior doctor takes a history / examines the patient.
2) Look at the patient's vitals chart and medications chart.
3) Handwrite a plan in the patient's notes at the end (this is essentially a list of tasks for the intern to do during the day).
4) Often, while the intern is still writing the plan into the patient's notes, the rest of the team is already moving on to the next patient. The intern will hurriedly re-write any tasks from the plan onto their personal printed patient list (takes < 5 seconds) and then quickly go get the notes for the next patient and begin this process again. Also note that the patients are often scattered across multiple wards in the hospital.
Now, Listrunner's demo video shows a list of tasks for each patient on an iPhone. Awesome!
But where in the ward round does my list of tasks get copied into Listrunner? If I have to manually find the patient in the app and then manually add the tasks, it would take minutes, not the <5 seconds it takes to rewrite the tasks on a personal list in super shorthand. And no senior doctor is going to wait a couple of minutes for you to type each patient's tasks into your phone (this would add 40 minutes to a 20-patient ward round).
I've been thinking about this a lot, and I think a solution using Google Glass would be super amazing here. I'm currently in the prototyping and validation stage of the project (following Eric Ries' 'build-measure-learn' loop). Happy to talk to any doctors interested in it.
It works as follows:
1) After you finish writing the patient's plan, you take a photo of it with Google Glass.
2) OCR is performed on the photo, right then and there (hopefully in <= 1 sec), and the result is shown to the Google Glass wearer, who can confirm that the OCR is correct.*
3) Those tasks are then synced to the doctor's phone, or, for security reasons, perhaps to a hospital-owned phone or tablet.
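To make step 2 concrete, here's a minimal sketch of the confirmation stage: turning the raw text an OCR engine returns into a clean task list the wearer can approve. The OCR call itself (and the Glass UI) is out of scope; the `parse_plan` function, the prefix handling, and the sample plan text are all my own illustrative assumptions, not part of any existing product.

```python
# Hypothetical sketch: parse OCR output of a handwritten plan into a task list.
# Assumes an earlier step has already produced raw text from the photo
# (e.g. via a cloud OCR API) and handed it to us as a string.

def parse_plan(ocr_text: str) -> list[str]:
    """Split the OCR'd plan into individual tasks, one per non-empty line."""
    tasks = []
    for line in ocr_text.splitlines():
        # Strip common plan-list prefixes: dashes, bullets, "1." / "3)" etc.
        cleaned = line.strip().lstrip("-*•0123456789.) ").strip()
        # Skip blank lines and the "Plan:" heading itself.
        if not cleaned or cleaned.lower().rstrip(":") == "plan":
            continue
        tasks.append(cleaned)
    return tasks

# Example OCR output for one patient's plan (invented for illustration):
raw = """Plan:
- chase FBC + UEC
- chest x-ray
3) referral to cardiology"""

print(parse_plan(raw))
# → ['chase FBC + UEC', 'chest x-ray', 'referral to cardiology']
```

Each parsed task would then be shown on the Glass display for a one-tap confirm before syncing.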
The advantage of this system is that it doesn't change the current workflow at all, and it doesn't affect the speed of the ward round. It therefore faces lower resistance to adoption.
Disadvantage: doctors are notorious for bad handwriting, so it will not work for all of them. It's also expensive, though as Google Glass (and perhaps similar tech) gets cheaper, this may become less significant.
*Patient labels are already affixed to the top of the page, so OCR can be performed on the label to associate the tasks with the patient. But if the solution became widely used, a simple QR code could be added to patient labels to make this easier.
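As a sketch of the QR-code idea: once a library such as zbar has decoded the code on the label, associating the tasks with the right patient reduces to parsing the payload. The pipe-delimited `URN|name|DOB` format below, the field names, and the sample data are all hypothetical assumptions for illustration; a real deployment would use whatever identifier scheme the hospital already prints on its labels.

```python
# Hypothetical sketch: map a decoded QR payload to a patient record so that
# OCR'd tasks can be filed under the correct patient. The QR decode step
# itself (camera + a library like zbar) is assumed and out of scope.

def parse_label(qr_payload: str) -> dict:
    """Parse an assumed 'URN|surname, firstname|DOB' payload into fields."""
    urn, name, dob = qr_payload.split("|")
    return {"urn": urn, "name": name, "dob": dob}

# Invented example payload:
patient = parse_label("MRN123456|DOE, Jane|1948-03-02")
tasks_by_patient = {patient["urn"]: ["chase FBC + UEC", "chest x-ray"]}
```

Keying the synced task list by URN like this would let the same plan photo carry both the "who" (from the label) and the "what" (from the handwritten plan).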