item 6222371

ramanujan | 12 years ago

Very interesting and highly creative. A few thoughts.

1) If a graphical plot turns data into something visual, an audio "plot" turns data into something audible. Your output is an audio file rather than an image or video file. The typical application of this is to turn a boolean flag into a chime (e.g. text message received). Your important insight is that this can be extended to longer-form audio outputs.
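The event-to-sound mapping described above can be sketched in a few lines. This is only an illustration of the idea; the event names and sound names are invented, not drawn from any real sonification library:

```python
# Map data events to named sounds, the way a visual plot
# maps data points to marks. All names here are hypothetical.
EVENT_SOUNDS = {
    "message_received": "chime",
    "deploy_finished": "bell",
    "error": "klaxon",
}

def sonify(events):
    # Turn a stream of events into the sequence of sounds to play;
    # unknown events fall back to a neutral "click".
    return [EVENT_SOUNDS.get(e, "click") for e in events]

print(sonify(["message_received", "error", "unknown"]))
# → ['chime', 'klaxon', 'click']
```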

2) When is audio more advantageous than image or video?

  - When you cannot look at a screen (driving, working out)
  - When there are too many screens (control room)
  - In a very dark environment where visibility is impeded
  - If you are blind or vision-impaired

This could find real application in cockpits/control rooms, to ensure that a pilot is perceiving data even if they aren't looking at a particular dial. It could also be useful for various fitness and health apps that don't need you to look at the screen all the time.

Perhaps the most interesting application would be in a car, which is where people spend a great deal of time and have their ears and brains (but not their eyes) free. Some ideas:

a) Could you generate different sounds based on the importance of a text message (using something like Gmail's importance filtering), signaling that you don't really need to respond to this particular message right now while driving?

b) Could you have audio feedback for important things along the road? For example, the problem with the Trapster app (trapster.com) is that I need to look at the phone to see where the speed traps are. You can imagine an integrated audio feed that could give information like this and also tell you your constantly updated ETA (via a Google Maps API call). Or you could listen to the pulse of your company on the road to do something semi-useful, and drill down into notable events via voice.

c) The really interesting thing is if you could pair this with a set of defined voice control commands. As motivation: an audible plot can't be backtracked like a visual plot. With a visual plot your eyes can just scan back to the left; to scan back and re-hear the sound you just heard requires rewinding and replaying. But it could be interesting to set up a small set of voice commands that allow not just rewinding, but rewinding and zooming. So you hear an important "BEEP" and you want to say something like "STOP. ZOOM", with the heuristics set up such that this identifies the right BEEP and then gives an audio drill-down of exactly what that BEEP represented.
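One way such a "STOP. ZOOM" heuristic could work is to keep a short buffer of recently sonified events and, on command, drill into the most important one heard in the last few seconds. This is a minimal sketch under assumed semantics; none of the names come from Choir or any real voice-command API:

```python
import time
from collections import deque

class AudioEventBuffer:
    """Hypothetical buffer of recently played sounds, so a voice
    command like "STOP. ZOOM" can drill back into the beep just heard."""

    def __init__(self, maxlen=100):
        self.events = deque(maxlen=maxlen)

    def record(self, sound, detail, importance=0):
        # Store each played sound with a timestamp, the underlying
        # detail it represents, and an importance score.
        self.events.append({
            "time": time.time(),
            "sound": sound,
            "detail": detail,
            "importance": importance,
        })

    def zoom(self, window=10.0):
        # Heuristic: the most important event within the last `window`
        # seconds is probably the one the listener reacted to.
        now = time.time()
        recent = [e for e in self.events if now - e["time"] <= window]
        if not recent:
            return None
        return max(recent, key=lambda e: e["importance"])

buf = AudioEventBuffer()
buf.record("tick", "routine heartbeat", importance=1)
buf.record("BEEP", "server error rate spiked", importance=9)
print(buf.zoom()["detail"])  # → server error rate spiked
```

A real system would also need speech recognition in front of this and a way to render the drill-down back out as audio, but the buffer-plus-salience heuristic is the core of the "identify the right BEEP" step.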

d) Done right, you might be able to turn a subset of webservices into a sort of voice-controlled data radio for the road. People spend thousands of hours in their cars so it's a real opportunity.


creamyhorror|12 years ago

Cool ideas here. I imagine both Google and the military would be interested in building auditory feedback systems into their vehicles/control centers, if they aren't already doing it.

What I think would be a useful addition would be transforming 'levels' (as opposed to events) to ambient, continuously-playing audio. This is pretty much the "dynamic audio" of computer games.

For example, you could have strings playing according to CPU activity: softly and slowly (think double basses) when activity is low, but more loudly and urgently (cellos) when activity is high. That would create a sense of how busy the server is (if you enable the CPU activity 'channel').
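The level-to-ambience idea above amounts to a simple transfer function from a normalized metric to audio parameters. A minimal sketch, with invented parameter names and thresholds:

```python
def cpu_to_audio_params(load):
    """Map a normalized CPU load (0.0-1.0) to ambient-audio parameters,
    in the spirit of game-style dynamic audio. Illustrative only."""
    load = max(0.0, min(1.0, load))  # clamp out-of-range readings
    return {
        # Quiet, slow double basses when idle; urgent cellos when busy.
        "instrument": "double bass" if load < 0.5 else "cello",
        "volume": 0.2 + 0.8 * load,        # 0.2 (soft) .. 1.0 (loud)
        "tempo_bpm": 60 + int(80 * load),  # 60 bpm .. 140 bpm
    }

print(cpu_to_audio_params(0.1))
print(cpu_to_audio_params(0.9))
```

A continuous "channel" would just re-evaluate this mapping every few seconds and cross-fade the playing loop toward the new parameters, rather than triggering discrete events.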

edit: I see cortesi has already mentioned they're working on transforming continuous data now - good job.

cortesi|12 years ago

Have a look at my co-conspirator's blog post about Choir:

http://alexdong.com/choir-dot-io-explained/

We definitely see Choir fitting in where you can't look at or interact with a screen. Cars and wearable computing are areas we're excited about. First, though, we want to experiment on the desktop, find out what makes a good audio interface, and solve our own burning needs regarding more mundane monitoring situations.

e12e|12 years ago

Interesting project (not sure if I would really like it as a service, though -- I think I would personally prefer a library).

Either way, it looks like your signup form has some peculiar ideas about what constitutes an email address; it keeps asking me to input an email address when I type in:

  choir.io@s.hypertekst.net