
cmaury | 10 years ago

I definitely agree with you that the last thing I would want readers to take away from this article is that they don't have to worry about accessibility or universal design. Until we have better tools, we should be providing the best possible experience with the tools we do have.

I also agree that we shouldn't be creating separate experiences for the blind. I think it's generally acknowledged that they end up being worse than a combined interface, never getting the resources or new features that experiences for sighted users get.

Where we seem to disagree is on the role that screen readers play in limiting the usability of technology. On the one hand they are amazing because they provide access to technology that would otherwise not exist. On the other hand, by virtue of the way they function (mapping a two-dimensional visual experience into a one-dimensional stream of audio), using a screen reader can only be so efficient.

This lack of usability puts access to technology beyond the reach of many who are less tech savvy than you or me, and given that the vast majority of people losing their vision in the US are the elderly, there are a lot of people who fall into that category. What's worse, the rate of vision loss is set to double as baby boomers age.

I totally agree that the medical model of accessibility sucks, but I think screen readers fall into that category. They seek to adapt an experience designed for others to the needs of the disabled. Conversational interfaces have the potential to create a consumer-quality experience that, by its very nature, is accessible (at least to the blind). And accessible by default is the best possible outcome.


ndarilek | 10 years ago

It's interesting to read that you conceptualize screen readers as rendering a 2-D environment as audio. I'm a very visual/spatial person, but I've always conceptualized them as rendering a tree of GUI widgets, rather than a visual environment. I guess it's the difference between thinking of my desk as a visual collection of objects and thinking of it as an object with an Arduino/RPI in the top drawer, papers and folders in the second, etc. Not saying either is wrong, just that maybe it's a matter of conceptualizing UIs as groups of collected and organized widgets, rather than as laid out on a map. I've come to enjoy developing with React because I can say "here's my workspace for a given task. It has a toolbar containing these related functions, these two loosely-related larger workspaces, etc." Then I let a visual designer come along after and make things look better. :)
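(To make the "tree of widgets" mental model concrete, here is a minimal sketch in TypeScript. The `Widget` type, labels, and `linearize` function are all illustrative inventions, not the API of any real screen reader or of React; the point is just that a depth-first walk of a widget tree yields the kind of one-dimensional announcement stream a screen-reader user navigates.)

```typescript
// A UI modeled as a tree of widgets, not a 2-D layout.
interface Widget {
  role: string;        // e.g. "region", "toolbar", "button"
  label: string;       // the accessible name that would be announced
  children?: Widget[];
}

// Depth-first traversal: flatten the tree into the linear sequence
// of announcements a screen reader might present.
function linearize(w: Widget, out: string[] = []): string[] {
  out.push(`${w.role}: ${w.label}`);
  for (const child of w.children ?? []) {
    linearize(child, out);
  }
  return out;
}

// A workspace like the one described above: a toolbar of related
// functions plus two loosely-related larger workspaces.
const workspace: Widget = {
  role: "region",
  label: "Task workspace",
  children: [
    {
      role: "toolbar",
      label: "Related functions",
      children: [
        { role: "button", label: "Save" },
        { role: "button", label: "Share" },
      ],
    },
    { role: "region", label: "Editor" },
    { role: "region", label: "Preview" },
  ],
};

console.log(linearize(workspace).join("\n"));
```

Notice that nothing in this model records where a widget sits on screen; reading order falls out of the tree's structure, which is why well-organized information architecture matters so much more to a screen-reader user than visual layout does.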

Anyhow, I look forward to reading more about your SDK. Where can I learn more? I'm building an app that could benefit from a conversational UI on top of the traditional one and would be interested in reading up on what you offer, particularly as it's meant for blind users too.

cmaury | 10 years ago

You can check our SDK out at developer.conversantlabs.com. It's currently in a developer preview. Send me an email at chris@conversantlabs.com. It would be great to talk more. If our conversation in this thread is any indication, I think we'll have a pretty good discussion :)

stevetrewick | 10 years ago

I conceptualise them as a non visual means of surfacing an n-dimensional information architecture. But I'm just weird like that.

One thing screen readers are super good at is exposing shitty IA design, which is regrettably common.

That said, it cuts both ways. There is a public transport app in the UK (Traveline GB) that as a low viz (legally blind) user I find incredibly frustrating to use, but my no viz pals absolutely love.

In this case it seems the IA is there but the visual interface to it is worse than what voiceover exposes.

Accessibility is hard.