> "I'm trying to make a software version of Stephen's voice so that we don't have to rely on these old hardware cards," says Wood.
So back in 2010 we had someone help us extract the program ROM code from the SNES DSP-n coprocessors (used in games like Pilotwings and Mario Kart). It turns out these chips were NEC uPD7725 DSPs. There was basically only one very terse document on how the chip worked, and no emulators for it, so I had to write one. I had a bit of help from Cydrak in fixing the overflow flag calculations.
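(For the curious: the fiddly part of flag emulation is the overflow math. The fragment below is only a generic sketch of signed-overflow detection on a 16-bit add, written from scratch as an illustration; the names are mine and the real uPD7725's flag set is richer, with separate OV0/OV1 bits as I recall, so don't read this as the chip's exact semantics.)

    #include <cstdint>
    #include <cstdio>

    // Generic 16-bit add with flag computation, the kind of detail a DSP
    // emulator has to get exactly right. Illustrative only; this is the
    // textbook two's-complement rule, not the uPD7725's actual flag logic.
    struct Flags {
      bool z;   // result is zero
      bool c;   // carry out of bit 15
      bool s;   // sign bit of the result
      bool ov;  // signed overflow
    };

    static Flags add16(uint16_t a, uint16_t b, uint16_t& result) {
      uint32_t wide = uint32_t(a) + uint32_t(b);
      result = uint16_t(wide);
      Flags f;
      f.z  = (result == 0);
      f.c  = (wide & 0x10000) != 0;
      f.s  = (result & 0x8000) != 0;
      // Overflow: both operands share a sign and the result's sign differs.
      f.ov = (~(a ^ b) & (a ^ result) & 0x8000) != 0;
      return f;
    }

    int main() {
      uint16_t r;
      Flags f = add16(0x7fff, 0x0001, r);  // 32767 + 1 wraps to -32768
      std::printf("r=%04x z=%d c=%d s=%d ov=%d\n", r, f.z, f.c, f.s, f.ov);
      return 0;
    }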
A while later, around 2011, I spoke briefly through a liaison with Sam Blackburn (who was then Stephen Hawking's assistant). They were looking for permission to use my uPD7725 emulation code (which I said yes to, obviously). Apparently the Speech Plus text synthesizer uses NEC uPD7720s. This is basically the same chip and ISA, but with less ROM/RAM. It's a neat little fact, but not too surprising: these DSPs are really versatile, and different programs can make them do very different things.
Reading this article, though, it sounds like the effort has so far been unsuccessful :(
(It's also important to note that the uPD7720 is probably an infinitesimal part of the overall system, so I suppose they ran into additional problems.)
TL;DR Hawking only has a single reliable, low-latency binary signal (facial muscle movements), so his interface has been a constantly-moving cursor that he can "click" when it's over the next symbol/command he wishes to select. The innovations here are in the interpretation of those selections: he now has autosuggest for text (designed specifically for him based on the corpus of his works) and shortcuts for filesystem management.
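To make that concrete, here's a toy sketch of a single-switch scanning loop. It isn't based on the actual ACAT code (which isn't out yet); the item list and the controls are placeholders. In the real thing the cursor advances on a timer and the "switch" is the cheek sensor; here an empty line plays the timer and any other input plays the switch.

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy single-switch scanning interface: a highlight cycles through a
    // fixed set of items, and one binary signal commits whatever is
    // highlighted. Placeholder items; not the real system's layout or timing.
    int main() {
      const std::vector<std::string> items =
          {"yes", "no", "A", "B", "C", "space", "undo"};
      std::string composed;
      std::size_t index = 0;
      std::string line;

      std::cout << "Empty line = let the cursor move on, anything else = 'click'.\n";
      while (true) {
        std::cout << "[" << items[index] << "] " << std::flush;
        if (!std::getline(std::cin, line)) break;  // EOF ends the session
        if (!line.empty()) {                       // the single binary signal fired
          std::cout << "selected: " << items[index] << "\n";
          composed += items[index] + " ";
        }
        index = (index + 1) % items.size();        // the cursor keeps cycling regardless
      }
      std::cout << "\ncomposed: " << composed << "\n";
      return 0;
    }

The autosuggest part then amounts to reordering or extending that item list with likely next words (in his case drawn from his own writing), so the selections he's most likely to want cost the fewest scan cycles.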
I'm looking forward to the source code being released, or to a paper being written. Just looking at the data-entry video, for instance, there are interesting parallels between the timing specifications for Hawking's Yes/No dialog and GUI dialog design for non-disabled users - in both cases, if there isn't enough spacing between buttons, or orderings are unpredictable, it's much easier for someone to mis-click!
I think it's not only the interactive elements, but also the way information is displayed. I actually think he (and others) might benefit from a really responsive desktop environment, especially compared to the Windows floating window manager.
The videos you linked to show this very well in my opinion:
- In the longer one showing Hawking and the system, he's using it to type and to read Wikipedia, while quite a lot of screen real estate is wasted on title bars (probably unusable for him), partially hidden desktop icons in the background, and the browser sitting partially behind his input software.
- The "data entry" video shows Notepad being opened and being partially hidden by the input UI.
That does not seem useful. I would rather see apps that automatically fit themselves to the available space, with predefined layouts for multiple apps or a dynamic tiling approach.
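For what it's worth, the core of a tiling layout is tiny. The sketch below just computes non-overlapping rectangles for one master window plus a stack, with a made-up screen size and split ratio; it's only meant to illustrate "layout computed from available space" as opposed to free-floating, overlapping windows.

    #include <cstdio>
    #include <vector>

    // Toy master/stack tiling: the first window gets the left half of the
    // screen, the rest split the right half evenly, so nothing overlaps the
    // input UI. Screen size and the 50% split are arbitrary illustrations.
    struct Rect { int x, y, w, h; };

    static std::vector<Rect> tile(int screen_w, int screen_h, int windows) {
      std::vector<Rect> out;
      if (windows <= 0) return out;
      if (windows == 1) { out.push_back({0, 0, screen_w, screen_h}); return out; }

      const int master_w = screen_w / 2;
      out.push_back({0, 0, master_w, screen_h});           // master window
      const int stack_h = screen_h / (windows - 1);
      for (int i = 0; i < windows - 1; ++i)                // stacked windows
        out.push_back({master_w, i * stack_h, screen_w - master_w, stack_h});
      return out;
    }

    int main() {
      for (const Rect& r : tile(1920, 1080, 3))
        std::printf("x=%d y=%d w=%d h=%d\n", r.x, r.y, r.w, r.h);
      return 0;
    }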
Is it open source or based on open source? The article seems to switch between the two at random. If they've made the entire system open source, that would be incredible. I think something like this, which can improve millions of lives, is the perfect project for an open source community to work on. Sick people shouldn't have to pay for this, and I'm sure lots of people would be very happy to dedicate time to improving it. It's also good to know that if Intel decides to stop work on it, the users who rely on it so heavily aren't screwed - the software can continue to be improved and will always be available.
> Professor Hawking has been using his new software for several months while
> Lama and her team have been debugging and fine-tuning it. It’s almost
> finished, and when it is, Intel plans to make the system available to the
> open source community.
What software did Ebert use? I remember him talking about paying English researchers to synthesize his own voice, but I don't think anything materialized.
Arun Mehta wrote an essay called "When a Button Is All That Connects You to the World" for the book Beautiful Code (2007) about speech software designed for Stephen Hawking. I don't think the system described in Mehta's article is this one, though.
Do you have any proof to back these extraordinary claims?
If you have the faintest idea how "microwave technology" can mechanistically and physiologically read minds, you should explain your understanding.
My undergrad background in physical chemistry, biology, etc. makes me question this. Microwave spectroscopy is rotational spectroscopy, which works on polar molecules in the gas phase. I assume you mean to say they're reading water? So... concentrations? Blood flow? That seems like an incredibly low signal, and dangerous.
If this is a joke (which seems most plausible), it isn't a great manner of discourse for HN. It confuses, wastes mental cycles, and decreases the signal-to-noise ratio. Please think of everyone who has to parse this stuff.
Direct link to the screen-capture video: https://www.youtube.com/watch?v=mPU6mnM2i-k
http://www.rogerebert.com/rogers-journal/finding-my-own-voic... https://www.cereproc.com/
https://www.youtube.com/watch?v=_0KUw3xr7cA
And it wouldn't be possible for Hawking. Ebert's system was put together using hundreds of hours of recordings from Siskel & Ebert at the Movies.