top | item 22737201


king07828 | 6 years ago

Has there been any work to use a neural network to generate/simulate ECoG signal output from an EEG signal input? (My Google-fu only gives definitions and distinctions for ECoG and EEG). Almost sounds similar in concept to deep learning super sampling (DLSS), i.e., taking a low resolution image/signal (EEG) and using a neural network to generate/simulate a high resolution image/signal (ECoG).


raverbashing | 6 years ago

The problem sounds similar, but it's completely different.

DLSS kinda works because we "know" what each thing in a photo is.

EEG to ECoG would be like trying to figure out a painting (which could be anything, by any painter) from a significant distance, behind frosted glass.

king07828 | 6 years ago

> DLSS kinda works because we "know" what each thing in a photo is.

I thought the reason DLSS works is that the same rendering algorithm generates both the low-resolution and the high-resolution image, and the neural network merely learns a filter between the two.

Take a patient with ECoG implant(s), put EEG sensors on the patient, and hit record. You now have the same rendering mechanism (the brain) generating both a low-resolution signal (EEG) and a high-resolution signal (ECoG).

However, back to DLSS, if the low resolution signal is a single pixel, then generating a 4k image from just that single pixel may not be very fruitful.

Still, it would be interesting to see an attempt at using a generative adversarial network (GAN) to generate an ECoG signal from an EEG signal. And if it doesn't work, then determine how much more EEG sensitivity would be needed before it could.
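The paired-recording idea above is easy to sketch as a toy. The following is a hypothetical simulation, not a real EEG/ECoG pipeline: a few simulated "EEG" channels are noisy mixtures of many simulated "ECoG" channels, and a plain least-squares fit plays the role of the learned filter (a GAN would swap in a nonlinear generator). The channel counts, mixing matrix, and noise level are all made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_ecog, n_eeg = 2000, 16, 4

# "Ground truth" high-resolution signal: 16 simulated ECoG channels.
ecog = rng.standard_normal((n_samples, n_ecog))

# Low-resolution observation: 4 EEG channels, each a fixed mixture of the
# ECoG channels (the skull and scalp acting as a blur) plus sensor noise.
mixing = rng.standard_normal((n_ecog, n_eeg)) / np.sqrt(n_ecog)
eeg = ecog @ mixing + 0.1 * rng.standard_normal((n_samples, n_eeg))

# Learn the "filter between the two" by least squares, i.e. a one-layer
# linear network trained on the paired recordings.
W, *_ = np.linalg.lstsq(eeg, ecog, rcond=None)
ecog_hat = eeg @ W

# The fit beats a trivial mean predictor, but only the ~4 signal
# dimensions that survive the mixing are recoverable -- the "single
# pixel to 4k" limitation from the comment above.
mse_model = np.mean((ecog - ecog_hat) ** 2)
mse_baseline = np.mean((ecog - ecog.mean(axis=0)) ** 2)
print(mse_model < mse_baseline)  # prints True
```

The residual error quantifies exactly the point about sensitivity: with only 4 mixed channels, most of the 16-channel signal is unrecoverable no matter how good the learned filter is.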