bgalbraith
|
9 years ago
|
on: Brain-sensing technology allows typing at 12 words per minute
This is a valid point and a real challenge for brain-computer interface (BCI) technology. BCI research is largely aimed at helping those who suffer from locked-in syndrome and really do not have any reliable motor control at all, including eye movement. If you can reliably execute any kind of motor control, such as eye movement or a muscle twitch, that can be exploited for more effective, durable interface control than the current state of the art in BCI.
bgalbraith
|
9 years ago
|
on: Install GPU TensorFlow from Sources with Ubuntu 16.04 and Cuda 8.0 RC
As stated elsewhere, this can actually be a very frustrating process. I lost a good chunk of my long weekend trying to build TF from source for CUDA 8.0 / cuDNN 5.1. Generally speaking, the culprit is that the CUDA installers for Linux are highly dependent on your kernel and gcc versions. This is a huge headache for people who want to stay up to date on their distro packages. CentOS has no problem because hardly anything changes, but you're essentially handcuffed to whatever versions of Ubuntu or Fedora were out when NVIDIA decided to start packaging up the next release. Bumping gcc to 5.4 in Ubuntu 16.04.1 broke the 16.04 installer, which relied on gcc 5.3.
bgalbraith
|
9 years ago
|
on: Deep Reinforcement Learning: Pong from Pixels
In my opinion, Reinforcement Learning is one of the most exciting areas of research going on in machine learning and AI right now. It is going to play a heavy role in creating AI that can make decisions in dynamic environments.
A great introduction to the topic is the book Reinforcement Learning: An Introduction by Sutton & Barto. You can find the official HTML version of the 1st edition and a PDF of a recent draft of the 2nd ed. here: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html
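To make the core idea concrete, here is a toy tabular Q-learning sketch (one of the foundational algorithms covered in Sutton & Barto). The corridor environment and all parameter values are made up purely for illustration:

```python
import random

# Toy tabular Q-learning on a 1-D corridor: start at state 0,
# reward for reaching state 4. Everything here is illustrative.
N_STATES = 5
ACTIONS = [-1, 1]            # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment dynamics: move, clip to the corridor, reward at the end."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = r + GAMMA * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy should step right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The interesting part is that the reward only appears at the far end, yet the value estimates propagate backwards through the bootstrapped target until every state prefers stepping right.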
bgalbraith
|
9 years ago
|
on: Magic Leap Alleges Workers Stole Its Secrets
Bradski is also the creator of OpenCV, an industry standard computer vision library.
bgalbraith
|
9 years ago
|
on: Google supercharges machine learning tasks with TPU custom chip
IBM's TrueNorth chip takes a much more neuromorphic design approach, trying to approximate networks of biological neurons. They are investigating a new form of computer architecture that moves away from the classic von Neumann model.
TPUs are custom ASICs that speed up math on tensors, i.e. multi-dimensional arrays. Tensors feature prominently in artificial neural networks, especially deep learning architectures. While GPUs help accelerate these operations, they are optimized first and foremost for video rendering/gaming applications -- compute-specific features are mostly tacked on. TPUs are optimized solely for doing ML-related computations.
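For a sense of what "math on tensors" means in practice, here is the kind of operation such hardware accelerates: a dense-layer forward pass, which is just a large matrix multiply plus a nonlinearity. The shapes below are arbitrary illustrative choices, sketched with NumPy:

```python
import numpy as np

# A dense-layer forward pass: the workhorse tensor operation in deep learning.
# Shapes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 256))    # batch of 64 inputs, 256 features each
W = rng.standard_normal((256, 128))   # weight matrix: 256 features -> 128 units
b = np.zeros(128)                     # per-unit bias

h = np.maximum(x @ W + b, 0.0)        # ReLU(xW + b)
print(h.shape)                        # (64, 128)
```

A network is mostly a long chain of operations like this, which is why hardware specialized for dense multiply-accumulate throughput pays off.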
bgalbraith
|
10 years ago
|
on: CMU’s computer science dean on its poaching problem
You absolutely do not need a PhD for industry unless you want an R&D job in a handful of domains. A PhD is not simply learning more facts about a particular topic; it's an apprenticeship in conducting independently directed academic research. There are many topics you cannot just learn from reading online sources, e.g. most experimental work. Most of the time in a typical PhD program is spent trying to solve problems that have no easy answers and no easy guidelines to follow. While you can definitely gain similar knowledge and experience in an industry setting, you almost never have the freedom to take the 3+ years often necessary to explore a narrow topic, struggle and fail repeatedly, face and overcome crushing doubt and frustration, and do so in a generally supportive community.
bgalbraith
|
10 years ago
|
on: CaptionBot by Microsoft
bgalbraith
|
10 years ago
|
on: The Nvidia DGX-1 Deep Learning Supercomputer in a Box
You are correct. My initial response was a pedantic point about the semantic use of monopoly in this context, which isn't helpful.
I would love it if AMD cared more about GPGPU, but they don't, and NVIDIA has little incentive to make their OpenCL drivers equal to their CUDA ones.
bgalbraith
|
10 years ago
|
on: The Nvidia DGX-1 Deep Learning Supercomputer in a Box
NVIDIA does not have a monopoly in the traditional sense. But yes, they have a de facto one because there is no viable competition.
It's like saying MATLAB has a monopoly in academic research because so much of the code is written in it. That is slowly changing and moving over to Python now, which is great. Maybe OpenCL will get there someday, but I don't see it happening any time soon.
bgalbraith
|
10 years ago
|
on: The Nvidia DGX-1 Deep Learning Supercomputer in a Box
What monopoly? You totally have a choice; it's just that NVIDIA made a large bet on GPGPU and it is paying off for them. You don't see AMD heavily pushing their cards for compute purposes or building developer relations around compute.
bgalbraith
|
10 years ago
|
on: Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves
Only overheated once, though I rarely had them operating continuously for more than a few minutes at a time.
bgalbraith
|
10 years ago
|
on: Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves
Ideally, yes, we want to pre-train in a virtual environment using as close to the real robot model as possible. I worked on such a problem as part of my PhD research on mobile robots, using the Webots simulator (https://www.cyberbotics.com/overview) as my virtual environment.
In my case, I was working on biologically-inspired models for picking up distant objects. It's impractical to tune hyperparameters in hardware, so you need to be able to create a virtual version that gets you close enough. Once you can demonstrate success there, you then have to move to the physical robot, which introduces several additional challenges: 1) imperfections in your actual hardware behavior vs. idealized simulated behavior, 2) real-world sensor noise and constraints, 3) dealing with real-world timing and inputs instead of a clean, lock-step simulated environment, 4) different APIs for polling sensors and actuating servos between the virtual and hardware robots, and 5) ensuring that your trained model can be transferred effectively between the virtual and hardware robot control systems.
I was able to solve these issues for my particular constrained research use case, and was pretty happy with the results. You can see a demo reel of the robot here: https://www.youtube.com/watch?v=EoIXFKVGaXw
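As a toy illustration of the sensor-noise point above: one common trick is to train against simulated readings corrupted with the kind of noise you expect from the real hardware. The noise model and numbers here are hypothetical, not taken from my actual robot:

```python
import numpy as np

# Hypothetical noisy range sensor for simulation: additive Gaussian noise,
# clipped to the sensor's physical limits. Parameters are made up.
rng = np.random.default_rng(42)

def noisy_range_reading(true_distance_m, sigma=0.02, max_range=4.0):
    """Return a simulated ranger reading for a true distance in meters."""
    reading = true_distance_m + rng.normal(0.0, sigma)
    return float(np.clip(reading, 0.0, max_range))

# Averaging repeated noisy readings recovers the true distance,
# which is one simple way a controller can cope with sensor noise.
readings = [noisy_range_reading(1.5) for _ in range(500)]
print(round(sum(readings) / len(readings), 2))  # close to 1.5
```

A model trained against this kind of corruption is far less surprised by the real sensor than one trained on clean, lock-step simulated values.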
bgalbraith
|
10 years ago
|
on: Oculus is giving Rifts to their original Kickstarter backers
This was totally unexpected. When I saw the Kickstarter update email in my inbox, I just assumed we were getting to jump the pre-order line. This was a great goodwill move on their part.
bgalbraith
|
10 years ago
|
on: You Don't Have to Be a Scientist to Own a Proper EEG Headset
You are correct. I worked with EEG as part of my PhD research on brain-computer interfaces, and it is a super noisy signal regardless of the headset. This makes it very hard to do anything reliable with it outside a pretty narrow scope.
bgalbraith
|
11 years ago
|
on: Brain Monitors Are Going Mainstream, Despite Skepticism
These consumer-targeted devices are predominantly used to detect and act on changes in the relative power of certain frequency bands from rhythmic neural activity. These changes in activity are common across people, and don't require any specialized user-specific techniques to identify. For instance, alpha wave (8-12 Hz) power is inversely correlated with arousal/attentiveness.
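The band-power computation itself is simple. Here is a minimal sketch with NumPy, using a synthetic signal in place of real EEG; the sample rate, window length, and signal itself are made up for illustration:

```python
import numpy as np

# Relative alpha-band (8-12 Hz) power from a short signal window.
# Synthetic "EEG": a 10 Hz rhythm buried in Gaussian noise.
fs = 256                               # sample rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)        # a 2-second window
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# Relative power: alpha-band energy over total energy (DC excluded).
alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
total = spectrum[freqs > 0].sum()
print(alpha / total)
```

A device watching this ratio rise and fall over successive windows is, roughly, what "detecting changes in relative band power" amounts to.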
bgalbraith
|
11 years ago
|
on: Computational Neuroscience in Python
Very nice! I'm glad you found them useful. When I have time, I'll convert them and some unpublished examples into IPython Notebooks.
bgalbraith
|
11 years ago
|
on: Computational Neuroscience in Python
Related: a few years ago I wrote a series of blog posts with code and discussion on how to do some basic neural simulations in Python: http://www.neurdon.com/author/byron/. These include spiking leaky integrate-and-fire neurons, the Hodgkin-Huxley neuron model, and Izhikevich model neurons.
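In the spirit of those posts, here is a minimal leaky integrate-and-fire simulation. The parameter values are generic textbook choices, not taken from the articles:

```python
# Minimal leaky integrate-and-fire neuron with Euler integration.
# Parameters are generic textbook values, not from the linked posts.
dt = 0.1e-3          # time step: 0.1 ms
T = 0.1              # simulate 100 ms
tau_m = 10e-3        # membrane time constant: 10 ms
v_rest = -70e-3      # resting potential: -70 mV
v_thresh = -55e-3    # spike threshold: -55 mV
v_reset = -75e-3     # post-spike reset potential: -75 mV
R_m = 10e6           # membrane resistance: 10 MOhm
I = 2e-9             # constant input current: 2 nA

v = v_rest
spike_times = []
for i in range(int(T / dt)):
    # Euler step of tau_m * dv/dt = -(v - v_rest) + R_m * I
    v += (dt / tau_m) * (-(v - v_rest) + R_m * I)
    if v >= v_thresh:
        spike_times.append(i * dt)   # record the spike...
        v = v_reset                  # ...and reset the membrane

print(len(spike_times))
```

With these numbers the steady-state voltage (v_rest + R_m * I = -50 mV) sits above threshold, so the neuron fires regularly; drop the current below about 1.5 nA and it never spikes at all.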
bgalbraith
|
12 years ago
|
on: Yahoo breaks every mailing list in the world including the IETF's
> unlike forums, people really enjoy mailing lists. I don't think I've ever met anyone, ever, who said they liked forums.
I think it entirely depends on the purpose the communication channel is serving.
Mailing lists are transient passive participation. I can sign up to a list and never have to do another thing because I use email all the time. Occasionally a back and forth discussion might pop up, but I can easily choose to ignore it by simply glancing at the subject line.
Forums are persistent active participation. I have to specifically access the forum, possibly logging in along the way, to see what activity has happened. Many forums do enable email notifications at a set frequency: digest emails lose the benefit of the quick glance-and-decide, while per-activity emails are essentially the mailing list model. Since forums tend to encourage more siloed conversations and short, disposable responses, though, getting every activity emailed is generally not ideal.
bgalbraith
|
12 years ago
|
on: Tiny Helicopter Piloted By Human Thoughts
The current state of the art for BCI ranges from 2-3 continuously valued "channels" using motor imagery (the method used in the article) or 1 channel of 2-32+ discrete choices using a sensory stimulus-based method such as event-related potentials (P300) or steady state visual evoked potentials (SSVEP).
It is highly unlikely that an EEG BCI will ever replace any normal input method, as its performance relative to any reliable motor movement for direct control is terrible. For instance, with an eye tracker you can reliably outperform the best BCI. BCI is really aimed at severely paralyzed people who have no other means of communication. The idea is undoubtedly cool and compelling, but the practicality of BCI for healthy users is very limited.
bgalbraith
|
12 years ago
|
on: Tiny Helicopter Piloted By Human Thoughts
They are using a technique called motor imagery, which looks for small changes in synchrony in the sensorimotor rhythms (SMR). SMR-based decoding can currently only reliably detect imagined left-hand, right-hand, and combined-foot movement. When they say "raising a hand," they are not finding a pattern of activity that relates to that gesture; they are simply detecting whether a left vs. right vs. both-feet motor action was imagined. As such, you cannot, unfortunately, simply think up another gesture to add.