bgalbraith's comments

bgalbraith | 13 years ago | on: Thesis Hatement

I agree with your sentiment about the lamentable, though typical, HN response. However, it is entirely in keeping with the site's typical audience. Almost 50% of HN readers are under the age of 25 [1]. HN is a product of Y Combinator, an accelerator that predominantly focuses on software-based startups frequently run by kids in their 20s. The average perspective is highly skewed and very different from that of someone several years into a non-CS technical PhD, let alone a humanities one.

[1] http://royal.pingdom.com/2012/08/21/report-social-network-de...

bgalbraith | 13 years ago | on: Ph.D. Bust, Pt. 2: How Bad Is the Job Market For Young American-Born Scientists?

The initial motivation for these articles - the government says we need more STEM students, but look how many underemployed PhDs there are! - is flimsy. The US does not necessarily need more graduate students; what it needs is more skilled technical workers, especially in manufacturing.

Few students who go into PhD programs anticipate ending up in industry, unless their goal is R&D. The opportunity cost is so great that in most cases you are better off going straight into industry if that is your ultimate goal. As a current PhD student, my advice to anyone considering it is: don't, unless you are truly passionate about your research field.

bgalbraith | 13 years ago | on: Starbucks acquires Teavana for $620 Million

I totally agree. The stores are attractively laid out, and I did manage to get a nice in-cup loose leaf tea infuser, but I disliked my entire experience in the store. The hard-sell treatment I got was really unwelcome, and the tea itself was way overpriced. I won't go into another one, and I don't recommend anyone go there.

bgalbraith | 13 years ago | on: Your Brain Can Be Hacked

OK, sensationalistic headlines aside, this is what is actually going on.

Using EEG, you can look for something called a P300 Event Related Potential (ERP). This is a positive deflection from the baseline activity in the brain signals approximately 300 milliseconds after an anticipated event occurs. Note two key facts about this:

1) The P300 actually varies by person; it can appear sooner or much later than 300 ms and with different amplitudes. Because of this, a per-user calibration phase is required to train the classifier.

2) The P300 happens when an event occurs that the subject is anticipating or recognizes, so they have to be primed in some sense. For instance, the researchers asked subjects to think of an imaginary PIN, then flashed single digits at them one at a time and tried to infer the first digit of the PIN from the responses. Because they were thinking of, say, 1234, a P300 may have been generated when 1 flashed on the screen.

What the researchers did was interesting, in that they made the case for potential malware in a consumer BCI game. Their accuracy rates weren't that great, however. This is a far far cry from nefarious agents pulling secret info from your brain.
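To make the averaging-and-scoring idea behind P300-based digit inference concrete, here is a toy sketch in pure Python. The data, window indices, and amplitudes are entirely synthetic and made up for illustration; a real system would use calibrated per-subject classifiers, not a simple window mean.

```python
# Toy P300 digit inference: for each flashed digit, average the EEG epochs
# time-locked to that flash, then pick the digit whose average amplitude in
# the (assumed) P300 window is largest.

def mean_epoch(epochs):
    """Average a list of equal-length epochs sample-by-sample."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def infer_digit(epochs_by_digit, window):
    """Pick the digit with the largest mean deflection inside the window."""
    start, stop = window
    scores = {}
    for digit, epochs in epochs_by_digit.items():
        avg = mean_epoch(epochs)
        scores[digit] = sum(avg[start:stop]) / (stop - start)
    return max(scores, key=scores.get)

# Synthetic example: only digit 1 carries a positive bump inside the window.
flat = [[0.0] * 10 for _ in range(3)]
bump = [[0.0, 0.0, 0.0, 1.0, 1.2, 0.9, 0.0, 0.0, 0.0, 0.0] for _ in range(3)]
epochs = {0: flat, 1: bump, 2: flat}
print(infer_digit(epochs, window=(3, 6)))  # -> 1
```

Averaging across repeated flashes is the key trick: the P300 is small relative to background EEG, so single epochs are too noisy to classify reliably.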

bgalbraith | 13 years ago | on: Ask HN: I'm a PHP developer, should I learn Rails or Django?

Like any language/platform, they both solve particular problems really well and then are OK or downright messy for the rest. I've developed applications for both platforms, and will more likely choose Django over Rails at the moment, but mostly because I work with Python a lot in general.

Ruby is a fun language and Rails is, essentially, its killer app. Programming Rails apps means learning a lot of conventions, a lot of "the right way" to do things. It's not just about code style, but about good software engineering practices -- testing, database migrations, and multiple environments are all built in and easy to use. There are also some tremendous environment and deployment tools out there. RVM and bundler make dependencies easy to manage, and HAML greatly cleans up the HTML side of things. As mentioned elsewhere, Rails development can be really fast once you're up to speed, but getting there takes time, and walking into maintaining an existing, reasonably complex project can cause serious headaches.

I love Python and use it all the time. It does have a lingering problem with libraries that haven't yet been ported to Python 3, Django being one of them (if you want to try Django, use Python 2.7). Django was built for a newspaper website, so it has some slightly odd conventions that can be confusing at first. Notably, it makes a distinction between an app (a reusable piece of functionality) and a site (comprising one or more apps). Its default template engine is deliberately strict and has its own DSL that takes some getting used to. Django generates far fewer files than Rails by default and generally feels more compact. It also has a really nice admin interface out of the box.

bgalbraith | 13 years ago | on: Scientists trace a wiring plan for entire mouse brain

The title is a little misleading. What they've done is effectively created hundreds of snapshots of neural connectivity, each focusing on how one particular region of the brain hooks into another. The goal is to make this data available so experts and hobbyists alike can potentially identify connections that offer insight into brain function and disease.

An analogy would be trying to reconstruct the street layout of a city by taking pictures from a plane as it flew over. Each time you fly past, you might get a different angle, different weather conditions, or try a different camera. Now you have hundreds of photos, and you are hoping people who know things about cities will find useful information there, like where the best restaurants might be located.

Disclosure: I worked on the front-facing side of this project (data browser and image viewer).

bgalbraith | 13 years ago | on: Startup University

I think it's more correct to say that in almost all cases, the purpose of academic research is to advance human knowledge, not produce a commercial product. Academia is not industry, nor should it be. What drives an academic scientist is the pursuit of knowledge and the recognition from peers for being the one who formulated the new theorem or found the new approach. Going from proof of concept to polished product is not academically interesting and, as pointed out, not valued by the degree-granting or tenure process.

Many universities realize that there may be valuable technologies that need to be extracted and developed. Business schools try to match MBA students with Engineering students, offices of tech transfer try to offer some support in this area as well. Schools are offering specific advanced programs in commercialization of academic research (e.g. Notre Dame's ESTEEM program http://esteem.nd.edu/). Even some companies are trying to help this process, like TandemLaunch (http://www.tandemlaunchtech.com/).

bgalbraith | 14 years ago | on: What happened to cyberpunk?

Speculative science fiction is not so much a window into the future as a reflection of the present in which it was written. By drawing on current trends, authors can extrapolate a future. The cyberpunk aesthetic arose around the same time as the personal computer, the early Internet, 80's corporate greed, continuing urban sprawl, the Japanese economic bubble leading to property purchases all over the world, etc. -- inspiration for many themes that appear in cyberpunk works. Many of these things have ceased to be novel (at least in the same way) or to exist at all. For instance, computers and connectivity are ubiquitous now, massive conglomerate corporations haven't totally taken over for governments (depending on who you ask, at least), China has somewhat replaced Japan, and the world has moved on. Unless they were just writing genre fiction, authors like Gibson would not keep cyberpunk going, because the world today is very different from the one that spawned it.

Another thought is to compare it to the other famous "-punk" style, steampunk. Steampunk benefits from the fact that all the technological foundation (steam, mechanical, analog) it is based on exists. I'd argue that the genre can thrive because a well-defined foundation exists from which to create. With cyberpunk, some of the foundations exist (computer hacking and espionage), but many are still left as future speculation (synthetic organs, cybernetic implants). It becomes harder to have a consistent foundation when there are still moving targets. I guess we can still have trench coats and mirror shades ;)

I really enjoy the cyberpunk aesthetic, and I wish it had more of a footprint in current music/film/TV. It may be though that true cyberpunk is really a product of the 80's. The spirit of what drove it, technology-driven dystopia, continues to this day.

bgalbraith | 14 years ago | on: Leap: A new gesture based interface for devices

I work on adaptive mobile robots as part of my research, and I'd be very interested to see how the LEAP compares to the Kinect in this area. I submitted a developer kit request, so maybe I'll get to find out.

Also, from the Ars Technica post on LEAP:

"The company says the breakthrough in resolution comes not from the hardware, which consists of relatively standard parts, but from what CTO David Holz calls 'a number of major algorithmic and mathematical problems that had not been solved or were considered unsolvable.'"

I'm conflicted by that statement. As a current academic, I hope they publish these supposed breakthroughs, as hiding them behind trade secrets makes me sad. As an entrepreneurial-minded person, however, I understand the desire for competitive advantage.

bgalbraith | 14 years ago | on: Python 2.8 Un-release Schedule

I should clarify that Python 3, by itself, is fine. I had the unfortunate situation of having to use it for a task for which no suitable library existed, which led to a significant amount of unwelcome additional effort. However, until those library gaps are filled, it makes no sense to move forward from 2.7.

Thanks for the Pillow link, btw. I was unaware of it.

bgalbraith | 14 years ago | on: Python 2.8 Un-release Schedule

I do a lot of scientific computing in Python and have had awful experiences with Python 3. We encourage everyone who comes into our lab to work with 2.7 because the libraries are all there. It's true that Numpy and Scipy now build for Python 3, but another key library, PIL, is lost in limbo with no clear timeline for porting last I checked.

The only reason I've used Python 3 at all was because of a project involving Blender. I needed to do in-memory JPEG compression to quickly stream images from the game engine. In Python 2, this is a couple lines of code using PIL. Instead, I ended up having to write my own pyjpeg module that provided a ctypes interface to a custom libjpeg-based compression library. I'm proud of the result, but the aggravation and frustration that entailed removed any desire I have to move to Python 3.

bgalbraith | 14 years ago | on: IBM: Mind reading is less than five years away. For real.

No, I haven't, though I'm interested in which library you are referring to. We've been developing our own Python wrapper for their API, though this is to share a common interface with the other EEG DAQ (e.g. g.tec) Python wrappers we've been developing.

bgalbraith | 14 years ago | on: IBM: Mind reading is less than five years away. For real.

When talking about EEG-based "mind reading", there are three primary methods currently under study (when looking at locked-in patients at least):

1) P300 - This refers to a predictable change in the EEG signal that happens around 300 milliseconds after something you were expecting happens. For example, if I am looking for a particular letter to flash amongst a grid of letters all randomly flashing, a P300 will be triggered when the letter I want flashes.

2) SSVEP - This stands for steady state visually evoked potential. This approach uses EEG signals recorded over the visual cortex, which responds to constantly flickering stimuli. Given a few seconds, the power at the frequency of the attended stimulus increases in the EEG, which can then be detected and used to make a decision.

3) SMR - This stands for sensorimotor rhythms, and is an approach that looks for changes in EEG activity over the motor cortex. Successful approaches have been able to identify when you imagine clenching your left or right fists, or pushing down on your foot. Unlike the other two, this does not require external stimuli.

SMR is the most like what we consider mind reading, since the user initiates the signal while the other two infer what a person is looking at. It is limited to only 2-3 degrees of freedom at the moment, however, and it is the hardest signal to work with. It is susceptible to external factors such as the current environment and mental state, and not everyone seems to be able to generate the needed signals. SSVEP, while lacking the wow factor of SMR, is much easier to work with and is a much more stable signal.
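As a rough illustration of the SSVEP idea (a hypothetical sketch with synthetic data, not an actual lab pipeline), you can project a window of signal onto each candidate flicker frequency and pick the one with the most power:

```python
import math

def band_power(signal, freq, fs):
    """Power of `signal` at `freq` Hz, sampled at `fs` Hz (single DFT bin)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / n

def detect_ssvep(signal, candidates, fs):
    """Return the candidate flicker frequency with the most power."""
    return max(candidates, key=lambda f: band_power(signal, f, fs))

# Synthetic 2-second trace at 256 Hz: a dominant 12 Hz oscillation
# (the "attended" stimulus) plus smaller 8 and 15 Hz components.
fs = 256
t = [i / fs for i in range(fs * 2)]
sig = [math.sin(2 * math.pi * 12 * x)
       + 0.3 * math.sin(2 * math.pi * 8 * x)
       + 0.3 * math.sin(2 * math.pi * 15 * x) for x in t]
print(detect_ssvep(sig, candidates=[8, 12, 15], fs=fs))  # -> 12
```

In practice you would use proper spectral estimation and harmonics of each flicker frequency, but the decision rule is the same: the attended stimulus stands out in the frequency domain.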

Disclosure: I work in this area. Here's a flashy NSF video highlighting our lab: http://www.nsf.gov/news/special_reports/science_nation/brain...

bgalbraith | 14 years ago | on: IBM: Mind reading is less than five years away. For real.

As a current PhD student working in this area, I caution you about getting too excited about the Emotiv EPOC. We've got one in the lab we've started to work with as a potential low-cost EEG system. The out-of-the-box software is kinda hokey, so you may end up with an expensive novelty you use once or twice.

On the technical side, it does seem to be the best current option for consumer EEG, though most of these devices are actually strongly influenced by, if not heavily reliant on, facial muscle movements.

bgalbraith | 15 years ago | on: What is a PhD Really Worth?

tl;dr - we learn how to get by with less...

Thanks Nature, I'm glad that the main value you see in our spending 4+ years in advanced scientific study amounts to building a healthy respect for free food.

In pure economic terms, yes, the PhD almost always seems like the wrong option. I'm curious about other factors, such as general wellbeing and quality of life. Though chasing nonexistent tenure-track positions while hopping from postdoc to postdoc probably isn't all that great, either. Straight to industry for me.

bgalbraith | 15 years ago | on: Notice: Experimenting with HN

Apologies if this has been suggested before.

I'm curious if something like the BCS ranking system in college football could work for online communities. The specifics don't translate, but the general idea is to use a weighted combination of human- and machine-generated rankings. This can be seen as keeping the user-driven voting system but tempering it with an impartial community-spirit moderator in the form of a machine learning algorithm.

How could this be applied to HN? Let's leave the standard karma/voting system as it exists, as that seems to generally work.

Next, determine the overall distribution of votes per comment. This allows things like z-scores to be computed that flag when a particular comment has received significantly more votes than usual.

Next, train a machine learning model on a corpus of comments. Something as straightforward as a Bayesian filter can work, though self-organizing maps also have potential. This is effectively the same thing as spam filtering, but instead of simply flagging something as spam, the model would provide its own +/- vote. The initial training would start from a baseline of existing comments, and then the model would be updated periodically, say once a night, with recently added comments and votes. Additional information, such as the karma of the commenter, could also be incorporated.

The final ranking then, which here would be how high up on the page it appears, would be a weighted combination of user votes, z-score scaling, and machine votes.
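A minimal sketch of that final combination step (the weights, vote distribution, and machine vote below are all placeholder values, not a tuned scheme):

```python
def z_score(votes, mean, std):
    """How unusual this comment's vote count is against the site-wide distribution."""
    return (votes - mean) / std if std else 0.0

def rank_score(votes, machine_vote, mean, std, w_human=0.7, w_machine=0.3):
    """Weighted mix of z-scored human votes and a machine vote in [-1, 1]."""
    return w_human * z_score(votes, mean, std) + w_machine * machine_vote

# Example: a comment with 12 votes against a distribution of mean 4, std 4,
# plus a classifier that mildly approves (+0.5).
print(round(rank_score(12, 0.5, mean=4, std=4), 2))  # -> 1.55
```

The z-score term is what keeps a single viral comment from dominating purely on raw vote count, while the machine term nudges rankings toward (or away from) the learned community norms.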

bgalbraith | 15 years ago | on: Introduction to Artifical Neural Networks

An excellent resource to get a more comprehensive review of ANNs is Neural Network Design (http://hagan.okstate.edu/nnd.html). Unfortunately, the book itself is out of print, but that link has some sample chapters and used copies are floating around. The book was written by the authors of the Neural Network Toolbox for MATLAB, so naturally all the examples and demos referenced are in MATLAB.

bgalbraith | 15 years ago | on: Is Stealth Mode Stupid?

This seems like one of the few cases where full-on stealth mode makes sense, and even then, only if the team is slow and the work so far amounts to nothing more than an elevator pitch.

If the idea is something a bigger competitor would actually want, they are probably better off letting the team do the ground work, then either acquiring them outright or copying what they did. I would guess the biggest threats are from other motivated small teams that could execute the idea faster to achieve the aforementioned acquisition exit strategy.

bgalbraith | 15 years ago | on: Ask HN: Leaving college to work on a startup?

As someone who has been in a similar situation, think honestly about why you are disenchanted. Is it because you are taking required classes that don't interest you at the moment and the homework/projects seem like tedious busy work? Is it because you think your degree will be a piece of paper leading to a 9-to-5 in a cubicle, if you're lucky? Or is it because you're driven to work on a project that you hack away at day and night, and class seems like an annoyance that gets in the way?

If it's the latter, and you have a promising opportunity to roll into right away, then there's reason to think hard about your options. Otherwise I'd highly recommend you stick it out and get the degree.
