4258HzG's comments

4258HzG | 6 years ago | on: Don’t Confuse a Bug’s Priority with Its Severity

If people's first impression of what you want your UI to mean differs often enough to cause problems (for example, see the amount of debate here), then that is the bug, not the people misinterpreting it. The use of these terms in Jira is one of my pet peeves, since different teams often use them very differently, which can be real fun when sharing Jira boards across multiple opinionated companies.

The colloquial usage of 'severity' and 'priority' just has too much overlap. Something like pairing 'likelihood' and 'severity' assessments, as is standard in safety engineering, would still be general yet make it immediately clear why something could be severe and still low priority, especially since people commonly mean both likely and severe when ranking a problem as severe. Keeping with the author's definition of severe, renaming severity to 'System Impact' at least immediately narrows down what counts as severe, but still carries the possible implication of 'frequent'.
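
For illustration, a made-up scoring sketch (not Jira's scheme or the author's, just arbitrary labels and thresholds) showing how keeping likelihood and severity separate makes "severe but low priority" an ordinary, legible outcome rather than a contradiction:

    # Hypothetical risk-matrix style scoring; the labels and thresholds are invented.
    LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3}
    SEVERITY = {"cosmetic": 1, "degraded": 2, "data_loss": 3}

    def priority(likelihood, severity):
        # Priority is derived from both axes, so neither word has to do double duty.
        score = LIKELIHOOD[likelihood] * SEVERITY[severity]
        return "high" if score >= 6 else "medium" if score >= 4 else "low"

    print(priority("rare", "data_loss"))     # low: severe, yet low priority
    print(priority("frequent", "degraded"))  # high: moderate impact, but constant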

I think fighting for a given interpretation here without using different terms is akin to insisting that people simply understand that a 'significant difference' reported in science means 'statistically significant' as opposed to an 'important difference'.

4258HzG | 6 years ago | on: Nuclear Magnetic Resonance and Fourier Transform (2017)

Where the Fourier Transform starts to shine for me is how simply it works in multiple dimensions as long as you have Cartesian sampling. Just apply it along one dimension, then the others; because it's a linear, unitary operator it is incredibly robust, and it makes no difference in which order you apply them. Then you can combine them with other measurement types in higher-dimensional experiments (e.g. a relaxation period) and still just mindlessly apply the Fourier Transform along the frequency dimensions (first, since it's so robust) and get the desired result.
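
A minimal sketch of that separability in plain NumPy (random numbers standing in for a 2D data set, nothing NMR-specific): applying the transform one axis at a time gives the same answer in either order, and matches the full 2D transform.

    import numpy as np

    rng = np.random.default_rng(0)
    # Fake complex 2D data as a stand-in for a two-dimensional acquisition.
    data = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

    rows_then_cols = np.fft.fft(np.fft.fft(data, axis=0), axis=1)
    cols_then_rows = np.fft.fft(np.fft.fft(data, axis=1), axis=0)
    all_at_once = np.fft.fft2(data)

    print(np.allclose(rows_then_cols, cols_then_rows))  # True: order doesn't matter
    print(np.allclose(rows_then_cols, all_at_once))     # True: same as the 2D transform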

There isn't any difference in the Fourier analysis between spectroscopy and imaging. In imaging you're just encoding position as a frequency, and if you enforce Cartesian sampling the analysis remains the same, including combining extra dimensions. Spectroscopy and flow-imaging experiments can get pretty crazy, with 5-6 encoding dimensions limited only by your patience and instrument drift over longer periods (weeks).
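
A toy 1D "imaging" sketch of that point, with made-up numbers and nothing like a real pulse sequence: if you sample the object's spectrum on a uniform (Cartesian) k-space grid, a plain inverse FFT hands you the positions back, i.e. position really is just encoded as frequency.

    import numpy as np

    n = 128
    obj = np.zeros(n)
    obj[40:50] = 1.0   # invented 1D "spin density": a slab...
    obj[80] = 2.0      # ...and a bright point

    x = np.arange(n)
    k = np.arange(n)
    # Simulated acquisition: each k-space sample is the object's spectrum at that k.
    signal = np.exp(-2j * np.pi * np.outer(k, x) / n) @ obj
    # Uniform sampling means a plain inverse FFT is the whole reconstruction.
    recon = np.fft.ifft(signal)

    print(np.allclose(recon, obj))  # True: position comes back out as frequency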

The differences between spectroscopy and MRI are primarily application driven, and beyond that it is trickier to apply precise magnetic field gradient pulses than to precisely time RF pulses. Combine that with the fact that people are a lot bigger, more impatient, and more delicate than test tubes, while imaging can also be far less quantitative, and you end up designing very different NMR/MRI pulse sequences. In imaging, the speed gains and the human-safety limits on gradient slew rates (audio-frequency dB/dt induces currents in neurons) often justify the trouble of non-Cartesian and/or incomplete sampling.

If you want a next step for MRI signal processing, look into multiple-coil reconstruction techniques and how they combine the spatial sensitivity profiles of the coils with gradient imaging, without even knowing the actual sensitivity profiles a priori. Pretty much every medical MRI machine uses multiple receive coils to reduce imaging times.
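
As a hedged starting point (the simplest combination, not what clinical scanners actually run): a root-sum-of-squares of the per-coil images already gives a usable picture with no a priori sensitivity maps; methods like SENSE and GRAPPA go further and exploit the profiles to undersample k-space and speed up the scan. The object and coil profiles below are invented toy data.

    import numpy as np

    def rss_combine(coil_images):
        # coil_images: (n_coils, ny, nx) complex per-coil images.
        return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

    # Toy data: one object viewed through three made-up coil sensitivity profiles.
    ny, nx = 64, 64
    yy, xx = np.mgrid[0:ny, 0:nx]
    obj = (((xx - 32) ** 2 + (yy - 32) ** 2) < 20 ** 2).astype(float)
    sens = np.stack([np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 800.0)
                     for cx, cy in [(10, 32), (54, 32), (32, 10)]])
    coil_images = sens * obj              # each coil sees the object shaded by its profile
    combined = rss_combine(coil_images)   # usable image with no knowledge of `sens`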

Note: my user name is the proton gamma, though I left the field mid-career to do software development. (A far better career, though not as exciting if you love physics like I do.)

(edits: grammar, probably still missed a few typos)

4258HzG | 9 years ago | on: Reasons blog posts can be of higher scientific quality than journal articles

#1 Reason: Blogs aren't included in the publication count, which is what's needed to gain research funding, get hired as a professor, and get tenure.

A bit cynical, but a lot of the problems the writer suggests are more a symptom of trying to get as many papers out as possible than the cause (along with the fact that one rarely reviews the supplementary material). If scientific blogs counted like publications, many of the same problems would appear there.

4258HzG | 9 years ago | on: Why the University of California Is Appealing the CRISPR Patent Decision

Given current IP law, I think the more relevant question is how public universities should handle their patents (e.g. license them for public use, for a nominal fee, or for profit), not whether they should patent at all.

A non-existent or poorly executed patent allows anyone to follow up, as MIT/Harvard did in this case, effectively patent minor extensions to the idea, and still lock it up. Even if a public university wants its inventions to be free to the public, it needs to patent them properly if they are going to stay available for public use. E.g. BSD-style licenses for patents.

4258HzG | 9 years ago | on: Why the University of California Is Appealing the CRISPR Patent Decision

To stretch your analogy: "Tennis balls colored for enhanced human recognition." So they're patenting a method to 'improve' CRISPR, in this case 'adapting' it for use in humans. To technically use the Harvard/MIT patent you may still need the UCB one.

Why this happened, from my experience patenting through UCB and dealing with Harvard: UCB does not have the best patent department when it comes to helping inventors draft defensible patents, whether by pushing for sufficiently general language or by including defensive dependent claims (like 'for use in humans') that preclude people from patenting specific extensions, as Harvard/MIT did in this case. (They just put my patent memo into the appropriate legal format with a few minor modifications.) In contrast, Harvard/MIT as organizations are far more serious about intellectual property and have the legal departments to reflect it.

4258HzG | 9 years ago | on: Ask HN: Should I Sign an NDA??

That is basically the case when the person you're talking to is the one requiring it (a small company), when the 'contract' is basically a substitute for a hire, or when programmers are viewed there as fungible goods.

For companies large enough to have a large bureaucracy of legal staff, the standard terms are a Byzantine compromise of many different departments, business requirements, and past initiatives. The local legal staff will typically have a fixed set of contract types they'll entertain quickly (in order to get anything done). For example, they might have a mutual disclosure agreement on file. Later, if the questioner offers something important enough to get upper management's attention, minor modifications can be made within limits. Then again, you have to offer something unique and important for that to happen.

4258HzG | 9 years ago | on: Ask HN: Should I Sign an NDA??

One thing to consider is whether the client you're dealing with is requiring it, or whether their firm is requiring it by policy. If it's the latter, the kind of leeway you'll have in getting appropriate terms signed off by their legal team is quite different.

For example, for early discussions a mutual disclosure agreement is another nice way to get compliance with a company's standard legal terms (i.e. both parties agree not to reveal anything confidential or sensitive), and it can be useful for getting things started far enough to win the upper management support required to make exceptions to their standard legal procedures. (I've been in the situation where my legal team and a supplier's were far and removed enough from a project not to prioritize resolving incompatible differences in standard contract terms for months, eventually requiring a loud "nudge" from upper management behind a closed door.)

4258HzG | 9 years ago | on: We Don’t Do Legacy (2012)

>> Not to mention that MIT and other 'elite schools' barely teach well anyways.

> Ignore this person, they don't know what they are talking about.

On average these schools have more knowledgeable professors, and regardless of teaching talent they will push a course as far as the class can take it. So I found that, on reaching grad school, students coming from the Ivies had simply covered more in their courses (though not necessarily with more effort or better technique).

State schools aren't always better teaching-wise: at many research universities a teaching award can be a kiss of death for gaining tenure, and gaining tenure tends to afflict professors in the same way.

4258HzG | 9 years ago | on: Samsung’s biggest challenge now is Google software, not Apple hardware

Samsung apps are used because Samsung makes them hard to avoid: skipping their apps and custom skins takes a bit of dedication and the persistence of a reasonably technically savvy user. (Think of why people used early versions of IE, and then imagine if Windows had also gone out of its way to automatically reset the default browser to IE whenever the user changed it.)

I haven't bothered with Samsung phones since my first, after I found that their custom keyboard kept crashing and freezing (Chinese mode) and then kept reappearing over the Google one I had set after each automatic app update. My only regret is that I didn't pay the Apple tax earlier. (I'm sure there is a more permanent solution for that phone, but why should I sink more time into it when I'm one update away from having to fix some other issue again?)

If it weren't for the manufacturers, I'd like how you can customize an Android phone; it's the best feature of Android vs iOS. However, between the poor-quality customizations and the delayed-to-non-existent updates they lead to, it's also Android's worst feature.

4258HzG | 9 years ago | on: Uber takes its self-driving cars off the road after one flipped over in Arizona

As much as I dislike Uber, I hate the uniformly negative tone the media has switched to now that it's the popular thing, after having sung Uber's praises far more. The same goes for the possibilities of autonomous driving. As with Theranos, it's not as if many of these issues weren't there for some time, but we've magically transitioned from them being visionary and daring to evil.

4258HzG | 9 years ago | on: How to explain to a layperson why a developer should not be interrupted? (2011)

I would guess 'management' would be the more likely target. In jobs like sales and administration, being able to rapidly switch focus to whatever just came up is advantageous. The people on the manufacturing floor or in operations I have worked with have the same issues with being interrupted during their work, though it's often an engineer who's the culprit. (Managers can quantify 'blue collar' productivity losses more easily.)

4258HzG | 9 years ago | on: Overkill: Java as a first programming language (2010)

Or for younger programmers (pre-college), a learning-oriented language like Processing, where you can gently move into Java and get to do graphical and interactive programming quickly, seems the better option. As a teen, the lack of excitement in learning to write to files and handle text input killed my initial enthusiasm for programming and sent me into the sciences first instead. I still find it very satisfying to be able to make interactive graphics in a few lines of code, in a way that text output or static webpages never will do for me.

For the college level, I agree with the writer's suggestion of Python, not just for its ease of learning but for the breadth of libraries that make it useful for many science and engineering courses. However, I think that getting advanced (fun) features early on by picking the right library and getting it working may still be a bit too much unnecessary friction for the 'fun' programming that keeps early students engaged. Learning extra tooling and library systems is 'trivial' if you code regularly, but it is still a surprising amount of work when you're new to programming and gets in the way of early feedback. (For these sorts of reasons, I have had colleagues in the sciences hesitate at the sight of import statements, multiple libraries, and new IDEs when I suggested using something like SciPy/Python over Matlab for projects.)

4258HzG | 9 years ago | on: Google Glass is getting a second life in the manufacturing industry

Spectacles never had anywhere near the awareness or hype that Google Glass originally had, and the news I saw surrounding them was always overshadowed by mentions of what happened with Glass. I'm not saying Glass would have succeeded otherwise, but it would have made a far better impression and not set the tone for other efforts.

4258HzG | 9 years ago | on: Google Glass is getting a second life in the manufacturing industry

It's kind of clear that their quoted expert Tsai didn't interview anyone while they were using Google Glass.

"With Google Glass, it may look like you're listening to the person in front of you, but you could actually be watching a movie or looking up sports stats."

Unfortunately the problem is the opposite, and more offensive. It's very obvious from their eye movements when someone is using Glass while having a conversation with you, and it makes the user look really weird, any fashion issues aside. I found the experience quite a bit more offensive than having someone read email on their laptop while you talk to them. Unlike the laptop case, with Glass you get a clear, direct view of their eyes as they scan whatever Glass is showing them, under the obvious false pretense of giving you their full attention.

I think one of the big problems with Glass was that they picked the wrong sort of people to be the early public users who then set the tone for the product. A process that ensured only super-enthusiastic users would bother applying is also the sort that selects for those least willing to notice how other people might find certain uses of it rude and annoying.
