kevinalexbrown | 2 months ago | on: Ask HN: What Are You Working On? (December 2025)
kevinalexbrown's comments
kevinalexbrown | 4 months ago | on: Ask HN: Who is hiring? (November 2025)
we're building the standard model for bio. We're doing for biology what mathematics did for physics.
papers this month:
genomics: https://arxiv.org/abs/2509.25573
protein language: https://arxiv.org/abs/2509.22853
longitudinal EHR: https://arxiv.org/abs/2509.25591
hello world: https://standardmodelbio.substack.com/p/introducing-standard...
we're humble and ambitious: we want to be the quiet backbone of biomedical AI, but we want none of the glory of the final applications.
kevinalexbrown | 4 months ago | on: Ask HN: What are you working on? (October 2025)
We're pretty jazzed.
kevinalexbrown | 4 years ago | on: Japanese scientists develop vaccine to eliminate cells behind aging
kevinalexbrown | 4 years ago | on: “AI promised to revolutionize radiology but so far it's failing”
In reality, radiologists will not be summarily replaced one day. They will get more and more productive as tools extend their reach. This can occur even as the number of radiologists increases.
Here's a recent example where Hinton was right in concept: recent AI work for lung cancer detection made radiologists perform better in an FDA 510(k) clearance study.
20 readers reviewed all of 232 cases using both a second-reader as well as a concurrent first reader workflows. Following the read according to both workflows, five expert radiologists reviewed all consolidated marks. The reference standard was based on reader majority (three out of five) followed by expert adjudication, as needed. As a result of the study’s truthing process, 143 cases were identified as including at least one true nodule and 89 with no true nodules. All endpoints of the analyses were satisfactorily met. These analyses demonstrated that all readers showed a significant improvement for the detection of pulmonary nodules (solid, part-solid and ground glass) with both reading workflows.
https://www.accessdata.fda.gov/cdrh_docs/pdf20/K203258.pdf
(I am proud to have worked with others on versions of the above, but do not speak for them or the approval, etc)
The AI revolution in medicine is here. That is not in dispute by most clinicians in training now, nor, from all signs, by the FDA. Not everyone is making use of it yet, and not all of it is perfect (as with radiologists; just try to get a clean training set). But the idea that machine learning/AI is overpromising is like criticizing Steve Jobs in 2008 for overpromising the iPhone by saying it hasn't totally changed your life yet. Ok.
kevinalexbrown | 6 years ago | on: Ask HN: Who is hiring? (February 2020)
Responsibilities:
· Contribute to research projects to develop intelligent solutions for medical imaging and text analytics
· Conduct fast prototyping, feasibility studies for exploratory clinical research
· Support the productization of research prototypes
We look for:
· Strong research capability in computer vision, machine learning, text analytics and medical image analysis, proven by publications in journals/conferences.
· Research experience in image/text analytics using large scale, weakly supervised / unsupervised learning algorithms
· Research experience in medical image/text analysis of different modalities (CT, MRI, PET, medical reports etc.)
Email: [email protected]
kevinalexbrown | 6 years ago | on: Ask HN: Who is hiring? (January 2020)
We offer well-paid internships lasting >= 3 months, with independent moonshot projects.
Responsibilities:
· Contribute to research projects to develop intelligent solutions for medical imaging and text analytics
· Conduct fast prototyping, feasibility studies for exploratory clinical research
· Support the productization of research prototypes
We look for:
· Strong research capability in computer vision, machine learning, text analytics and medical image analysis, proven by publications in journals/conferences.
· Research experience in image/text analytics using large scale, weakly supervised / unsupervised learning algorithms
· Research experience in medical image/text analysis of different modalities (CT, MRI, PET, medical reports etc.)
Email: [email protected]
kevinalexbrown | 6 years ago | on: Breast cancer detection in mammography using deep learning approach
kevinalexbrown | 6 years ago | on: Ask HN: Who is hiring? (December 2019)
Our R&D group delivers medical image/text tools (e.g. deep learning, NLP, etc) for medical data analysis. We are well recognized for delivering cutting-edge intelligent solutions to Siemens 3D workstations and medical imaging scanners. Our group also has strong publication record in top tier journals and conferences, and several Siemens "inventor of the year" award recipients.
We offer well-paid internships lasting >= 3 months, with independent moonshot projects.
Responsibilities:
· Contribute to research projects to develop intelligent solutions for medical imaging and text analytics
· Conduct fast prototyping, feasibility studies for exploratory clinical research
· Support the productization of research prototypes
We look for:
· Strong research capability in computer vision, machine learning, text analytics and medical image analysis, proven by publications in journals/conferences.
· Research experience in image/text analytics using large scale, weakly supervised / unsupervised learning algorithms
· Research experience in medical image/text analysis of different modalities (CT, MRI, PET, medical reports etc.)
Email: [email protected]
kevinalexbrown | 6 years ago | on: Nearly 400 medical devices, procedures and practices found ineffective in study
What is the best learning rate for updating physicians (our models) from the results of RCTs (part of our loss)?
The authors reviewed all articles in three journals published (generally) between 2003 and 2017. As far as I can tell, they didn't review the time-to-correction, if any correction has been made (please point me to it if they did). It takes some time before the results of an RCT end up in established practice; I'm actually surprised the lag is so short in many cases.
It's not like there's a database where the results of every RCT are immediately updated and the physician model is retrained overnight on the new data.
Even if there were, imagine if the learning rate (so to speak) were so high that every discipline immediately changed its published best practices on the basis of a single RCT.
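The learning-rate analogy can be made concrete with a toy sketch (all numbers are hypothetical, and this is not a model of real clinical practice): treat current best practice as a scalar estimate, treat each RCT as a noisy observation of the true effect, and update with an exponential-moving-average step whose size is the learning rate.

```python
import random

def update(practice, rct_estimate, lr):
    # Move the current best-practice estimate toward the new trial's
    # estimate by a step of size lr (the "learning rate").
    return practice + lr * (rct_estimate - practice)

random.seed(1)
true_effect = 1.0          # hypothetical true treatment effect
errs_fast, errs_slow = [], []
for _ in range(500):       # repeat the whole history many times
    fast = slow = 0.0
    for _ in range(50):    # a sequence of noisy RCTs
        rct = random.gauss(true_effect, 1.0)
        fast = update(fast, rct, lr=1.0)   # adopt every single trial wholesale
        slow = update(slow, rct, lr=0.1)   # incorporate each trial cautiously
    errs_fast.append((fast - true_effect) ** 2)
    errs_slow.append((slow - true_effect) ** 2)

mse_fast = sum(errs_fast) / len(errs_fast)
mse_slow = sum(errs_slow) / len(errs_slow)
# The wholesale adopter (lr=1.0) just tracks the most recent noisy trial;
# the cautious learner (lr=0.1) ends up far closer to the truth on average.
```

A learning rate of 1.0 means practice whipsaws with every new trial; a small learning rate averages noise away, at the cost of converging slowly when the truth actually changes.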
Here's a cautionary paragraph from one of the excellent reversal studies they use:
Several limitations of the study warrant discussion. First, because we enrolled only 26% of eligible patients, our findings must be generalized cautiously. The most frequent reason that patients declined enrollment was a strong preference for one treatment or the other. Since patients' preferences may be associated with treatment outcome, our trial may be vulnerable to selection bias. Participating surgeons may not have referred potentially eligible patients because they were uncomfortable randomly assigning these patients to treatment; this form of selective enrollment may also create bias.26 Second, because the trial was conducted in academic referral centers, the findings should be generalized carefully to community settings. Third, we did not formally assess the fidelity of the physical therapists or surgeons to the standard intervention protocols. Finally, our study was not blinded, since our investigative group did not consider a sham comparison group feasible. [0]
I'm less concerned about the RCT-to-best-practice time than about the best-practice-to-typical-physician-practice time. There is a cascaded model connected to the 'complex RCT loss': from discipline-level published practice down to the individual physician treating patients. Compressing the time from RCT to individual physician is fraught with difficulties, but it could be improved.
Finally, the RCT is the gold standard, but it's not perfect, and it doesn't always translate cleanly to the individual physician's model of practice. Many best practices weren't established from RCTs either.
And an inconclusive result from an RCT is not the same thing as proving that there's no difference in outcomes, but a proper statistician can chime in there.
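A quick simulation (toy numbers, standard library only) makes the statistical point: an underpowered trial usually fails to reject the null even when a real effect exists, so "no significant difference" is not the same as "no difference".

```python
import random
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples.
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

random.seed(0)
true_effect = 0.3   # a real benefit of 0.3 standard deviations
n = 20              # a small trial: 20 patients per arm
trials, rejections = 2000, 0
for _ in range(trials):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    if abs(welch_t(treated, control)) > 2.02:  # approx two-sided 5% critical value, df ~ 38
        rejections += 1

power = rejections / trials
# Power is far below 1: most of these simulated trials come back
# "inconclusive" even though the treatment genuinely works.
```

With 20 patients per arm and a true 0.3 SD effect, the large majority of trials fail to reach significance; reading each of those failures as "proven equivalent" would be exactly the error described above.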
kevinalexbrown | 6 years ago | on: A former lead designer of Gmail fixes Gmail with a Chrome extension
kevinalexbrown | 7 years ago | on: Opinion: A.I. Could Worsen Health Disparities
If anything, a machine-learning point of view addresses his concerns better than a traditional one, because models can be updated much more quickly to correct for identified biases. Doctors spend years and years of hard work becoming efficient and effective human algorithms themselves, and updating those human algorithms in the face of newer evidence is difficult. In standard practice, biases are often invisible and uncodified to begin with. "Moral intuition" is something all doctors use, but it's also something of a black box in nearly every real-world use case.
kevinalexbrown | 8 years ago | on: Advanced Data Analysis from an Elementary Point of View (2017) [pdf]
For whatever it’s worth, he seems to be a dedicated teacher who posts self criticisms of his courses publicly online. The course this book is based on has grown quite successful as well.
I’m not sure if you’re just trolling, but encouraging others to avoid this book might be a mistake. It’s quite good.
kevinalexbrown | 8 years ago | on: Some Insights from a Julia Developer
A version of this would be: how can good package development be as easy as possible, and how can package use be as easy as possible?
I haven’t done any serious work in Julia mainly because the python libraries are mature, good, and performant enough. I can’t speak for everyone, but for end users in science labs library support is perhaps the biggest consideration for language choice.
kevinalexbrown | 8 years ago | on: The Asynchronous Computability Theorem
There are many impossibility results that can be sidestepped by relaxing some constraints (like wait-freedom), by accepting some unsolvability (in practice we don't worry too hard about whether our programs halt, and we generally trust compressed sensing results because the probability of failure is provably small, say delta), or just by changing some desiderata.
Great examples include Arrow's impossibility theorem and the impossibility theorem for clustering, each of which has workarounds if the assumptions that operationalize our intuition are changed slightly.
kevinalexbrown | 8 years ago | on: The U.S. is risking an academic brain drain
What would happen if we increased research funding by X percent? How did we settle on the current funding levels? I would be curious to see a reasonable source for this. A cursory google search mostly returned opinion pieces that we should increase funding for science. I agree, but hard(er) numbers would be better. It would be great to see a back-of-the-envelope ROI for X percent funding increase in T time. Obviously funding can be applied in many ways, and the ROI is difficult to measure, but someone must have studied it.
For the immediate future, the US remains the best place for research. But dominance can begin to change before the effects become obvious, like a large company that's still profitable long after it's become irrelevant.
kevinalexbrown | 8 years ago | on: A Startup Making Paper Out of Stone, Not Trees
Aside: supposedly global pulp production is 34 percent recycling, 45 percent from sawmill waste, and 21 percent 'logs and chips.'[0] The wikipedia article later states, from another source, that 16 percent of production comes from tree farms. The gist of your general point stands, as I understand it: we're not going out and cutting old growth or even secondary growth forests for paper (though we make use of reject trees when we target them for other reasons).
kevinalexbrown | 8 years ago | on: When sexual selection can lead to a decline in the capacity for survival
It's dangerously easy to say "oh yea, makes sense, natural selection happens by mating so if mates choose club wings, I get it. Obvious." But Prum's trying to go a step further, and test just how far out of balance and arbitrary the mate selection part can be from the direct do-not-die part of evolutionary fitness.
He proposes that we can differentiate between these two by considering that the club wings aren't actually indicators of higher direct fitness, because they hurt the ability to fly, even among females that have no need for such shenanigans. I'm not sure I totally agree with or grasp that, but at least it's an attempt to further understand and test the idea.
I'm frankly surprised by comments accusing a well established evolutionary biologist of severely misunderstanding natural selection. The author has spent his career, among other things, investigating mechanisms of evolution, and identifying and performing tests to assess their relative importance to a particular species (here's an example: http://prumlab.yale.edu/sites/default/files/prum_1997_phylog...).
You might consider whether your objections are addressed in his work not aimed at the lay population, and that your criticism really just amounts to "He wrote this at not exactly the right level of sophistication for me." Maybe that's true, but it's a pretty boring claim.
kevinalexbrown | 9 years ago | on: Crayfish kept alone found to develop higher alcohol tolerance
And you can definitely make causal inferences if you don't understand everything. If that weren't true, you wouldn't be able to infer that moving your arm makes the coffee cup in your hand move unless you knew everything about physics.
The patient is not a document: multimodal foundation models for biomedicine. JEPA's working well.