colincooke's comments

colincooke | 1 month ago | on: Booting from a vinyl record (2020)

Oh man, this reminds me of my "party trick" back in the day: saying I could tell what OS a computer was running by listening to the HDD seeking. The good old days.

colincooke | 6 months ago | on: MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline

It is worth noting that this study was, tbh, pretty poorly performed from a psychology/neuroscience perspective, and the neuro community was kind of roasting their results as uninterpretable.

Their trial design and interpretation of results are not properly done (i.e. they make an unfair comparison of LLM users to non-LLM users), so they can't really make the kinds of claims they are making.

This would not stand up to peer review in its current form.

I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.

colincooke | 11 months ago | on: Ask HN: What are you working on? (March 2025)

Nice work! Would totally have used this when I was freelancing. Honestly love the serif'd fonts, would love to see everything serif'd tbh.

Also, back when I had to do these (I used Wave), having a notes section was very useful for including a few extra details (e.g. I used to include conversion rates). Would probably be pretty easy to add.

colincooke | 1 year ago | on: Penn to reduce graduate admissions, rescind acceptances amid research cuts

Again, please read my post carefully. There is a valid critique of overhead rates, but simply doing it suddenly in this manner has little added economic benefit in the long run, while ruining lives and creating waste/chaos in the short run.

You can make a strong argument that these institutions require reform, but such reform should not be done overnight, nor through such broad strokes.

colincooke | 1 year ago | on: Penn to reduce graduate admissions, rescind acceptances amid research cuts

1. Why should the public believe that they can fix it? Perhaps they can't; that's not entirely my point. My point is that if the government firmly believes that a change is necessary, there are _simple_ ways of achieving such a change without causing such chaos, waste, and hardship. Perhaps a phased-in approach, or other mechanisms. Overnight shock therapy offers very little economic benefit while having very harsh personal and institutional costs.

2. What is illegal about the change? The NIH overhead rate is actually negotiated directly between each institution and the NIH, following a process put into law. This is why a federal judge has blocked this order [1]. I'm far from a lawyer, but my read is that this change would need to come through Congress or a re-negotiation of the rates through the mandated process.

[1]: https://www.aamc.org/news/press-releases/aamc-lawsuit-result...

colincooke | 1 year ago | on: Penn to reduce graduate admissions, rescind acceptances amid research cuts

The entire academic industry is in turmoil; the uncertainty about how bad things could get is probably the worst of it, as universities are having to plan for some pretty extreme outcomes, even if they're unlikely.

For those who are questioning the validity of a 59% (or higher at some other institutions) overhead rate: your concerns are worth hearing and a review could be necessary, but oh my, please not like this. This was an overnight (likely illegal!) change made with no warning and no consultation.

If the government decided that a cap was necessary, it should be phased in to allow institutions to adjust their operational budgets gradually, rather than this shock therapy that destroys lives and WASTES research money (as labs are potentially unable to staff their ongoing projects). A phased-in approach would have nearly the same long-term budget implications.

Are there too many admin staff? Likely. Is this the right way to address that? Absolutely not.

For those who are unfamiliar with how career progress works in academia: it is so competitive that even a year or two "break" in your career likely means you are forever unable to get a job. If you're on year 12 of an academic career, attempting to get your first job after your second (probably underpaid) postdoc, and suddenly there are no jobs, you can't just wait it out. You are probably just done, and out of the market forever, as you will lose your connections and have a gap in your CV, which in this market is enough to disqualify you.

colincooke | 1 year ago | on: Humans have caused 1.5 °C of long-term global warming according to new estimates

There is also a good case to be made that the prices being bandied around are actually much too high [1].

TL;DR is three major factors:

1. The agencies that are doing the estimates are _very_ bad at exponential development curves (cough cough IEA estimating solar [2])

2. Unfortunately much of the developing world's economy is not growing as fast as we previously thought it would (similar thing happening with birthrates)

3. Many of these estimates treat costs as absolute and _not_ marginal, which is just wrong IMO. We are going to need the energy either way, so we should be talking about the "green premium" (as far as it exists), not how much it'll cost to generate XX TWh of energy

[1]: https://www.economist.com/interactive/briefing/2024/11/14/th...

[2]: https://www.economist.com/interactive/briefing/2024/11/14/th...
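The absolute-vs-marginal distinction in point 3 can be made concrete with a back-of-the-envelope sketch. All the numbers below are illustrative assumptions, not figures from the linked article:

```python
# Illustrative numbers only: if the energy is needed either way, only the
# *premium* of the green option is a cost of decarbonization, not the
# whole generation bill.
demand_twh = 100              # energy needed regardless of source, TWh
fossil_cost_per_twh = 50e6    # assumed $/TWh for the fossil option
green_cost_per_twh = 55e6     # assumed $/TWh for the green option

# The "absolute" framing counts the entire green bill as a climate cost.
absolute_cost = demand_twh * green_cost_per_twh

# The "marginal" framing counts only the green premium.
green_premium = demand_twh * (green_cost_per_twh - fossil_cost_per_twh)

print(f"absolute: ${absolute_cost:,.0f}")  # absolute: $5,500,000,000
print(f"premium:  ${green_premium:,.0f}")  # premium:  $500,000,000
```

With these made-up prices the headline "cost" is 11x larger than the actual premium of going green, which is the point being argued.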

colincooke | 1 year ago | on: Functional architecture of cerebral cortex during naturalistic movie watching

Quite a fun paper, but very difficult to draw conclusions from. Their headline finding is that they created a map between resting state and "watching a movie".

This is, for better or worse, the kind of research you can only do at institutions which have free fMRI scanning (MIT, Princeton, Harvard, etc.). No behavioural links, only very detailed activation maps that we can't really draw conclusions from. A missed opportunity IMO to spend these kinds of scanning resources on a more narrowly focussed task with some behavioural outcome they can link to the brain data.

colincooke | 1 year ago | on: Using AI Generated Code Will Make You a Bad Programmer

To me, the issue with AI-generated code, and what is different from prior innovations in software development, is that it is the wrong abstraction (or one could argue not even an abstraction anymore).

Most of SWE (and much of engineering in general) is built on abstractions -- I use NumPy to do math for me, React to build a UI, or Moment to do date operations. All of these libraries offer abstractions that give me high leverage on a problem in a reliable way.

The issue with the current state of AI tools for code generation is that they don't offer a reliable abstraction; instead the abstraction is the prompt/context, and the reliability can vary quite a bit.

I would feel like one hand is tied behind my back without LLM tools (I use both Copilot and Gemini daily); however, the amount of code I allow these tools to write _for_ me is quite limited. I use these tools to automate small snippets (Copilot) or help me ideate (Gemini). I wouldn't trust them to write more than a contained function, as I don't trust that it'll do what I intend.

So while I think these tools are amazing for increasing productivity, I'm still skeptical of using them at scale to write reliable software, and I'm not sure if the path we are on with them is the right one to get there.

colincooke | 2 years ago | on: An automatic indexing system for Postgres

Seeing as you're here: the reason the price is too high for us is that the "SCALE" plan is too much for us and the "PRODUCTION" plan is too little. I'd happily pay $200/month for two servers, but can't justify $400/month for two (we only have two right now).

colincooke | 2 years ago | on: Comparing humans, GPT-4, and GPT-4V on abstraction and reasoning tasks

My wife studies people for a living (experimental cognitive psychologist); the quality of MTurk is laughable. If that's our standard for higher-level cognition, then the bar is low. You'll routinely see the most basic "attention check" questions ("answer option C if you read the question") be failed; honestly, at this point I think GPT-4 would do a better job than most MTurkers at these tasks...

She has found that Prolific is substantially better (you have to pay more for it as well); however, that may only be because it's a higher-cost/newer platform.
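For anyone unfamiliar with how those attention checks get used, here is a minimal sketch of the filtering step; the field names and data are made up for illustration:

```python
# Hypothetical survey responses; "attention_check" holds the option the
# participant chose on a question whose text instructed them to pick "C".
responses = [
    {"worker": "w1", "attention_check": "C", "score": 0.9},
    {"worker": "w2", "attention_check": "A", "score": 0.4},  # failed the check
    {"worker": "w3", "attention_check": "C", "score": 0.7},
    {"worker": "w4", "attention_check": "B", "score": 0.5},  # failed the check
]

# Keep only participants who demonstrably read the question.
passed = [r for r in responses if r["attention_check"] == "C"]

print(f"{len(passed)}/{len(responses)} passed")  # 2/4 passed
```

On low-quality panels the pass rate on checks this trivial can be alarmingly low, which is the complaint above.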

colincooke | 2 years ago | on: An automatic indexing system for Postgres

I had no idea what was going on under the hood, but I used pganalyze for about 6 months after launching a new product. It was excellent at suggesting indexes and monitoring unused ones.

However, after a few months the ROI crept down until it wasn't worth it anymore (access patterns stabilized). I'm tempted to bring it back once in a while, but the price tag keeps me from having it always on.

colincooke | 2 years ago | on: X-Change: Electricity – On track for disruption

My favorite thing about this report is the section on energy modelers (section 2.5). They show the (now famous) graph where large institutions (like the IEA) regularly predict that solar will start to grow linearly rather than exponentially due to constraints in our energy system. This is despite every previous forecast that did this being wrong, with the number of new solar installations instead growing exponentially YoY.

What is interesting is that despite these constant misses, the IEA is constantly used as the definitive source for energy modeling across the world.
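The size of that forecasting miss is easy to see with a toy projection. The numbers below are made up for illustration, not real IEA or solar data:

```python
# A forecaster who freezes the current year's *absolute* addition and
# projects it forward linearly falls behind an exponential trend fast.
capacity = 100.0   # hypothetical installed capacity in year 0 (arbitrary GW)
growth = 1.25      # assumed 25% year-over-year growth

# The "linear" forecast assumes the year-0 addition repeats every year.
linear_step = capacity * (growth - 1)

actual, forecast = capacity, capacity
for _ in range(10):              # project 10 years out
    actual *= growth             # exponential reality
    forecast += linear_step      # linear projection

print(round(actual), round(forecast))  # 931 vs 350: off by ~2.7x in a decade
```

Each year the forecast is re-issued with the same "growth flattens out now" shape, which is exactly the repeated pattern the report's graph shows.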

colincooke | 2 years ago | on: Astral

This along with pydantic [0] means that 2/3 of my favorite python open source projects are now commercially backed. I wonder how long FastAPI will last?

As an aside, what is the issue with versioning these days? Ruff and FastAPI both have massive user bases and reliable codebases, but haven't released a v1.0.0! Ruff hasn't even got as far as v0.1.0.

[0]: https://techcrunch.com/2023/02/16/sequoia-backs-open-source-...

colincooke | 3 years ago | on: Super Resolution: Image-to-Image Translation Using Deep Learning in ArcGIS Pro

I should have been more nuanced I suppose. There is a time and place for these kinds of image "enhancements", they just don't belong in ESRI's scientific GIS platform. Folks don't view these images for pleasure (or at least very few do), they are typically used to analyze the satellite data or georeference other imagery.

Deep learning image enhancement is totally appropriate in your smartphone, as there the goal is not accuracy but perceived quality. Doing this to satellite imagery, where the primary consumer cares about accuracy, is what I call "reckless".

colincooke | 3 years ago | on: Super Resolution: Image-to-Image Translation Using Deep Learning in ArcGIS Pro

This whole concept is so reckless in realms where the image content actually matters, and people keep doing it anyway. You cannot CREATE information. You can infer it in certain situations, but if you infer the information and then analyze it, you are setting yourself up to make mistakes by over-extrapolating a bias/trend in your data to images where you have no idea if that inference is valid.
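The "you cannot CREATE information" point can be shown with a toy forward model (a minimal sketch I made up, not anything from ESRI's pipeline): two different high-resolution images that collapse to the identical low-resolution image, so no upsampler can distinguish them from the measurement alone.

```python
def downsample(img):
    """2x2 average pooling: the forward model a super-resolver must invert."""
    h, w = len(img), len(img[0])
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

# Two *different* high-res images (a checkerboard and a flat patch)...
hi_checker = [[0.0, 8.0], [8.0, 0.0]]
hi_flat    = [[4.0, 4.0], [4.0, 4.0]]

# ...produce the *same* low-res image, so the inverse problem is ambiguous.
assert downsample(hi_checker) == downsample(hi_flat) == [[4.0]]
```

Any detail a super-resolver adds back here is inferred from priors over similar-looking images, not recovered from the measurement, and that inference can silently be wrong for the image you actually care about.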

This was a big thing in the medical imaging community (where I did my stint as a CV researcher): folks were hallucinating microscope images and CT scans with no information-theoretic justification as to why it worked.

Super resolution IS possible, but it must be done by synthesizing new pieces of information, not by inferring based on what other similar looking objects looked like. A cool technique by my former advisor does this with microscopes [1].

Deep learning has a place here, just not as a "lets create information" step, but as a way to learn how to synthesize additional information about images from more sources (i.e. more similar to how Google does Night Sight [2]).

Edit: if you want to see (an attempt) at using deep learning in this field you can checkout one of my papers [3].

[1]: https://en.wikipedia.org/wiki/Fourier_ptychography

[2]: http://graphics.stanford.edu/papers/night-sight-sigasia19/ni...

[3]: https://openaccess.thecvf.com/content/ICCV2021/html/Cooke_Ph...

colincooke | 3 years ago | on: Study Suggests Fructose Could Drive Alzheimer's Disease

While I agree that these theory-based papers are useful, and are often the precursor to experiments, I believe the general understanding of "study has found X" in pop culture is that there is "hard evidence" of the finding being touted. Theories are risky to place too much credence in without being steeped in the field yourself (is this a theory most people in the field agree with, or is the one suggesting it an outlier?).

As usual science communication is never done as well as we could all hope, but I personally like this "hierarchy of evidence" approach in understanding if something is ready to be consumed by the general public, rather than requiring further discussion with the scientific community.
