GloamingNiblets's comments

GloamingNiblets | 3 months ago | on: Cognitive and mental health correlates of short-form video use

The nature of the content is an important variable to control for in future work, but the primary negative impact appears to come from short-form video's devastating effect on human attention.

From the paper: "repeated exposure to highly stimulating, fast-paced content may contribute to habituation, in which users become desensitized to slower, more effortful cognitive tasks such as reading, problem solving, or deep learning. This process may gradually reduce cognitive endurance and weaken the brain’s ability to sustain attention on a single task... potentially reinforcing impulsive engagement patterns and encouraging habitual seeking of instant gratification".

GloamingNiblets | 3 months ago | on: Waymo robotaxis are now giving rides on freeways in LA, SF and Phoenix

I don't have any specific knowledge of Waymo's stack, but I can confidently say Waymo's reaction time is likely worse than an attentive human's. By the time sensor data makes it through the perception stack, the prediction/planning stack, and back out to the controls stack, you're likely looking at >500 ms. Waymos do have the advantage of consistency, though (they never text and drive).
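To make the >500 ms estimate concrete, here is a toy latency budget in Python. The stage names and timings are invented round numbers for illustration, not anything known about Waymo's actual pipeline:

```python
# Illustrative end-to-end latency budget for a sensor-to-actuation pipeline.
# All timings are made-up round numbers, not Waymo's actual figures.
stages_ms = {
    "sensor capture + ingest": 100,   # e.g. one sweep of a 10 Hz lidar
    "perception": 150,
    "prediction + planning": 200,
    "controls + actuation": 100,
}

total_ms = sum(stages_ms.values())
print(f"end-to-end reaction time: {total_ms} ms")  # prints 550 ms
```

Even with generous per-stage numbers, the stages add up quickly, which is why serial pipelines tend to land above the ~250 ms reaction time often quoted for alert human drivers.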

GloamingNiblets | 7 months ago | on: A Photonic SRAM with Embedded XOR Logic for Ultra-Fast In-Memory Computing

The von Neumann architecture is not ideal for all use cases; ML training and inference are hugely memory-bound, and a ton of energy is spent moving network weights around for just a few OPs. Our own squishy neural networks can be viewed as a form of in-memory computing: synapses both store network properties and execute the computation (there's no need to read out synapse weights for calculation elsewhere).
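As a back-of-envelope illustration of why data movement dominates, here is a sketch comparing the energy to fetch a weight from DRAM against the energy to compute with it. The figures are rough 45 nm numbers from Horowitz's ISSCC 2014 talk; treat them as order-of-magnitude only:

```python
# Back-of-envelope: energy to move a weight vs. energy to use it.
# Approximate 45 nm figures (Horowitz, ISSCC 2014), order-of-magnitude only.
DRAM_READ_PJ = 640.0   # ~energy to fetch one 32-bit word from off-chip DRAM
FP32_MULT_PJ = 3.7     # ~energy for one 32-bit float multiply
FP32_ADD_PJ  = 0.9     # ~energy for one 32-bit float add

mac_pj = FP32_MULT_PJ + FP32_ADD_PJ          # one multiply-accumulate
ratio = DRAM_READ_PJ / mac_pj
print(f"fetching a weight costs ~{ratio:.0f}x one MAC")
```

If each fetched weight is used for only a handful of OPs, the energy bill is almost entirely data movement, which is exactly the cost in-memory (or in this case photonic in-memory) computing tries to eliminate.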

It's still very niche but could offer enormous power savings for ML inference.

GloamingNiblets | 9 months ago | on: Compiling a neural net to C for a speedup

Thank you for the excellent writeup of some extremely interesting work! Do you have any opinions on whether binary networks and/or differentiable circuits will play a large role in the future of AI? I've long had this hunch that we'll look back on current dense vector representations as an inferior way of encoding information.

GloamingNiblets | 10 months ago | on: Does RL Incentivize Reasoning in LLMs Beyond the Base Model?

Thanks for sharing. I had trouble reading the transcript, so here is Claude's cleaned-up version and summary:

This is the last thing I want to highlight: the section on why RL works. They evaluate pass@K and maj@K. Maj@K is majority voting: you have a model and a question, and you output not just one answer but a ranked set, say your top 20 answers, where answer 0 is the one the model most wants to give, then the second, the third, and so on. They could all be correct, just different reformulations of the same answer or different derivations stated in different ways. Pass@K asks whether the correct answer appears among the top K results; maj@K asks how often majority voting over the top K would be correct.

There's a slight difference between the two, and that difference is made more drastic by reinforcement learning. They say: "As shown in figure 7, reinforcement learning enhances maj@K performance but not pass@K. These findings indicate that reinforcement learning enhances the model's overall performance by rendering the output distribution more robust." In other words, the improvement comes from boosting the correct response within the top K rather than from enhancing fundamental capabilities. This is something we've come to learn in many different ways from reinforcement learning on language models, and even from supervised fine-tuning: most likely, the capability to do all of these things is already present in the underlying pre-trained language model.

Summary: Reinforcement learning improves language model performance not by enhancing fundamental capabilities but by making the output distribution more robust, effectively boosting correct responses within the top results rather than improving the model's inherent abilities.
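For concreteness, here is a minimal per-question sketch of the two metrics as described in the transcript. The function names and toy data are my own; the paper estimates these quantities by aggregating over many sampled completions per question:

```python
from collections import Counter

def pass_at_k(candidates, correct, k):
    """pass@k: is any of the top-k ranked candidate answers correct?"""
    return any(ans == correct for ans in candidates[:k])

def maj_at_k(candidates, correct, k):
    """maj@k: does majority voting over the top-k candidates pick the
    correct answer? Counter.most_common breaks ties by insertion order."""
    winner, _ = Counter(candidates[:k]).most_common(1)[0]
    return winner == correct

# Toy example: one model's ranked answers to one question (truth is "42").
ranked = ["41", "42", "42", "7", "42"]
print(pass_at_k(ranked, "42", 3))  # True: "42" appears in the top 3
print(maj_at_k(ranked, "42", 3))   # True: "42" wins the 3-way vote 2-1
print(maj_at_k(ranked, "42", 1))   # False: the top-1 answer is "41"
```

The gap the paper highlights is visible even in this toy case: the correct answer can be present in the top K (pass@K succeeds) while still losing the vote at small K, and RL mostly moves the maj@K number, not the pass@K one.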

GloamingNiblets | 11 months ago | on: Bored of It

As a counterpoint, if I were to be teleported naked onto an abandoned island 10,000 years ago and could bring one "tool" with me, a solar-powered terminal with an LLM would be my #1 pick. An able-bodied and resourceful individual equipped with an LLM could accomplish far, far more than with any other tool I can think of.

GloamingNiblets | 11 months ago | on: It’s not mold, it’s calcium lactate (2018)

Given our developing understanding of the importance of the human microbiome, which includes fungi (the mycobiome), I personally steer clear of antifungal preservatives in my food.

Just because something has been used since 1955 doesn't mean it's all good.
