gallabytes
|
1 year ago
|
on: Ask HN: Vision Pro owners, are you still using it?
I use it as a fancy monitor that I can strap to my face and that fits in a suitcase. Sometimes I want to work lying down, and it's great for that. Being based on iPadOS makes it kinda useless aside from the display.
gallabytes
|
1 year ago
|
on: Maxtext: A simple, performant and scalable Jax LLM
> Some of this complexity may be necessary for achieving optimal performance in Jax. E.g. extra indirection to avoid the compiler making some bad fusion decision, or multiple calls so something can be marked as static for the jit in the outer call
certainly some of it is, but not the lion's share - I have a much simpler (private) codebase which scales pretty similarly, afaict.
the complexity of Maxtext feels more Serious Engineering ™ flavored, following Best Practices.
gallabytes
|
2 years ago
|
on: Adobe will charge “credits” for generative AI
this is not even close to true
gallabytes
|
2 years ago
|
on: Google “We have no moat, and neither does OpenAI”
I literally just don't feel like running them tbh, and see no reason to publish them either way. Mostly prefer to let the outputs speak for themselves.
For a while I was using an FID variant for evaluation during training, but didn't find it very helpful vs just looking at output images.
gallabytes
|
2 years ago
|
on: The Decade of Deep Learning
OP was about classical statistical techniques. I'm pretty sure human artists are not logistic regression?
gallabytes
|
2 years ago
|
on: The Decade of Deep Learning
lmao no. imagine trying to do text to image with anything other than deep learning. nothing else comes close.
gallabytes
|
3 years ago
|
on: TPU v4 provides exaFLOPS-scale ML with efficiency gains
yeah we trained v5 on TPUs and continue to train on them.
gallabytes
|
3 years ago
|
on: TPU v4 provides exaFLOPS-scale ML with efficiency gains
We didn't, v5 was trained on TPUs too
gallabytes
|
3 years ago
|
on: Imagen: An AI system that creates photorealistic images from input text
... no it definitely wasn't. that's $50m. read the paper - they tell you how long it took on a v4-256, and you know the public rental price for that.
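the arithmetic is easy to check; here is a sketch with purely hypothetical placeholder numbers (the duration and rental rate below are illustrative assumptions, not figures from the paper):

```python
# back-of-envelope training cost: chips * hours * price per chip-hour.
# all numbers below are hypothetical placeholders, NOT from the Imagen paper.
chips = 256               # a v4-256 slice has 256 chips
training_days = 30        # hypothetical duration
usd_per_chip_hour = 3.00  # hypothetical public rental rate

cost = chips * training_days * 24 * usd_per_chip_hour
print(f"${cost:,.0f}")  # → $552,960
```

even with generous placeholder numbers, the order of magnitude comes out in the hundreds of thousands of dollars, nowhere near $50m.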
gallabytes
|
4 years ago
|
on: How “latency numbers everybody should know” decreased from 1990–2020
That wouldn't make sense - compression has very cache-friendly access patterns, and would benefit greatly from the observed improvements in memory bandwidth.
gallabytes
|
4 years ago
|
on: How “latency numbers everybody should know” decreased from 1990–2020
SIMD - compression has gotten faster, but (assuming OP is correct rather than just missing info) the reference algorithm didn't have room to take advantage of SIMD. The relevant improvements since 2010 or so mostly look like bandwidth improvements, not latency improvements, and coincide with the increasing ubiquity of SIMD instructions and SIMD-friendly algorithms.
gallabytes
|
5 years ago
|
on: Hackers tell the story of the Twitter attack from the inside
the real issue is what you get when you google Scott [Lastname]. If the top result is an NYT story, his patients will reliably find it, and that can be a problem.
gallabytes
|
6 years ago
|
on: Facebook's Libra: national currency tokens, a new white paper – what this means
why do you care about p2p? I sent money on PayPal US -> EU last week. it was as easy as sending it to someone in the US, and the fees were no worse than what I often see crypto exchanges charging.
gallabytes
|
6 years ago
|
on: Stages of denial in encountering K
there almost certainly is an encoding such that every program any human will ever write fits in 128 bytes, though I doubt we'll ever design one. to convince yourself of this, notice that you don't expect to ever produce two programs with the same blake2 hash.
there's a lot of room for improvement in conciseness of code. I would still be surprised if it was meaningfully possible to write a full-featured modern OS with one page of APL
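to make the counting argument concrete (a sketch; the population and programs-per-person figures are deliberately generous guesses, not data):

```python
# 128 bytes admit 2**(128*8) = 2**1024 distinct encodings.
# compare against a wildly generous guess at how many programs
# humans will ever write: 10^10 people * 10^6 programs each.
distinct_encodings = 2 ** (128 * 8)  # about 1.8e308
programs_ever = 10**10 * 10**6       # hypothetical upper bound: 1e16

print(distinct_encodings > programs_ever ** 10)  # → True
```

even raising the generous estimate to its 10th power (1e160) doesn't come close to exhausting the 2^1024 available codes - the hard part is designing the encoding, not finding room in it.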
gallabytes
|
7 years ago
|
on: Is Google OAuth Down?
gallabytes
|
9 years ago
|
on: Logical Induction
This makes me think the only thing we disagree on is the meaning of the words "red team" and "blue team" :)
When I say it feels like we spend a lot of time red teaming, that means I think we spend somewhere between 30 and 60% of research time trying to break things and see how they fail.
This is fully compatible with not immediately implementing things - it's much less expensive to break something /before/ you build it.
gallabytes
|
9 years ago
|
on: Logical Induction
I find that perception fairly surprising, as for a very long time it felt like we did more red team than blue team. I do acknowledge that this has been changing recently, but only significantly in the context of building on the results in this paper.
gallabytes
|
9 years ago
|
on: And no new Macs were announced once more
I get about 12 hrs battery life with everything working out of the box on my xps 13 with Ubuntu 16.04 and kernel 4.6.
gallabytes
|
10 years ago
|
on: Relation Between Type Theory, Category Theory and Logic
We have something between the two in HoTT - the universe of types is stratified by homotopy levels, corresponding to how many dimensions of structure a type has. A space with only points (no higher paths) is thus a 0-type, a space with at most one point is a -1-type, and a space with exactly one point is a -2-type.
The catch is that univalence is inconsistent with LEM at h-levels greater than -1, but assuming LEM is perfectly consistent for -1-types, which can be thought of as the "at most true" propositions of classical logic.
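in standard HoTT notation, the stratification is defined by recursion on contractibility (a sketch of the usual definitions, not anything specific to the linked discussion):

```latex
\mathsf{isContr}(A) :\equiv \sum_{a:A}\ \prod_{x:A} (a = x)
  \qquad \text{($A$ is a $(-2)$-type)}

\mathsf{is}\text{-}(n{+}1)\text{-}\mathsf{type}(A) :\equiv
  \prod_{x,y:A} \mathsf{is}\text{-}n\text{-}\mathsf{type}(x =_A y)
```

so a -1-type is one whose identity types are all contractible (a proposition), and a 0-type (a set) is one whose identity types are all propositions.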
gallabytes
|
10 years ago
|
on: Relation Between Type Theory, Category Theory and Logic
A lot of work was going into the cubical model, but IIRC they realized it was a dead end about a month and a half ago. Right now the most promising work looks to be formalizing the set-theoretic model in NuPRL.