rhogar's comments

rhogar | 23 days ago | on: Labor market impacts of AI: A new measure and early evidence

This is very close to the thesis, or at least the theme, of the essays in The Mythical Man-Month by Fred Brooks. Some elements are dated (it was published in 1975), but many feel timeless.

Brooks's law, “Adding manpower to a late software project makes it later,” is just the surface; it's the metaphorical language that has most stuck with me: large systems and teams sinking deeper into the tar pit the harder they struggle against coordination overhead, conceptual integrity in design likened to preserving the architectural unity of Reims cathedral, the surgical-team model and the limits you hit when trying to scale it, etc.

Love a good metaphor, even when its foundation is overextended or out of date. Highly recommend.

rhogar | 2 years ago | on: Show HN: Ragas – Open-source library for evaluating RAG pipelines

Congratulations on the launch! Personally, I'd love to see a rough estimate of the expected number of requests and tokens required to run tasks like synthetic data generation for different amounts of data. This is likely highly variable, but even a loose idea of the likely cost and execution time would help.
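The kind of guidance I have in mind is roughly the sketch below; every number in it is a placeholder assumption of mine, not a figure from Ragas:

    # Back-of-envelope estimator for synthetic test-set generation.
    # All defaults are illustrative assumptions, not Ragas figures.

    def estimate_generation_cost(
        num_documents: int,
        questions_per_document: int = 2,     # assumed
        calls_per_question: int = 3,         # e.g. generate + evolve + critique (assumed)
        tokens_per_call: int = 1_500,        # assumed average prompt + completion size
        usd_per_1k_tokens: float = 0.002,    # depends entirely on the model used
    ) -> dict:
        requests = num_documents * questions_per_document * calls_per_question
        tokens = requests * tokens_per_call
        return {
            "requests": requests,
            "tokens": tokens,
            "approx_cost_usd": round(tokens / 1_000 * usd_per_1k_tokens, 2),
        }

    print(estimate_generation_cost(num_documents=500))
    # {'requests': 3000, 'tokens': 4500000, 'approx_cost_usd': 9.0}

Even a small table of assumed per-document figures like these in the docs would go a long way toward setting expectations.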

rhogar | 2 years ago | on: Stable Diffusion XL technical report [pdf]

The report does not detail the training hardware, though it does state that SDXL has 2.6B parameters in its UNet, compared to 860M for SD 1.4/1.5 and 865M for SD 2.0/2.1, so roughly 3x as many UNet parameters. In January, MosaicML claimed a model comparable to Stable Diffusion 2 could be trained with 79,000 A100-hours in 13 days. Some rough inference can be made from that; I'd be interested to hear someone with more insight here provide more perspective.
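To make concrete the kind of inference I mean, here is a very naive back-of-envelope; the linear-in-UNet-parameters scaling is my assumption and ignores differences in resolution, data, and training schedule:

    # Naive scaling guess: assume training compute grows roughly linearly with
    # UNet parameter count, with a similar data/step budget (a big assumption).
    sd2_unet_params = 865e6      # from the report
    sdxl_unet_params = 2.6e9     # from the report
    sd2_a100_hours = 79_000      # MosaicML's January estimate for an SD2-class model

    scale = sdxl_unet_params / sd2_unet_params            # ~3x
    print(f"~{sd2_a100_hours * scale:,.0f} A100-hours")   # on the order of 240,000

The text encoders, VAE, and higher-resolution training obviously change the picture, so treat that as an order-of-magnitude guess at best.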

rhogar | 2 years ago | on: AudioPaLM: A Large Language Model That Can Speak and Listen

Though the 8B model is almost certainly not capable of near-real-time inference yet, we're approaching Babel fish territory. The main difference, perhaps, being that this one is powered by burning massive amounts of carbon rather than by a fish brain.

rhogar | 2 years ago | on: RoboCat – A Self-Improving Robotic Agent

Performance from so few examples is impressive, and paired with generalization across broader tasks and multiple embodiments + environments (from visual goals alone rather than complex verbal instructions), it's quite a jump from where Gato was last spring. If representative, it seems a strong step toward meaningful autonomous skill acquisition and transfer in realistic settings.