mi3law's comments

mi3law | 11 months ago | on: Ask HN: Who is hiring? (April 2025)

AO Labs | real-time reinforcement learning | https://www.aolabs.ai/ | SF, CA + remote

AO Labs is an applied AI research lab building real-time reinforcement learning, unlocking AI that can be hyper-personalized with far less data than LLMs require.

Our first product is an API that learns by combining end-user context, application data, and a feedback signal (e.g. an end user's likes and dislikes) in a fast learning loop that's lightweight enough to provision a distinct agent per user. There's no gap between training and inference-- train at the edge, learn all the time.
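
(To make that concrete, here's a minimal sketch of what a per-user feedback loop like this could look like. It's a hypothetical illustration, not our actual API -- the PerUserAgent class, the action names, and the bandit-style update rule are all assumptions.)

    import random
    from collections import defaultdict

    class PerUserAgent:
        """One lightweight agent per end user; training and inference are the same loop."""
        def __init__(self, actions, lr=0.1, epsilon=0.1):
            self.q = defaultdict(float)  # (context, action) -> estimated value
            self.actions = actions
            self.lr = lr                 # learning rate
            self.epsilon = epsilon       # exploration rate

        def act(self, context):
            # Mostly exploit what this user has taught us; sometimes explore.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(context, a)])

        def learn(self, context, action, feedback):
            # feedback: +1 for a like, -1 for a dislike.
            # The update applies immediately -- no batch retraining step.
            key = (context, action)
            self.q[key] += self.lr * (feedback - self.q[key])

    # One agent per user, trained at the edge as feedback arrives.
    agent = PerUserAgent(actions=["recommend_a", "recommend_b"])
    chosen = agent.act(context="morning")
    agent.learn("morning", chosen, feedback=+1)  # the user liked it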

With our framework we increase training efficiency and also combine the static pre-trained intelligence of LLMs with continuous training to learn local contexts. AI progress is bottlenecked by backpropagation, which necessitates labeled data to set the ground truth while also leaving a gap between training and inference that results in ever larger, more homogenizing models.

We are hiring for applied researchers & various engineering roles-- please reach out to ali at aolabs.ai

mi3law | 1 year ago | on: Ask HN: Who is hiring? (November 2024)

AO Labs | Applied scientists/researchers & various roles | https://www.aolabs.ai/ | Berkeley, CA + remote

AO Labs is building a more reliable alternative to deep learning and LLMs using continuously trainable, compute-efficient weightless neural networks. We are building AI that can learn after training.
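
(For anyone unfamiliar with the term: weightless neural networks learn by writing into RAM-like lookup tables instead of adjusting weights via gradient descent, so a single example trains them instantly. Below is a toy WiSARD-style discriminator, the textbook example of the family -- a sketch for intuition only, not necessarily our architecture.)

    import random

    class WisardDiscriminator:
        def __init__(self, input_bits, tuple_size, seed=0):
            rng = random.Random(seed)
            idx = list(range(input_bits))
            rng.shuffle(idx)
            # Partition the input bits into random n-bit tuples.
            self.tuples = [idx[i:i + tuple_size]
                           for i in range(0, input_bits, tuple_size)]
            self.rams = [set() for _ in self.tuples]  # each "RAM" stores seen addresses

        def _addresses(self, bits):
            for t, ram in zip(self.tuples, self.rams):
                yield tuple(bits[i] for i in t), ram

        def train(self, bits):
            # One-shot write -- no gradient step, no separate training phase.
            for addr, ram in self._addresses(bits):
                ram.add(addr)

        def score(self, bits):
            # How many RAMs recognize this input.
            return sum(addr in ram for addr, ram in self._addresses(bits))

    # One discriminator per class; predict by the highest score.
    d = WisardDiscriminator(input_bits=8, tuple_size=2)
    d.train([1, 0, 1, 0, 1, 0, 1, 0])
    print(d.score([1, 0, 1, 0, 1, 0, 1, 0]))  # 4: fully recognized
    print(d.score([0, 1, 0, 1, 0, 1, 0, 1]))  # 0: unseen pattern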

We're a community of developers and researchers building general intelligence from the bottom up, and we are making space for collaborators at all levels --hackers, contributors, the curious (some of whom we've hired already). Get in touch: ali at aolabs.ai and I'll share some demos.

With our framework we increase training efficiency and also combine the static pre-trained intelligence of LLMs with continuous training to learn local contexts. AI progress is bottlenecked by backpropagation, which necessitates a human in the loop to set the ground truth while also leaving a gap between training and inference that results in ever larger, more homogenizing models.

* If you reached out to our previous post here, please email me again and we'll get back to you first. Our situation as a startup has changed somewhat, hence the delayed response.

mi3law | 1 year ago | on: Ask HN: Who is hiring? (July 2024)

AO Labs | Building an alternative to backpropagation | https://www.aolabs.ai/ | Berkeley, CA + remote

AI systems struggle with edge cases and understanding local context despite increasing model sizes. From our research at UC Berkeley into the evolution of intelligence from simple organisms, we've discovered the missing link is continuous learning (deep learning is pre-trained by design). Models built with our framework learn through customizable parameters similar to animal instincts, allowing for AI grounded with built-in memory and reasoning. We're a community of 160+ developers and researchers building general intelligence from the bottom up, with members from places like Berkeley, NYU, Imperial College, and Google.

We're building way outside of the current paradigm and we're looking for collaborators at all levels --hackers, contributors, the curious-- as we'll be making our first hires soon. Email with "HN Hiring" in the subject line to: ali at aolabs.ai or chat with us in our Discord: https://discord.gg/Zg9bHPYss5

This post is nearly identical to mine from last month; if you reached out then, please know that I'll respond to you soon (I've been busy wrapping up a fundraise).

mi3law | 1 year ago | on: Are animals conscious? New research

What you explain here also explains the current problems in AGI research. *sigh* Humans keep thinking that reality, like the sun once did, revolves around them.

mi3law | 1 year ago | on: Ask HN: Who is hiring? (June 2024)

AO Labs | Building an alternative to backpropagation | https://www.aolabs.ai/ | Berkeley, CA + remote

AI systems struggle with edge cases and understanding local context despite increasing model sizes. From our research at UC Berkeley into the evolution of intelligence from simple organisms, we've discovered the missing link is continuous learning (deep learning is pre-trained by design). Models built with our framework learn through customizable parameters similar to animal instincts, allowing for AI grounded with built-in memory and reasoning. We're a community of 160+ developers and researchers building general intelligence from the bottom up, with members from places like Berkeley, NYU, Imperial College, and Google.

We're building way outside of the current paradigm and we're looking for collaborators at all levels --hackers, contributors, the curious-- as we'll be making our first hires soon. Reach out: ali at aolabs.ai

mi3law | 2 years ago | on: Coqui Studio and API are shutting down

Very strange to see no comment on this indeed!

If any Coqui users need managed support, I'm with LMNT.com and we're happy to help-- email me ali @ lmnt.com

mi3law | 2 years ago | on: Three senior researchers have resigned from OpenAI

Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT itself, so its performance involves associations between data, not associations between data and its own experience of that data.

Compare ChatGPT to a dog-- a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (i.e. if you associate a treat with "sit," you'll have a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can agree on them together, to certain degrees, through shared linguistic corpora. In fact, those corpora are born out of our experiences, our individual understandings, and that's a one-way arrow, so something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.

mi3law | 2 years ago | on: OpenAI's board has fired Sam Altman

I totally agree.

They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.

mi3law | 2 years ago | on: Three senior researchers have resigned from OpenAI

My basis for these claims is my research career, work described so far at aolabs.ai; still very much in progress, but from what I've learned I can respond to the two claims you're poking at--

1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, more high-level, has a subjective understanding of its own that another agent can reliably come to trust. I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.

2) This connects to the first claim. Humans and animals of course aren't infinitely "smart"; we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.

So my claim is really one claim: that AI cannot perform the same tasks as a human, or reach the "true" intelligence level of a human in the sense of not hallucinating like GPT, without having a subjective experience of its own.

There is no answer or understanding "out there;" it's all what we experience and come to understand.

This is my favorite topic. I have much more to share on it, including working code, though at the level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).

mi3law | 2 years ago | on: Three senior researchers have resigned from OpenAI

FTX / crypto, which just imploded last year.

Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is itself so far away from product-market fit that people are playing off the novelty of AGI / GPT being on its way to "smarter than human" rather than any real usage.

Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of overhype.

mi3law | 2 years ago | on: Three senior researchers have resigned from OpenAI

It can be useful in certain contexts, most certainly as a code co-pilot, but that, and your and others' usage, doesn't change the fundamental mismatch between the limits of this tech and what Sam and others have hyped it up to do.

We've already trained it on all the data there is; it's not going to get "smarter," and it'll always lack true subjective understanding, so the overhype has been real, indeed to bubble levels as per OP.

mi3law | 2 years ago | on: Ilya Sutskever "at the center" of Altman firing?

Not quite accurate.

OpenAI is set up in a weird way where nobody has equity or shares in the traditional C-Corp sense; instead they have Profit Participation Units, an alternative structure I presume they concocted when Sam joined as CEO or when they first fell into bed with Microsoft. Now, does Sam have PPUs? Who knows?

mi3law | 2 years ago | on: OpenAI's board has fired Sam Altman

My friend, I agree with you that the source was likely a fundamental or philosophical difference. The lie that I was calling out is that AGI/superintelligence is "the most important invention," and that's the philosophical difference I hope the board had with Sam.

There really is no evidence at all that AGI/superintelligence is even possible, let alone that it's as important as Sam has been shilling.

mi3law | 2 years ago | on: OpenAI's board has fired Sam Altman

I think my point is different than what you're breaking down here.

The only way that OpenAI was able to sell MS and others on the 100x-capped non-profit and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path-- a perfect salesman.

But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid"-- that's my claim.

Speaking as a technologist actually building AGI up from animal levels following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered past the edge of reality into lies.
