mi3law's comments

mi3law | 2 years ago | on: OpenAI's board has fired Sam Altman

My theory as a pure AGI researcher-- it's because of the AGI lies OpenAI was built on, largely due to Sam.

On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?

Sam recently admitted that for OpenAI to achieve AGI they "need another breakthrough," so my guess is that this lie is what cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.

Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...

mi3law | 2 years ago | on: Ask HN: Who is hiring? (August 2023)

We're looking for people to join our founding team; those roles are not as formally defined hence the lack of a careers page. Contact info is ali @ aolabs.ai

mi3law | 2 years ago | on: Ask HN: Who is hiring? (August 2023)

AO Labs | Non-technical cofounders + founding team | https://www.aolabs.ai/ | Berkeley, CA + remote

We've built a technical solution to the AI hallucination problem in the form of AI Agents that can be trained locally and continuously, grounding themselves in local context. We're building this as the next inevitable layer in the AI stack, a way to get to per-user training and per-user accuracy.

In less technical speak, we're building AGI from the bottom-up using an alternative to backpropagation, starting from the simplest animal levels. In business speak, we have an AI Agents-as-a-Service API to enable per-user accuracy and training.

Fundraising over the past 6 months for anything in AI that's not genAI has been difficult. Still, nobody can do what we can in AI. Not even close. Maybe we'll be a research project for a while longer; in the meantime, we're especially looking for business-minded people to help us move forward.

mi3law | 2 years ago | on: AI and the Frontier Paradox

No, I don't see this as accurate. A body has a whole host of intelligence built into it, and can even learn (more akin to habituation). The underlying infra which you are suggesting could represent a body of sorts for AGI completely lacks this type of intelligence. And it's an open question how much general intelligence is itself functionally predicated on lower forms of intelligence such as that found in bodies.

mi3law | 2 years ago | on: AI and the Frontier Paradox

I mean to emphasize that AGI is being built only in the shape of mind, as if mind is separate from body, which is clearly not the case in our human experience of general intelligence.

mi3law | 2 years ago | on: AI and the Frontier Paradox

Excellent observation. The single-threaded effort dominating AI today (i.e. the base assumption that OpenAI can scale GPT up as it is today into AGI) is what's causing the bottleneck.

Assuming AGI can be built as 1 system is assuming that mind is separate from body, which is the old dualist idea we've outgrown in our own awareness, but somehow not when it comes to AI. We've been growing AI in that direction at https://www.aolabs.ai/

mi3law | 2 years ago | on: Locally trained AI Agents for network device discovery

Pre-trained systems dominate AI today (as deep as the P in GPT). My team and I have been researching and building alternatives given how the hallucination, blackbox and other problems don't seem solvable within* the current paradigm.

One of our first applications is a network automation solution for Netbox-- locally trained AI Agents, unique to each Netbox account, that predict the roles of newly added devices given the current local list of devices, like a context-aware autocomplete.

Agents are lightweight by design; this particular Netbox Agent has 40 neurons, and when hooked up to demo.netbox.dev it consistently gets 80%+ accuracy predicting device roles, even when trained on only ~60 devices.
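To make the idea concrete, here is a minimal sketch of per-account, locally trained role prediction. This is an illustrative assumption, not AO Labs' actual agent (which uses its own non-backprop architecture): a tiny token-count model over device names that is retrained continuously as each new device is labeled, so every account's predictor is grounded only in its own local device list.

```python
# Hypothetical sketch: a per-account role predictor trained continuously
# on the local device list, predicting roles for newly added devices.
from collections import defaultdict

class LocalRolePredictor:
    def __init__(self):
        # token -> role -> count, built up only from this account's devices
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, name, role):
        # Incremental, local training: one labeled device at a time
        for token in name.lower().replace("-", " ").split():
            self.counts[token][role] += 1

    def predict(self, name):
        # Score each known role by overlap with the new device's name tokens
        scores = defaultdict(int)
        for token in name.lower().replace("-", " ").split():
            for role, count in self.counts[token].items():
                scores[role] += count
        return max(scores, key=scores.get) if scores else None

agent = LocalRolePredictor()
agent.train("core-switch-01", "switch")
agent.train("edge-router-01", "router")
agent.train("core-switch-02", "switch")
print(agent.predict("core-switch-03"))  # prints "switch"
```

The point of the sketch is the shape of the system, not the model: training happens inside one account's context, so the same device name can legitimately get different predictions in different accounts.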

Try it out: https://aolabs-netbox.streamlit.app/

You can use dummy data from demo.netbox.dev. More on Netbox: https://github.com/netbox-community/netbox

We'd be keen for feedback: whether this is useful, how we could extend it if so, or whether it sparks other application ideas.

* Somebody has to be the pre-trainer, leaving an irreducible gap of misunderstanding between AI and its application which we are trying to diminish by adding a layer of local training.

mi3law | 2 years ago | on: Ask HN: Who is hiring? (April 2023)

AO Labs | https://www.aolabs.ai/ | Founding team | Berkeley, CA + remote

Coming up with objectives or deciding what is appropriate (i.e. what to backpropagate against) is a function of intelligence not accounted for in current AI design.

We've built this in code, a form of AGI at animal levels using (naturally) an alternative to backpropagation. Get in touch: ali at aolabs.ai.

mi3law | 4 years ago | on: Not everyone should meditate

>> I'm not sure exactly right now what this refers to as fifth state of consciousness, is it the eight circuit model of consciousness?

No, the fifth state here is an interpretation of the Gurdjieff system and the associated Fourth Way. Loosely, the fifth state is equivalent to enlightenment (satori). The specific quote is from a book by Robert de Ropp called "The Master Game."

I haven't looked into Leary's eight circuit model in years. Thank you for reminding me of it. From your description of your current state, it sounds like you've found utility in that particular framework?

mi3law | 4 years ago | on: Not everyone should meditate

I'm sorry you've had such difficult experiences. Does the following excerpt resonate with you? If so, I'd be happy to share more.

"This comment is extremely important and should be borne in mind by all who feel tempted to dabble with the psychedelic experience without knowing what they are doing or why. He who enters the fifth state of consciousness without preparation may be spiritually paralyzed by his experience. He has seen too much too soon and, as a result, all games become meaningless. He cannot play the life games that satisfy men in the third state of consciousness. He cannot play the Master Game because he knows nothing about it and has no teacher. So he becomes, like Daumal's "leaf in the wind," an even more helpless plaything of external forces than he was before his rash experiment."

mi3law | 4 years ago | on: Not everyone should meditate

Properly read, Plato's allegory of the cave in the Republic is a response to the question you've articulated. Kant noted as much, and when not resolved with dualism (which flowed from Plato through Descartes to the modern day), this allegory offers powerful insights. I love these topics, have been swimming in them recently, so happy to talk more, just email me (info in profile).

mi3law | 4 years ago | on: Ludwig Wittgenstein: A Mind on Fire

Thank you for sharing this! You should post it to HN as its own post-- I work to classical music and this is a very interesting list that others would enjoy, too.

mi3law | 4 years ago | on: Schizophrenia linked to marijuana use disorder is on the rise, study finds

Thank you for sharing your experience and I'm sorry for your losses.

I'm struck by your story and would like to connect, please. I have a similar experience and I'm working on a research project in AI that is related to this. Please drop me a note at aee at berkeley edu; I couldn't find your contact info in your profile.
