facu17y's comments

facu17y | 1 year ago | on: Safe Superintelligence Inc.

How can they speak of Safety when they are based partly in a colonialist settler entity that is committing genocide and wants to exterminate the indigenous population to make room for the Greater Zionist State?

I don't do business with Israeli companies while Israel is engaged in the mass extermination of a human population it treats as dogs.

facu17y | 2 years ago | on: What We Need Instead of "Web Components"

Browser vendors are (or should be) managing the abstractions for their own needs, with developer needs expected to be met by framework/library developers.

Who says web components are meant for use directly by the developer? Maybe they're primarily meant for browser developers (those who build browser features), not for direct use by web app developers.

facu17y | 2 years ago | on: Building AI without a neural network

Well, if it's so useless, why is it on the HN front page? Are there "PR" companies behind promoting items to the HN front page? I'm sure there are, because sometimes an article like this comes up at #3 and everyone says it has no substance, is clickbait, etc.

facu17y | 2 years ago | on: If 95% doesn't count as a vote of no confidence, what number would?

"Four years ago, Altman’s mentor, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported."

I guess he had a change of heart about Sam because ... ?

facu17y | 2 years ago | on: Sam Altman, Greg Brockman and others to join Microsoft

Sam didn't create the breakthroughs behind the current GPT.

He did not create the breakthroughs behind the next GPT.

None of the people who may follow have the same handle on the tech as Ilya. I mean, they built up Ilya's image in our minds so much, as a one-of-a-kind genius (or maybe Musk did that), and now we are to believe that his genius doesn't matter, that Microsoft already knows how to create AGI, and that OpenAI is no longer relevant?

Or did I get it wrong?

facu17y | 2 years ago | on: OpenAI's board has fired Sam Altman

What did Sam Altman hide from his board that caused his firing as CEO of OpenAI?

1) That LLMs cannot generalize outside of the _patterns_ they pick up during training (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)?

2) That every time you train a new model, at potentially very high expense, you have no idea what you're going to get: generally better, but potentially with bigger reliability challenges? LLMs are fundamentally unreliable and unstable in any use case besides chat apps, especially when the models keep being tweaked, updated, and deprecated. No one can build on shifting sands.

3) That GPT-4 Turbo regressed on code generation performance and the 128K window is only usable up to 16K (for me, in use cases more complicated than Q&A over docs, I found 1.2K to be the max usable window; that's roughly 100X less than advertised)?

4) That he priced GPT-4V at a massive loss to crush the competition?

5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers and forcing a halt to sign-ups, even with a $29B investment riding on the growth of the user base?

It could be any one of the above, or none of the above.

No one knows... but the board... and Microsoft, which has 49% control of the board.

facu17y | 2 years ago | on: Establishment of the U.S. Artificial Intelligence Safety Institute

"Despite the increasing complexity and capabilities of machine learning models, they still lack what is commonly understood as "agency." They don't have desires, intentions, or the ability to form goals. They operate under a fixed set of rules or algorithms and don't "want" anything.

Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.

Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.

In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."

The above is from the horse's mouth (ChatGPT-4).

My commentary:

We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system of roughly 10K neurons (vs. 100B in humans) and nothing resembling a brain. We have not yet been able to replicate the agency present in even a simple nervous system.

I would say even an amoeba has more agency than a $1B+ OpenAI model: the amoeba can feed itself and multiply far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.

What is my point?

We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on it, have a healthy amount of concern, and make sure we're on top of it, but we are clearly jumping the gun, since the AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to their lack of agency (as unitary agents and as a collective).

facu17y | 2 years ago | on: Progress on No-GIL CPython

Mojo was mature enough for someone I know in the community to port their Python port of llama2 to it. Also, others pointed out different languages they would rather use.

The rationalization in your response obfuscates the real reason you were triggered to downvote, which is that you are too emotionally invested in Python and afraid to try a better alternative.

facu17y | 2 years ago | on: Progress on No-GIL CPython

This downvoting to -3 is illustrative of how HN downvotes are about territorial warfare, ego, etc. It has nothing to do with logic. You don't want to port your Python code to no-GIL Python? I'm gonna downvote you. WTF is wrong with you people.