facu17y's comments
facu17y | 1 year ago | on: Safe Superintelligence Inc.
facu17y | 2 years ago | on: LoRA from scratch: implementation for LLM finetuning
facu17y | 2 years ago | on: NY Times copyright suit wants OpenAI to delete all GPT instances
facu17y | 2 years ago | on: What We Need Instead of "Web Components"
Who says web components are meant for direct use by web app developers? Maybe they're primarily meant for browser developers (those who build browser features), not for web app developers.
facu17y | 2 years ago | on: Building AI without a neural network
facu17y | 2 years ago | on: If 95% doesn't count as a vote of no confidence, what number would?
I guess he had a change of heart about Sam because ... ?
facu17y | 2 years ago | on: Sam Altman, Greg Brockman and others to join Microsoft
He did not create the breakthroughs behind the next GPT.
None of the people who may follow have the same handle on the tech as Ilya. I mean, they built up Ilya's image in our minds so much, as a one-of-a-kind genius (or maybe Musk did that), and now we're supposed to believe that his genius doesn't matter, that Microsoft already knows how to create AGI, and that OpenAI is no longer relevant?
Or did I get it wrong?
facu17y | 2 years ago | on: OpenAI negotiations to reinstate Altman hit snag over board role
facu17y | 2 years ago | on: OpenAI's board has fired Sam Altman
1) That LLMs cannot generalize outside of _patterns_ they pick up during training? (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)
2) That every time you train a new model, at potentially very high expense, you have no idea what you're going to get. Generally better, but potentially with bigger reliability challenges. LLMs are fundamentally unreliable and unstable in any use case besides chat apps, especially when they keep tweaking and updating the model and deprecating old ones. No one can build on shifting sands.
3) That GPT-4 Turbo regressed on code-generation performance, and that the 128K window is only usable up to 16K (for me, in use cases more complicated than Q&A over docs, I found 1.2K is the max usable window; that's roughly 100X less than advertised).
4) That he priced GPT-4V at a massive loss to crush the competition.
5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers and forcing a halt to sign-ups, even with a $29B investment riding on the growth of the user base. Any one of the above, or none of the above.
No one knows... but the board... and Microsoft, who has 49% control of the board.
facu17y | 2 years ago | on: Pix2tex: Using a ViT to convert images of equations into LaTeX code
I fed the equation image (a screenshot of the right frame from their gif, cropped) into ChatGPT (GPT-4V) and it correctly deciphered the equation and gave the correct LaTeX code.
Why was the repo removed?
facu17y | 2 years ago | on: Establishment of the U.S. Artificial Intelligence Safety Institute
Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.
Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.
In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."
The above is from the horse's mouth (ChatGPT4)
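The constrained-objective point in the quote above can be sketched in a few lines. This is a hypothetical toy, not anyone's actual training code: the "learning" loop tunes a parameter against an objective the human operator fixed up front, and the model has no mechanism to pick a different objective.

```python
# Toy sketch (hypothetical): a feedback loop that "learns" only within
# an objective fixed by the human operator.

def human_set_objective(prediction, target):
    # The operator chose squared error; the model cannot change this.
    return (prediction - target) ** 2

def train(target=3.0, lr=0.1, steps=100):
    w = 0.0  # the model's single "parameter"
    for _ in range(steps):
        grad = 2 * (w - target)  # gradient of the fixed objective w.r.t. w
        w -= lr * grad           # optimize toward the operator's goal
    return w

print(round(train(), 4))  # prints 3.0: converges to the operator-set target
```

Whatever the loop does, it only ever moves toward the target its creators wrote into the objective; any drift away from it is a bug, not agency.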
My commentary:
We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system comprising roughly 10K neurons (vs ~100B in humans) and no brain at all. We have not yet been able to replicate the agency present in even a simple nervous system.
I would say even an amoeba has more agency than a $1B+ OpenAI model: the amoeba can feed itself and grow in numbers far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.
What is my point?
We're jumping the gun with these regulations. That's all I'm saying. Not that we shouldn't keep an eye on it, maintain a healthy amount of concern, and make sure we're on top of it, but we are clearly jumping the gun, since the AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode), due to the AI's lack of agency (as a unitary agent and as a collective).
facu17y | 2 years ago | on: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
facu17y | 2 years ago | on: Progress on No-GIL CPython
The rationalization in your response obfuscates the real reason you were triggered to downvote, which is that you are too emotionally invested in Python and afraid to try a better alternative.
facu17y | 2 years ago | on: PaLI-3 Vision Language Models
facu17y | 2 years ago | on: Microsoft is reportedly losing lots of money per user on GitHub Copilot
facu17y | 2 years ago | on: TinyML and Efficient Deep Learning Computing
I don't do business with Israeli companies while Israel is engaged in mass Extermination of a human population they treat as dogs.