mathinaly's comments

mathinaly | 1 year ago | on: Meta Launches AI Studio in US

It's an addictive behavioral loop. Lots of social media platforms are the same. There is very little value users actually get out of it, because the algorithms are designed to manipulate them into clicking on ads, since that's how the platforms make money.

mathinaly | 1 year ago | on: Meta Launches AI Studio in US

Maybe not for you, but plenty of other people do spend a lot of time on Meta's digital properties. The human/AI avatars will be another engagement-maximizing feature for a lot of accounts, and I'm certain it will increase revenue for Meta.

mathinaly | 1 year ago | on: Meta Launches AI Studio in US

Creators, celebrities, and engagement farmers will use it to increase engagement and generate more revenue for themselves and Meta. You are forgetting Meta is a for-profit company whose goal is to increase quarterly profits by increasing the amount of time people spend on the platform.

mathinaly | 1 year ago | on: Meta Launches AI Studio in US

This is for people who want to farm for engagement and then convert that engagement into monetary profits. This is where all social media and many online platforms are headed. Any platform where the goal is engagement will eventually end up saturated with AI avatars that are trying to trick people into buying stuff or clicking on links to sign up for stuff so that the original account can get some referral bonus on some blockchain.

You're thinking about this in terms of what value it's going to deliver to you personally, but that's not the goal here. The goal is to keep people engaged; that's always Meta's #1 priority, because the more time people spend on its platform, the more revenue Meta can generate by showing them ads. So if "creators" opt into using AI avatars, the people who follow those creators will habituate themselves to interacting with the human/AI hybrid, and if the behavior is addictive enough, that will increase engagement. Regular engagement-farming accounts can only interact for so many hours a day (they eventually have to sleep), but these AI/human hybrid accounts can interact with everyone 24 hours a day, 7 days a week, across geographic boundaries, and in any language.

mathinaly | 1 year ago | on: An approach to the fundamental theory of physics

I wasn't providing an argument to convince anyone of anything. Study the mathematics, and if you have a way of making progress in constructing better physical theories based on Wolfram's foundations, then more power to you. In general, talk is cheap and the proof is in the pudding. Wolfram never provides testable predictions for possible experiments that could validate his theories. He is mostly theorycrafting with rewrite systems and hoping something useful comes out; it's a lot like an evolutionary search over the space of possible rewrite systems to make some nice-looking graphs. Whatever he's doing is not science in any meaningful sense of the word, because there are no predictive, falsifiable experiments based on his theories.

mathinaly | 1 year ago | on: LeanDojo: Theorem Proving in Lean Using LLMs

Because meta-mathematical proofs often use transfinite induction and the associated "non-constructive" and "non-finitistic" arguments. The diagonalization argument itself is an instance of something that cannot actually be implemented on a computer, because constructing the relevant function in finite time is impossible. Computers are great, but when people say things like "the human mind is software running on the brain like a computer," that indicates to me they are confused about what they're trying to say about minds, brains, and computers. Collapsing all those different concepts into a Turing machine is what I mean by a confused ontology.

In any event, I'm dropping out of this thread since I don't have much else to say on this and it often leads to unnecessary theorycrafting with people who haven't done the prerequisite reading on the relevant matters.

mathinaly | 1 year ago | on: LeanDojo: Theorem Proving in Lean Using LLMs

No computer has ever discovered the concept of a Turing machine or the associated halting problem (and the closely related incompleteness theorems). If you think a search in an axiomatic system can discover an incompleteness result, it is because your ontology about what computers can do is confused. People are not computers.

mathinaly | 1 year ago | on: An approach to the fundamental theory of physics

That's a good example and demonstration. Unitary invariance basically requires that the norm of the vector is preserved, so if we start with a unit vector, unitary evolution will always keep it a unit vector. This is not the case for arbitrary programs, because they don't have to preserve any invariants, which makes them ill-suited for physical theory building. This is why Wolfram's approach is too open-ended: hypergraph evolution is far too lax a framework for describing physical reality and conforming to existing experimental results.
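To make the norm-preservation point concrete, here is a minimal numerical sketch (my own illustration, not from the thread), using `numpy`: repeated application of a unitary map (here, a real rotation) keeps a unit vector at norm 1, while an arbitrary linear map is free to grow or shrink it.

```python
import numpy as np

# A real 2x2 rotation matrix is unitary (U^T U = I),
# so it preserves the norm of any state vector it acts on.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

psi = np.array([1.0, 0.0])          # a unit state vector
for _ in range(1000):               # repeated "unitary evolution"
    psi = U @ psi

print(np.linalg.norm(psi))          # stays 1.0 up to float rounding

# Contrast: an arbitrary linear map has no such invariant.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
phi = np.array([1.0, 0.0])
for _ in range(10):
    phi = M @ phi

print(np.linalg.norm(phi))          # has grown to 1024.0
```

The same contrast holds for programs generally: nothing in the definition of an arbitrary update rule forces any quantity to stay fixed.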

mathinaly | 1 year ago | on: LeanDojo: Theorem Proving in Lean Using LLMs

It's possible that the hypothesis is independent of the existing axiomatic systems for mathematics, and a computer can't discover that on its own; it will loop forever looking for a proof that will never show up in the search. Computers are useful for doing fast calculations, but attributing intelligence to them beyond that is mostly a result of confused ontologies and metaphysics about what computers are capable of doing. Computation is a subset of mathematics and can never actually be a replacement for it. The incompleteness theorems, for example, are meta-mathematical statements about the limits of axiomatic systems that cannot be discovered with axiomatic systems alone.
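The "loop forever" point can be illustrated with a toy rewrite system (a sketch of my own, using Hofstadter's MIU puzzle, not anything from the thread): "MU" is underivable from "MI" because every derivable string has an I-count not divisible by 3, but a naive proof search has no access to that meta-level invariant and simply keeps searching, so in practice we can only cut it off at some depth.

```python
from collections import deque

# Hofstadter's MIU system: start from "MI" and apply rewrite rules.
# "MU" is famously underivable (the number of I's is never divisible
# by 3), but a blind proof search cannot know that; it just keeps going.

def successors(s):
    out = set()
    if s.endswith("I"):                 # rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):         # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def search(start, goal, max_depth):
    """Breadth-first proof search, cut off at max_depth rewrite steps."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        if s == goal:
            return True
        if d < max_depth:
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, d + 1))
    return False  # not found within the depth bound

print(search("MI", "MIU", 2))   # True: "MIU" is derivable in one step
print(search("MI", "MU", 6))    # False at any depth; we just can't prove it
```

Raising `max_depth` never changes the second answer; recognizing *why* requires the mod-3 invariant, which lives outside the rewrite system itself.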

mathinaly | 1 year ago | on: An approach to the fundamental theory of physics

The paradigm he's using is too open-ended. In quantum mechanics the mathematics is based on Hilbert spaces and unitary evolution of state vectors. You might ask why this is the case, and it is because of conservation principles: unitary evolution preserves the "information" in the state vector throughout its physical evolution. This is not the case for Wolfram's theories. There are no conservation principles in cellular automata unless you explicitly force the evolution of the automaton to preserve the relevant information.

More generally, most computational theories of physics are much too lax about the relevant conservation principles, and that is why his theory does not predict anything. Turing machines specifically are not required to preserve anything about the initial state, so information can be destroyed and created ex nihilo, violating the principle that matter and energy are conserved. The equations have to balance out at the beginning and the end: whatever you start with cannot be greater or less than what you end with (at least in physics).
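As a small illustration of the conservation point (my own sketch, not from the comment): a typical elementary cellular automaton rule, like rule 30, freely creates and destroys "live" cells, while rule 184 (the "traffic" rule) happens to conserve their count. That conserved count is exactly the kind of invariant a physical theory needs built in, and generic rules simply don't have one.

```python
# Elementary CA on a ring: each cell's next value is looked up from the
# Wolfram rule number using the (left, center, right) neighborhood.

def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1]

s30, s184 = state[:], state[:]
for _ in range(5):
    s30 = step(s30, 30)     # rule 30: no conserved quantity
    s184 = step(s184, 184)  # rule 184 ("traffic"): number-conserving

# Rule 184 keeps the live-cell count equal to the initial count;
# rule 30's count drifts freely from step to step.
print(sum(state), sum(s30), sum(s184))
```

Rule 184 conserves the count because it just moves each 1 rightward into an empty cell, like cars in traffic; nothing analogous constrains rule 30, or hypergraph rewriting in general.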