top | item 47211701


goatlover | 15 hours ago

The article asks the same question in the last part, wondering whether it's just randomly selected. MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment. The math never says entanglement destroys superposition beyond a certain point of complexity (many different entangled systems forming the environment).

The author does say the approach is a combination of Copenhagen and MWI, removing the outlandish parts of both. Seems to preserve the randomness of the former though.


Joker_vD | 14 hours ago

> MWI proponents have always argued decoherence leads to the entire world being put into superposition as decoherence just spreads entanglement to the environment.

Well, duh. It's not like classical objects actually exist, or the classical/quantum divide: everything is quantum, including the "observers". The "classical observer" is a crude approximation that breaks down under a pointed enough question. Just like shorting a perfect battery (with zero internal resistance) with a perfect wire (with zero resistance) — this scenario is not an approximation of any possible real scenario, so its paradoxicality (infinite current!) is irrelevant.

flockonus | 14 hours ago

Random is a very interesting concept. In relation to nature, we seem to use "random" for anything we can't, or are currently unable to, model.

To call something random doesn't mean it's impossible to model; in fact, all sorts of natural phenomena seemed random until they were captured by a model. One very relatable example is the motion of the planets in the night sky, which seemed random for ages, until the Copernican revolution.

The fact that we have access to a random() function in programming seems to trip many people up. random() is a particular model implementation of randomness, but stuff in nature isn't random().
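The point can be made concrete with a short sketch: a programming-language random() is typically a seeded pseudo-random generator, i.e. a fully deterministic model of randomness. (This is illustrative Python, not anything from the article.)

```python
import random

# A seeded PRNG is a deterministic *model* of randomness:
# the same seed always reproduces the same "random" sequence.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

# Identical output — nothing here is random in the physical sense.
assert first_run == second_run
```

Nothing in the machine "chooses" anything; the apparent randomness is just a model we find useful.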

My point is, using "just random" to do work in any scientific explanation is a crutch.

Maxatar | 14 hours ago

In science randomness is usually used to abstract over a large number of possible paths that result in some outcome without having to reason individually about any specific path or all such paths.

It does not have to mean something inherently non-deterministic or something that can't be modelled, although it certainly is the case that if something is inherently non-deterministic then it would necessarily have to be modelled randomly. Modelling things as a random process is very useful even in cases where the underlying phenomenon is deterministic and fully specified; a simple example of this would be chess. It's an entirely deterministic game of perfect information whose rules are completely known, but nevertheless the best chess engines model positions probabilistically and use randomness as part of their search.
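A minimal illustration of randomness abstracting over many deterministic paths (my own toy example, not from the thread): the fraction of derangements among permutations of 8 items is a fixed, exactly computable number (~1/e), yet random sampling recovers it without reasoning about any individual permutation or enumerating all 40320 of them.

```python
import math
import random

def is_derangement(perm):
    # Deterministic check: no element remains in its original position.
    return all(p != i for i, p in enumerate(perm))

def sample_derangement_rate(n=8, trials=50000, seed=0):
    # Random sampling abstracts over all n! concrete orderings
    # without examining any specific one of them individually.
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        rng.shuffle(items)
        hits += is_derangement(items)
    return hits / trials

estimate = sample_derangement_rate()
# The exact answer is deterministic and known in closed form (~1/e ≈ 0.368);
# the random model gets close without touching every path.
assert abs(estimate - 1 / math.e) < 0.02
```

The underlying question has nothing non-deterministic about it; randomness here is purely a modelling tool, which is Maxatar's point about chess engines.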

staticassertion | 14 hours ago

There's disagreement on this. You seem to just be saying that brute facts or brute contingencies don't exist, but I suspect most scientists would disagree with that.

Nevermark | 13 hours ago

I am not sure why you are being downvoted.

The use of "random" as explanation or characterization in science has certainly spanned everything from "we don't know", to "there is inherent indivisible physical randomness".

And I would agree that in the latter case it is a crutch: a postulate that something gets decided by no mechanism whatsoever (randomness obeying a distribution still leaves the "choice" itself unexplained).

It is remarkable that people still suggest the latter, when the theory, in both its formalism and its experimental record, doesn't require a physical choice at all (even if we experience a choice, that experience can be explained without the universe making one).