sxp|2 months ago
For those who want a skeptical & cynical view: if remote viewing worked, it would be part of the standard strategy of every hedge fund. Remember that these are groups who pay millions for millisecond advantages in information. And you only need a ~51-55% success rate to make a killing in HFT (vs the 50% success rate of a coin flip). The fact that hedge funds don't have remote viewers on staff is evidence against RV providing utility greater than an RNG.
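To make the edge arithmetic concrete (purely illustrative numbers; real trading involves fees, slippage, and variable payoffs, all ignored here): on even-odds bets, a 53% hit rate has a positive expected value that compounds over many trades, while 50% yields exactly zero.

```python
# Sketch: why a small edge over a coinflip compounds.
# Illustrative numbers only, not a model of real HFT economics.
p_win = 0.53          # hypothetical success rate, vs 0.50 for chance
payoff = 1.0          # win or lose one unit per even-odds trade
n_trades = 10_000

# Expected profit per trade: p*payoff - (1-p)*payoff
edge_per_trade = p_win * payoff - (1 - p_win) * payoff
expected_profit = n_trades * edge_per_trade

print(round(edge_per_trade, 2), round(expected_profit, 2))  # 0.06 600.0
```

At 50% the edge term is zero and expected profit vanishes, which is why even a few percentage points above chance would be worth a great deal to a fund.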
And for curious people who want to try a scientific approach, I suggest joining https://www.social-rv.com/ which is collecting data about RV and trying to make the experiment ironclad via blockchain authentication of predictions.
LargoLasskhyfv|2 months ago
And lastly, a simple inability by most people to perceive that and other ESP/psi phenomena, maybe akin to so-called aphantasia in people who can't visually imagine things.
Edit: Also Weapons of Class Disruption. Can't have that, ever.
krackers|2 months ago
The popular ones on the "explore sessions" page are a very close match, but if you look at other predictions by those same accounts, they're far less convincing. It's very easy to form a connection between any two images if you allow abstracted forms of similarity, and fundamentally there are very limited themes when it comes to images (natural vs. man-made things, smooth vs. sharp forms).
A good control test might be to have LLMs produce output instead, and score that.
krackers|2 months ago
And this is worsened by the fact that the LLM-based auto-scoring explicitly uses the last 10 as decoy targets:
>When you submit a session, the system collects your last 10 targets (including the current target) to create a pool of possible matches. A multimodal AI agent is presented with your complete session (including all drawings, text, and data) along with all 10 targets from the pool. The agent is instructed to analyze and rank the targets based on how well they match the session content.
The protocol otherwise seems good, but the specific carveouts here would seem to bias results.
The source for the judging is at https://github.com/Social-RV/comparative-judging, which is the part that would need to be studied carefully. At first glance, it exposes raw filenames to the LLM, which might bias things. The ranking logic also seems a bit sketchy: it does a tournament-style elimination that I haven't analyzed thoroughly, but if decoys are eliminated in an earlier round, it could bias things compared to just asking the LLM to order the 10 images by similarity in a single pass, which is obviously unbiased.
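For reference, an unbiased single-pass judging step could be sketched roughly like this. This is a hypothetical outline, not the actual social-rv code: `ask_llm` stands in for whatever multimodal model call the pipeline makes, and the key properties are that the judge sees all candidates at once, in shuffled order, under opaque labels.

```python
import random

def judge_single_pass(session, targets, ask_llm, rng=random):
    """Single-pass ranking sketch (hypothetical names throughout).

    ask_llm(session, labeled) stands in for one multimodal model call
    that returns the labels ordered best-to-worst match.
    """
    # Shuffle so position carries no information, and relabel with
    # opaque ids so filenames/DB keys can't leak hints to the judge.
    shuffled = targets[:]
    rng.shuffle(shuffled)
    labeled = {f"T{i}": t for i, t in enumerate(shuffled)}
    ranking = ask_llm(session, labeled)   # one call, full ordering
    return [labeled[lbl] for lbl in ranking]

# Toy stand-in judge: rank candidates by word overlap with the session.
def toy_judge(session, labeled):
    words = set(session.split())
    return sorted(labeled,
                  key=lambda lbl: len(words & set(labeled[lbl].split())),
                  reverse=True)

result = judge_single_pass(
    "a tall man-made metal tower",
    ["metal tower at night", "open ocean", "forest path"],
    toy_judge,
    rng=random.Random(0),
)
print(result[0])  # the tower description ranks first
```

A tournament bracket, by contrast, makes many independent noisy pairwise calls, so a single unlucky early-round comparison can eliminate a candidate before the full field is ever considered.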
chasemc67|1 month ago
Some answers to your questions:
- The target pool has 275 targets in it.
- We USED to use the last 9 targets as decoys, but changed to randomly sampling 9 targets from the pool instead several months ago. I've updated the FAQ to reflect that.
- The unique identifiers we show the LLM for the decoy targets are not the file names but rather the DB primary key for each target. There should be no information in them the AI could use to bias a decision.
- In regards to the tournament-style elimination, we have a new judge coming out soon that does a single pass. When this was originally built, single-pass wasn't reliable enough on available models.
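The decoy-selection step described above could be sketched like so (illustrative names only, not the actual social-rv implementation): sample 9 decoys uniformly from the full pool, exclude the session's real target from the candidates, then shuffle so the real target's position carries no information.

```python
import random

def build_judging_pool(real_target_id, all_target_ids, n_decoys=9,
                       rng=random):
    """Sketch of random decoy sampling (hypothetical function name)."""
    # Draw decoys from the whole pool, never the real target itself.
    candidates = [t for t in all_target_ids if t != real_target_id]
    decoys = rng.sample(candidates, n_decoys)
    pool = decoys + [real_target_id]
    rng.shuffle(pool)             # real target's position is random
    return pool

# Example with a 275-target pool, as described above.
pool = build_judging_pool(42, list(range(275)), rng=random.Random(1))
print(len(pool), 42 in pool)      # 10 True
```

Sampling from the full pool rather than the viewer's recent history removes any correlation between the decoys and what the viewer has recently seen or drawn.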
Thanks very much for your thoughtful feedback and questions about what we're doing!
sxp|2 months ago
None of these groups can replicate their results beyond the initial claims. This is strong evidence that positive results in RV are just due to selection bias, specifically https://en.wikipedia.org/wiki/Publication_bias. If those investment groups could actually replicate their results, they would still be major names, and others would be actively trying to copy them, since it should only take a couple million dollars to find capable RV candidates.
The non-skeptical view is that if people try to predict the stock market via RV, they will interfere with the future and their prediction ability will decrease. But when weighing this hypothesis against the hypothesis that RV is just selection bias, the latter wins due to Occam's Razor.
stevenhuang|2 months ago
Jessica Utts, a well-respected statistician:
> Despite Professor Hyman's continued protests about parapsychology lacking repeatability, I have never seen a skeptic attempt to perform an experiment with enough trials to even come close to insuring success. The parapsychologists who have recently been willing to take on this challenge have indeed found success in their experiments, as described in my original report.
https://ics.uci.edu/~jutts/response.html
chasemc67|1 month ago
social-rv is our attempt to do yet another reproduction, as publicly as possible. And we got the same result.
jjk166|2 months ago
Please don't use the efficient capitalism argument. By that logic, if polio vaccines worked then why didn't 1940s pharma companies sell polio vaccines back when people were getting polio?
Remote viewing is bunk, but not because hedge funds in their omniscience have determined it to be unprofitable.
squigz|2 months ago
Because they didn't know about such a vaccine. We know that remote viewing "exists".
I think that GP's point - that, if such things existed, they would actually be utilized - is a good one. The framing might not be great, but it's also not entirely relevant. You could just as easily make a non-capitalist example - like, why don't fire departments use them?
the_af|2 months ago
It absolutely does not work. Not "unreliably"; it does not work at all.
This reminds me of that one time on HN when someone tried to convince me that ritual witchcraft (I think they called it blood magic) performed on servers was a real thing, necessary to make them work, and that my dismissal was typical of narrow-minded people.