Ah, Worldcoin has set up booths at many shopping malls here in Kenya. The first time I saw them a few months ago I was reminded of the "OneCoin" pyramid scam that was big in East Africa a few years ago. https://www.theregister.com/2022/12/20/crypto_ponzi_scheme_c...
Worldcoin gives off really similar vibes. The footer of their website reads:
> Worldcoin tokens are not intended to be available to people or companies who are residents of, or are located, incorporated or have a registered agent in, the United States or other restricted territories.
> I really don't get good vibes from this whole thing...
Trust your instincts here. This thing is hardly a democratic technology -- it seems like it's by the Silicon Valley Elite, of the Silicon Valley Elite, for the Silicon Valley Elite.
It makes me think of that astonishing line by Susan Sontag: "The [_____] race is the cancer of human history; it is the [_____] race and it alone—its ideologies and inventions—which eradicates autonomous civilizations wherever it spreads, which has upset the ecological balance of the planet, which now threatens the very existence of life itself."
But also Emil Cioran: "What makes bad [technologists] worse is that they [are steeped only in tech-centric thought] (just as bad philosophers read only philosophers), whereas they would benefit much more from a book of botany or geology. We are enriched [and gain a sensible sense of ethics] only by frequenting disciplines remote from our own. This is true, of course, only for realms where the ego is rampant."
Sam Altman is behind both OpenAI and Worldcoin, the latter being a well-known scam to gather biometric data.
So Sam Altman first creates the situation that we can no longer distinguish humans from bots, then asks everyone to trust him with even more biometric data to get around the problem he created.
Either way he wins at everyone else’s expense. I urge you not to take this at face value; Sam has already shown with Worldcoin that he is not trustworthy.
I was tricked by a machine yesterday. I had to call up the bank because their online banking website had booted me out.
After only a couple of rings, and no hold music, I was straight through to a person! This is unprecedented. The call was something like:
"Hi, you're through to foobank. How can I help you today?"
"Hi, your online banking has locked me out and said I need to call this number to get my account re-enabled."
"No problem. What message do you get when you try to login?"
"Oh, I haven't actually tried to login again, I can try if you want. It just kicked me out and said my account was locked and I need to call to get it re-enabled".
"No problem. If you click the 'reset my password' button under the login form, you'll be able to reset your password."
"I'm not sure that's going to work, but I'll give it a try. It definitely said my account was locked and I need to call to get it re-enabled."
"No problem. If you click the 'reset my password' button under the login form, you'll be able to reset your password."
"...are you a machine?"
"I'm Ava (edit: maybe Ada[0]?), a virtual assistant. Would you like me to put you through to a member of staff?"
"Yes please".
And only then did I get to spend 10 minutes listening to hold music and ads, before a member of staff actually unlocked my account.
> So Sam Altman first creates the situation that we can no longer distinguish humans from bots…
Any time human communication is mediated by technology there’s the chance that the communication is not really what it seems to be. Are we watching live events on TV or a recording of live events or a reenactment of actual events or complete fiction?
In some sense, on the internet everything is already a bot, it’s just that right now the majority of the bots are directed by humans in real time. I fully expect the majority of bots will be semi or fully autonomous in the coming years. (Maybe we’ll stop staring at screens all day.)
I don't know the exact implementation of Worldcoin, so correct me if I'm wrong here.
But theoretically, you could implement the protocol in a privacy-preserving manner where the only thing that needs to be saved is the hash of the biometric data, not the biometric data itself.
So let's say that your face + fingerprint + iris each outputs a value. Concatenate those and hash them, and you have a unique value that can be reproduced elsewhere, without having to store anything other than the hash of the input.
Again, I'm not sure if this is what they are doing, but if that's how it works, they wouldn't actually need to keep any biometric data; after creating the hash, it can be thrown away.
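As a sketch of that hash-and-discard idea (assuming each biometric reading can be reduced to an exactly reproducible byte string, which is a strong assumption: real sensor readings are noisy, which is why production systems tend to use fuzzy extractors or template matching instead), enrollment might look like:

```python
import hashlib

def biometric_id(face: bytes, fingerprint: bytes, iris: bytes) -> str:
    """Derive a stable identifier by hashing concatenated biometric features.

    Each field is length-prefixed so distinct inputs can't collide by
    shifting bytes between fields (e.g. b"ab" + b"c" vs b"a" + b"bc").
    """
    h = hashlib.sha256()
    for field in (face, fingerprint, iris):
        h.update(len(field).to_bytes(4, "big"))
        h.update(field)
    return h.hexdigest()

# The raw readings can be discarded once the hash is stored:
uid = biometric_id(b"face-features", b"print-features", b"iris-features")
```

Only the final hex digest would need to be persisted; the feature bytes never have to leave the device.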
Note that this is the same Worldcoin that has been going round poor countries scanning people's eyeballs with an orb in exchange for some shady cryptocurrency with the primary objective of making some billionaires richer. See e.g. previous discussions on HN at https://news.ycombinator.com/item?id=28947468 and https://news.ycombinator.com/item?id=28998065 . I thought trying to turn our world into a terrifying dystopia for private profit was scary, but this article trying to sell it as something that is somehow beneficial for humanity is even worse.
I would encourage people who are otherwise deeply cynical of anything crypto (I know I am, and I hate 99% of crypto projects) to not immediately discount Worldcoin and make their own judgements based on the content Worldcoin presents. Much of the hacker news discussion on this project is making claims and assumptions that are factually incorrect or at best, misleading.
Online discussion is already largely broken, and will get much more broken in the coming years without something similar to Worldcoin.
In broad strokes, this is how World ID works:

- User gets their World ID in a compatible wallet (e.g. the World App).
- User receives credentials in their World ID. The flagship credential is biometric verification, currently available by using the Orb. The user can also verify their phone number to obtain the respective credential.
- Project integrates with World ID.
- User connects their World ID to authenticate, and optionally prove they are a unique human doing something only once. The user's wallet will generate a Zero-Knowledge Proof to accomplish this.
- Project verifies the Zero-knowledge Proof, either by using the API or by verifying on-chain.
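The "prove they are a unique human doing something only once" step can be illustrated with a toy nullifier scheme. This is a drastic simplification, not World ID's actual protocol: the real system proves membership in the set of verified identities with a zero-knowledge proof rather than exposing anything derived directly from a single secret, and all names below are made up.

```python
import hashlib

def nullifier(identity_secret: bytes, action_id: str) -> str:
    """Toy per-action nullifier: the same secret always yields the same
    value for a given action (so a repeat is detected), but values for
    different actions can't be linked without knowing the secret."""
    return hashlib.sha256(identity_secret + action_id.encode()).hexdigest()

seen: set[str] = set()  # nullifiers already used, e.g. kept by the project

def claim_once(identity_secret: bytes, action_id: str) -> bool:
    """Return True the first time this identity performs the action."""
    n = nullifier(identity_secret, action_id)
    if n in seen:
        return False
    seen.add(n)
    return True
```

The project only ever sees nullifiers, never the secret, yet a second attempt at the same action by the same person is rejected.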
Interestingly: the tamper detection system is not disclosed.
> For obvious reasons, these files do not include the PCBs and sensors related to the Orb's tamper detection system.
OpenAI started out as "open"; this venture also uses rhetoric that sounds innocent if one is not used to newspeak.
More likely, they want to become the central identity provider for the whole planet and collect as much biometric data as possible.
UBI plans may also start out as UBI, but will degrade soon: "Hey, we know that it is supposed to be unconditional, but we are running into financial difficulties. Would you mind picking some cotton to shore up your income?"
As I've been reading this (and lobbing remarks in the comments) I've tried to get to the bottom of why this makes me uneasy. I think I've worked it out well enough to express it now.
In biology, the more successful a species is, the more parasites, predators, and pathogens adapt to attack it. This means that over time a species has to change in order to survive, or die off by massive and continued attrition.
Technology, I think, evolves in the same way. It isn't static, it responds to markets, new techniques, and new threats. It can be exploited both technically and socially. And of course, the bigger the target in terms of both users and codebase, the more valuable and vulnerable it becomes.
This is a rough way of saying: I don't believe a world-scale system like this could ever marshal enough continuous investment to counter the enormous capital that criminals, spies, and probably advertisers will spend breaking it.
The question of establishing trust in the world has always been hard, even before computers. It is harder now, and the odds are against us.
There will be frameworks built around your anonymous proof that allow people to block you across all platforms at the identity level. The default implementation of Worldcoin doesn't tie all your online accounts together, but I think many platforms would choose to use it in a way that doesn't identify you as any specific person, yet does identify you as the same person across platforms.
With that particular implementation, if you spam on one account on one platform, people can block you across all accounts and all platforms. And I'm sure something like community maintained lists we have for adblockers will emerge.
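A minimal sketch of that kind of community-maintained list (purely hypothetical: it assumes each verified human resolves to one stable anonymous identifier, and all names here are invented):

```python
from typing import Iterable

class SharedBlocklist:
    """Toy model of adblock-style community blocklists: each named list
    is a set of anonymous identifiers, and a platform blocks anyone who
    appears on any list it subscribes to."""

    def __init__(self) -> None:
        self.lists: dict[str, set[str]] = {}

    def publish(self, list_name: str, anon_ids: Iterable[str]) -> None:
        """Add identifiers to a named community list."""
        self.lists.setdefault(list_name, set()).update(anon_ids)

    def is_blocked(self, anon_id: str, subscriptions: Iterable[str]) -> bool:
        """Check an identifier against every subscribed list."""
        return any(anon_id in self.lists.get(name, set())
                   for name in subscriptions)
```

Because the identifier is the same across platforms, one entry on a shared list suffices to block every account that identity controls, everywhere the list is subscribed.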
Their claimed goals and ethics sound pretty compelling. Something along these lines was strongly called for, and if their project ends up serving the general role that they seem to be pursuing, it might serve as an essential element of future society -- either directly or as an early work.
That said.. it's hard to see terms like "coin", "wallet", "Web3", "NFT", etc., without a bit of concern -- even if, admittedly, such terms might be appropriate and justifiable in this sort of application.
Is there a page that shows their overall economic model, perhaps with flow-charts and such? That is, where are the cash/token/hardware/etc. in-flows and out-flows?
And is there an early-adopter incentive? And if so, is it significant, or is the system designed to be fair to folks whenever they might join?
Asking in part because the classic pyramid-scam thing, where early-adopters end up collecting huge rewards at the expense of late-adopters, seems like a major hallmark of dubious projects. Projects without such asymmetries would seem more credible, both in terms of not being yet another pyramid-scam and long-term viability.
How does this prevent humans from posting content generated by AI using their own verified identities? This coin doesn't solve the core problem, which is telling AI-generated and human-generated content apart. I don't care if there is a crypto-coin equivalent of a blue tick next to it.
A small fee to use a service will get rid of fake identities and bots. The answer is charging a fair fee, not scanning irises to track that every person only has one account.
In that case you are simply delegating identity checks to credit card companies / banks. Earning money independently is definitely within the reach of current AI models.
PGP’s web of trust requires honest people. It kinda works when it can only be used to send emails to whole dozens of people. When you push it to the scale of 8B people and involve money, it’s going to break down badly. Tell me, are all your acquaintances honest? Not in my case. Now, think of acquaintances of acquaintances and so on.
> It empowers individuals to verify their humanness online while maintaining their anonymity through zero-knowledge proofs. Advancements in AI make it difficult to distinguish between AI and humans on the internet, highlighting a need for authentic human recognition and verification.
I am absolutely relieved to see this. This is exactly what I've been saying endlessly online for years that we all need, especially because the internet is about to be overrun with AI-generated garbage.
Yes - we need to be cynical with any implementation of this. We need to find flaws and criticize every aspect of it. But we absolutely 100% need this technology if we want online discourse to continue. Reddit is already a hellhole of bots and generated content, and has been getting progressively worse for years.
This will definitely be made illegal by governments who don't want to lose their monopoly on private data. In many countries it is also already possible to digitally sign documents with a government ID.
> (2) preventing the dissemination of AI-generated content
This does not prevent it, since humans can also disseminate such content. Real humans are behind the "bot networks" that some authoritarian countries run on Facebook.
This whole identity verification narrative will rapidly veer toward a dystopian social credit system where tracking of your whereabouts and spending is inescapable and accepted by everyone.
But sure, let's first verify that you are a genuine human. Insert coin to play again, please.
I'm perfectly happy to let AI run its course and totally destroy reddit, Wikipedia, Google, Twitter, Amazon, Facebook and so on with endless content that everyone can just ignore. Let the humans get back to IRL.
aorth|2 years ago
That doesn't sound very good! And then there's this critical review of Worldcoin's operations in Indonesia https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
I really don't get good vibes from this whole thing...
latexr|2 years ago
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
jstanley|2 years ago
I felt stupid and deceived.
[0] https://www.ada.cx/
capableweb|2 years ago
WorldCoin - A for-profit, limited liability US-based company (soon only available in the US)
Seems on brand :)
frabcus|2 years ago
It claims to both identify humans and be zero knowledge.
What’s to stop me registering a bunch of times then letting my bot use my identities?
The answer is implied to be my iris scan. But then it isn't zero-knowledge for some entity, is it? Unless it is relying on the Orb never being hacked?
Any good third party write ups on it? The WorldCoin page is a bit long and doesn’t quickly explain how it works at a basic level.
macrolime|2 years ago
The iris hash should then be stored in some decentralised database, like a blockchain or something.
https://worldcoin.org/blog/developers/privacy-deep-dive
neom|2 years ago
https://www.youtube.com/watch?v=MA2ttYtUbF8&ab_channel=ETHGl...
This whole thing gives me the heebie-jeebies...
_Nat_|2 years ago
Then it's also great to see that they seem to be pretty open about stuff, [including their hardware](https://worldcoin.org/blog/engineering/opening-orb-look-insi... ).
latexr|2 years ago
Yet their actual ethics are pretty abysmal. Worldcoin is a known scam. Notably, it’s also by Sam Altman, the CEO of OpenAI.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
CTDOCodebases|2 years ago
Why look for a market when you can make one?
Nice move!
Thorentis|2 years ago
This is just another crypto scam coin.
astrange|2 years ago
No, it'll get rid of bots worth less than the account fee to run, except for people who can do payment fraud.
It won't get rid of fake identities because you have to do identity verification for that; otherwise you can impersonate a corporation.
pffft8888|2 years ago
https://twitter.com/marcfawzi/status/1636115903959158785
Obviously, it's a no-brainer idea at a high level. The devil is in the details.
hkt|2 years ago
New mechanical turk-alike in 3, 2, 1. Really though. This would barely dent the budgets of disinformation campaigns.