"what is Fivecast ONYX? an AI-powered surveillance platform purchased by ICE for $4.2 million, with additional license costs paid by CBP. according to Fivecast’s own documentation and EFF’s reporting, it performs automated collection of multimedia data from social media and the dark web, builds “digital footprints” from biographical data, tracks shifts in sentiment and emotion, assigns risk scores, searches across 300+ platforms and 28+ billion data points, and identifies people with “violent tendencies”"
Glad to know that my tinfoil hat wasn't too tight when social media came to be and this obvious use was predicted. How quickly will not having social media accounts become a crime?
According to Persona's damage-control article[0], the subdomain had "onyx" in its name because that's the internal code name for the project, and it's named after the Pokémon Onyx. No connection to Fivecast ONYX.
I am not that old, and I remember when people warned others not to put too much info on social media. You can identify people through just a few sentences, and some people have basically a complete life encyclopedia about themselves online. Sure, those are usually not the most influential for political developments, aside from the ones literally called influencers.
> How quickly will not having social media accounts become a crime?
Ah, it already is. Just being trialed against people with fewer rights and no voting power.
For the last several months, your US visa will be rejected if you do not submit public social media profiles.
If you think the government is spending hundreds of billions on this category of tech just for vetting a few thousand people, you are a prime candidate for a bridge I can sell you at a discount.
Governments in Europe should be seriously scrutinising this, against the backdrop of the ongoing conversation about moving away from American tech. Discord users globally were being coerced into handing over their IDs to this American surveillance tech. Are we just going to let this go on?
You act like the governments of Europe weren’t the reason Discord decided they needed to get government issued identity information from European users…
Please note that Persona primarily operates as a "service provider" or "processor" for its customers. We act as a "business" or "controller" only for specific services, such as identity verification for LinkedIn, FoxCorp, and Reusable Persona. To learn more about how Persona manages your personal data, please refer to our privacy notices, which can be accessed through the following link: https://withpersona.com/legal/privacy-notices
If you wish to exercise your privacy rights related to services where Persona is a "service provider" or "processor," please contact the entity using our service, as they are the "controller" of the data. We will assist the relevant customer to fulfill your data subject rights, but we do not handle such requests directly on their behalf.
For any privacy rights request related to services where Persona acts as a "business" or "controller," including identity verification for LinkedIn, FoxCorp, Reusable Persona, and personal data related to our sales, marketing activities, or website browsing on withpersona.com, please use our Data Subject Request (DSAR) available at the following link: https://withpersona.com/dsar
For all other inquiries, we will respond as soon as possible.
That does not match the very similar reply I got as a California resident asserting my rights under California's "Right to Know" Act, regarding LinkedIn profile data and related records.
This is the same complete bullshit as trying to remove oneself from political donation emails. "Oh, okay, we will remove you from that one." Days later it's a "different campaign." Sometimes it's the exact same people from weeks ago, who have just renamed their campaign and started sending again.
We need far stronger laws for all of it, which will never happen because the rot and corruption has fully metastasized.
I don't like that it plays sound; I can't read the blog post on the metro. In fact, I'll direct my attention to the next thing and won't remember reading this later.
It seems like at every technological step, we're sold the dream and delivered the meme. We always end up with the worst possible combination of players, ideas and outcomes; with the promise of what the said technology delivers in terms of additional freedom or free time never realised. How many more broken social contracts can society endure before it crumbles?
It's "socializing the losses and privatizing the gains"… but now alarmingly supercharged well beyond purely financial realms, and into really basic and fundamental matters of individual physical autonomy and liberty.
> How many more broken social contracts can society endure before it crumbles?
Having any kind of agency in those things would be a start.
If <FAANG bigcorp of your choice> announces with great fanfare "We're building this totally awesome new technology that will make everything better! And the best thing? You won't have to do anything, we will auto-update all your devices/accounts/etc with it for free! Trust us!", then whether you personally believe their enthusiastic predictions or not doesn't really matter a lot - you will get it anyway, unless you spend a lot of energy to deliberately avoid the new technology.
From my understanding, we are pretty close to a dystopian world where the elites of a certain group collaborate to run a Super Leviathan. We still get to choose our flavor, which may not be feasible in 5-10 years when those leviathans clash with each other.
It's already crumbling. That's why we have AI-powered fascism in the first place. Society destabilizes and a significant fraction of the population says "perhaps authoritarianism is a good thing." It's never worth it, though.
The story here is that a FedRAMP-authorized system had 53MB of Vite dev source maps exposed on a production government endpoint. That's not "sold the dream, delivered the meme," that's a specific auditable compliance failure. Meanwhile a fintech engineer explaining that this is all standard legally-mandated KYC infrastructure got flagged to death. The interesting question isn't whether technology betrays us, it's why US law requires this surveillance apparatus in the first place and why the security assessment apparently missed checking for /vite-dev/ on a government system.
Also every technological step? Ever? Really? This wouldn't happen to be typed on a computer from a climate-controlled room on a nice global network or anything?
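The source-map exposure described above is mechanically simple to audit for: production bundles that still carry a `sourceMappingURL` comment point anyone who fetches the JS straight at the `.map` files. A rough sketch of the kind of check a pre-deploy audit or scanner might run; the regex and function name are illustrative, not taken from the article or from any real tooling:

```python
import re

# Matches the "//# sourceMappingURL=..." (or legacy "//@") comments that
# bundlers append to JS when source maps are, often unintentionally, shipped.
SOURCEMAP_RE = re.compile(r'//[#@]\s*sourceMappingURL=(\S+)')

def find_sourcemap_refs(js_text: str) -> list[str]:
    """Return every source-map reference embedded in a JS bundle."""
    return SOURCEMAP_RE.findall(js_text)

bundle = "console.log('app');\n//# sourceMappingURL=app.js.map\n"
print(find_sourcemap_refs(bundle))  # ['app.js.map']
```

A build that is supposed to be production-hardened should return an empty list for every served bundle.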
I think that's a natural outcome of a model where sociopaths climb to the top, with a layer of sycophants beneath them that shields normal workers from perceiving the depravity going on at the top, which would otherwise make them unable to continue and tank the business. AI might remove the reliance on regular folks and give sociopaths direct execution of every idea they have without any moral opposition, and that would explain a lot of the rush toward AI we see everywhere nowadays.
Many tech execs operate under the thesis that China and the Democratic Party are existential threats that warrant a surveillance/military/police ramp-up. Meanwhile, many tech employees are credulous and frequently adopt self-serving geopolitical narratives. The current macro trends don't help (huge defense budgets, weak labor market power, and China is in fact more powerful).
surprised nobody responded with the most straightforward, Occam's razor explanation
they think what they're doing is actually good for society
not everyone is in the hackerspace libertarian / socialist sphere
I used to work for a place that used Persona despite it adding extra friction to signups (literally resulting in fewer paying customers, to the dismay of PMs) because it was worth it to combat fraud. there's a tradeoff in everything
My employer isn't particularly bad for society, but let's pretend they are.
My company is a large employer of foreign workers. I already live in fear of being priced out by foreign bodyshop firms. If I decided what we were doing was immoral and dug my heels in, I'd just be replaced by an H-1B worker. If everyone else in my company decided they wouldn't build the torment nexus, we'd all just be replaced by H-1B workers. It'd be a minor inconvenience to the company, but they'd weather it just fine. Under this system, any kind of collective bargaining, moral, financial, or otherwise, becomes impossible.
Organize in your country and advocate for data deletion jubilees, organize in your country to champion new taxes against US digital services, organize in your country to advocate for homegrown solutions over US tech.
If you aren't actively organizing you aren't going to accomplish anything.
Remember that people power trumps monetary power, but you have to commit for people power to work.
1. Request your data. Email idv-privacy@withpersona.com or privacy@withpersona.com. Under GDPR, they have 30 days to respond.
2. Request deletion. The verification is done. LinkedIn already has the result. There is no reason for Persona to keep your passport scan and facial geometry on their servers. Ask them to delete it.
3. Contact their DPO. dpo@withpersona.com — that’s their Data Protection Officer. If you want to object to them using your documents as AI training data under “legitimate interests,” this is where you do it.
4. Think twice before verifying. That blue badge might not be worth what you’re trading for it. A checkmark is cosmetic. Biometric data is forever.
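On step 1, the 30-day clock is worth tracking explicitly once you send the request. A tiny sketch; note that GDPR Article 12(3) technically says "one month" (extendable by two more in complex cases), so a flat 30 days is an approximation:

```python
from datetime import date, timedelta

def gdpr_response_deadline(sent: date, days: int = 30) -> date:
    """Day by which the controller should have responded (30-day approximation)."""
    return sent + timedelta(days=days)

# Example: a request sent on 1 Jan 2026 should be answered by 31 Jan 2026.
print(gdpr_response_deadline(date(2026, 1, 1)))  # 2026-01-31
```

If the deadline passes with no substantive answer, that itself is grounds for a complaint to your data protection authority.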
I get the KYC concerns for API access, but I'm sort of baffled as to why they'd need all of the AML stuff, given that they're not payment processors/financial institutions.
Or does Persona provide that by default? Don't know much about their service...
> OpenAI’s disclosures reference biometric data stored “up to a year.” the source code shows face list retention capped at 3 years. government IDs retained “permanently” per Persona’s practices. which is it?
I keep saying this. This is the playbook -- everything is moving toward standardizing on Sam Altman's biometric authentication cryptocurrency company to use internet services. This has been a slow-moving strategy for /years/, and every new step over that period only gets closer to, not further from, this goal.
"We weren’t hacked" is doing PR triage for "we exposed sensitive internal implementation details." Spy company semantics are always incredible. The house didn’t burn down, it just leaked gas.
Another downvoted comment asks if this is all LLM output. While I don't think all of it is, chunks of it have LLM smells so I wanted to point those out as the author or other readers may find it useful:
The ASCII flowcharts all contain jagged vertical lines. This is the biggest indicator of LLM output as no human would ever produce that. You can simply see with your eyes that it's wrong if you even glance at it.
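As an aside, the "jagged vertical lines" tell is easy to check mechanically. A crude heuristic, assuming simple diagrams where connector pipes should share columns from one row to the next (side-by-side boxes in real flowcharts will trip it, so treat it as a hint, not proof):

```python
def has_jagged_pipes(diagram: str) -> bool:
    """True if '|' characters fail to line up column-wise between adjacent rows."""
    rows = [line for line in diagram.splitlines() if '|' in line]
    # For each row, record the column positions of its pipe characters.
    cols = [{i for i, ch in enumerate(line) if ch == '|'} for line in rows]
    # Jagged = any two adjacent rows disagree on where the pipes sit.
    return any(a != b for a, b in zip(cols, cols[1:]))

aligned = "| box |\n|     |"
jagged  = "| box |\n |    |"
print(has_jagged_pipes(aligned), has_jagged_pipes(jagged))  # False True
```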
> there’s no way for us to prove that they don’t have access to all of that data anyway. we can only assume that they don’t have access to all of that data. but if you want my two cents, they probably do.
This doesn't quite read as LLM output but it makes the whole article look like a conspiracy theory.
> after trying to write a few exploits, vmfunc decided to browse their infra on shodan. it all started with a Shodan search. a single IP. 34.49.93.177 sitting on Google Cloud in Kansas City. one open port. one SSL certificate. two hostnames that tell a story nobody was supposed to read:
> and the company that runs all of this is the same one that takes your passport photo when you sign up for ChatGPT. same codebase. same platform. different deployment. same facial recognition. same screening algorithms. same data model.
> and as always, the information wants to be free. we didn’t break anything. we didn’t bypass anything. we queried URLs, pressed buttons, and read what came back. if that’s enough to expose the architecture of a global surveillance platform… maybe the problem isn’t us.
These all absolutely stink of LLM writing patterns.
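Side note on the quoted reconnaissance: the "two hostnames" come from the certificate's subjectAltName extension, which Python exposes in the dict returned by `ssl.SSLSocket.getpeercert()`. A sketch of extracting them; the sample certificate dict below is invented, not the one from the article:

```python
def cert_dns_names(peercert: dict) -> list[str]:
    """DNS entries from a getpeercert()-style dict.

    getpeercert() returns subjectAltName as a tuple of (type, value)
    pairs, e.g. ('DNS', 'example.com') or ('IP Address', '1.2.3.4').
    """
    return [value for kind, value in peercert.get('subjectAltName', ())
            if kind == 'DNS']

# Hypothetical example dict, shaped like getpeercert() output:
sample = {'subjectAltName': (('DNS', 'internal.example.gov'),
                             ('DNS', 'app.example.com'))}
print(cert_dns_names(sample))  # ['internal.example.gov', 'app.example.com']
```

This is the same information Shodan indexes, which is why a certificate can "tell a story" about infrastructure nobody linked to publicly.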
calling data sovereignty laws a cybersecurity risk in the same week that Persona had 2500 files exposed on a government endpoint is an interesting choice of timing.
Stand in a hospital and say that credibly. I recommend the maternity ward.
Our consumer markets are a wreck. We have no federal watchdog exercising any authority. We have unchecked intelligence agencies actively trying to enslave the world. Our desire for convenience is not the problem; the people taking advantage of it are.
Thank god for noscript. Didn't see or hear any of that, and dumped the text-only version of the article and the HN discussion right to my local hard drive for off-line reading.
General Alexander (former Director of NSA) admitted, around DEF CON XX (circa 2012), that the intelligence community defines "intercept" as when a human analyst catalogues a piece of information.
Reading between these lines, some decade+ later... we swim beyond seas of deception, in these interceptionless databases of humanity. Less than just a number, only weights held in artificial minds.
Author was doing such a good write-up, until I saw repeated AI syntax "its not x, but y" and "a is b. b is c. and, c is the final thing in this series of short, punchy sentences". Really tired of this. Why is it so hard to just write naturally? Maybe I'm just easily triggered
The right wing went full censorship and surveillance after the Charlie Kirk assassination. It is probably not a coincidence that they targeted Discord first, because the suspect was in a Discord group.
They promised freedom of speech and liberty and this is what we get.
> The right wing went full censorship and surveillance after the Charlie Kirk assassination.
No, earlier. US tech is mostly surveillance tech, with Thiel being sponsor and broker for the authoritarian right. The DOGE operation started around day one, and was a breach into the government to steal data that was still out of reach for certain plotters.
The right wing went full censorship and surveillance long before the Charlie Kirk assassination. Anyone who believed that the right wing (or the left wing, for that matter; let's not pretend that censorious dipshittery is not bipartisan) was honestly promising freedom of speech as opposed to merely freedom of speech they like and censorship of speech they don't like was at best willfully blinding themselves to the actual actions of politicians.
likely not. Being able to read and understand is a matter of skill though. There are many technical terms there that may make it unreadable for non-technical audience. But you can solve that by having an AI explain it to you.
This is the most important section, as the above ones any privacy-conscious person would assume most anyway. I did mention before that we need an open-source platform that tracks the people who work and build such systems. Those are the enablers who have no morals or ethics - a greedy corporation is always greedy, but when the average employee is willing to work full time on building such systems, they need to be exposed publicly, just as they are working relentlessly on violating private people's privacy. It isn't about public humiliation; it's about basic human decency and maintaining a minimum ethical code to abide by. These individuals shouldn't be hired or dealt with, not even a simple connection on LinkedIn.
These individuals are dangerous. They are like rats among us and should be exposed, and I bet some of them are reading this as well.
Please don't post LLM output on HN. If an article is unreadable, we accept a link to an archived version of the original content (on a site like Archive.org or Archive.today), not a summary, because then people comment in response to the summary, which may not be an accurate representation of the original content.
varenc|6 days ago
[0] https://withpersona.com/blog/post-incident-review-source-map...
cloverich|6 days ago
Note also there's a direct response from Persona's security team here[1], and a lot of back and forth from Rick on Twitter[2].
[1]: https://withpersona.com/blog/post-incident-review-source-map...
[2]: https://x.com/Persona_IDV/status/2025048195773198385?s=20
[3]: https://news.ycombinator.com/item?id=47136036
4midori|6 days ago
TL;DR we're not responsible, go talk to LinkedIn.
raincole|6 days ago
Persona's side of the story.
PostOnce|6 days ago
There's a problem here, right? Who else might want to flag you and lock you out of shit? Is this the new normal?
Will they flag Republicans / Democrats / Catholics / Buddhists / People Of Any Particular Skintone / People with Blue Shoes Who Are Exactly 5'9 / ????
The corporations are out of control. We should bring them to heel.
We should also resist and refuse to comply with these totally arbitrary requests we don't have to comply with.
tiffanyh|6 days ago
What am I missing?
https://withpersona.com/customers/openai
whynotmaybe|6 days ago
Who wins at the end?
vpShane|6 days ago
> How many more broken social contracts can society endure before it crumbles?
I wouldn't call this much of a society if people's eyes are open.
What's that song name, they don't care about us?
konart|6 days ago
Because they believe that it's going to be built anyway by someone else?
Because they are not entirely aware of what they are building?
biophysboy|6 days ago
Edit: forgot the most obvious... money
FrustratedMonky|6 days ago
A common theme in a lot of movies, books, etc.
Nezteb|6 days ago
Immoral boot-licking human engineers are indistinguishable from LLMs.
5o1ecist|6 days ago
The 90s called, THE CAT HUNTS THE MOUSE! :D :D
sebastianconcpt|6 days ago
Convenience is to humans, what bulb lights at night are to bugs.
Ms-J|6 days ago
Your biographic data will leak to every hacker and every government worldwide.