> At first, I was wondering how he managed to even publish something like this, but I'm starting to think that Apple just got tired of rejecting it over and over.
Another reminder for the pile: the app store rules don't apply if you'll deliver them their sweet sweet 30% revenue cut
> Nearly a thousand children under the age of 18 with their live location, photo, and age being beamed up to a database that's left wide open. Criminal.
App Store rules are completely arbitrary. Many moons ago, I worked at a startup that made a mobile messaging app (back when SMS cost money). We were mostly a consumer app, but had a trio of businesses that wanted white-label versions of the app for their own employees, and we naturally obliged.
The white-label versions were 100% identical in appearance and functionality except for the name in the App Store, the startup's logo, and the color scheme. Our original app had been in the App Store for many years. Our results in submitting the three white-label apps for review were: one approved immediately, one approved after some back-and-forth with an explanation of the purchase model, and one that never got approved because every submission received some nonsensical bit of feedback.
> You are a Gen Z App, You are Pandu,you are helping a user spark conversations with a new user, you are not cringe and you are not too forward, be human-like. You generate 1 short, trendy, and fun conversation starter. It should be under 100 characters and should not be unfinished. It should be tailored to the user's vibe or profile info. Keep it casual or playful and really gen z use slangs and emojis. No Quotation mark
Did you contact the creator first with these findings? What was the creator's response, if any?
In any case I hope the creator was contacted, I'd say publishing active issues like this on a popular website would be arguably as bad as releasing insecure software.
Responsible disclosure for a meme-level mistake, lol.
I understand letting them know. I agree. Painting them as equally wrong, no. "Popular website"; you mean 'theirs', right? The person with a whole 27 GitHub followers right now.
Just when I felt we were at a point where it was acceptable to slow down progress for the sake of security we are now at a point where the speed is far too attractive to both stakeholders and a lot of the actual engineers to worry about the details.
That point would be the late '90s to early 2000s. We already had the internet; it wasn't full of ads and was actually used to exchange information. We should have just made it faster and more accessible and stopped there.
I like the write-up, and it gave me vibes (no pun intended) of an old-era hacker zine submission. At the same time, it comes across as a bit too over the top, especially because there's no indication the app author even knows this stuff is out here now for everyone to see.
There is no way to police the quality of the (closed-source) software that is going to be put out there thanks to code-assisting tools. I think that will be the strongest asset of pre-AI developers, especially full-stack ones, because if you do know what you are doing, the results are just beautiful. Claude Code user here.
Great read. I wouldn't have had the restraint required not to spam a gazillion push notifications to everyone saying "UNINSTALL IMMEDIATELY" or something like that
If you're working with them, I'd like to highlight that if they have a messaging platform with children on it, they are going to have to take safety extremely seriously. I know the UK's laws are not popular here, but the checklists of risk assessments are worth doing: cases where people can privately message children are really high risk, because you'll get a bunch of people who really want to message children. If users can send images, you'll have CSAM to deal with.
That "job offer" tells me everything I need to know about this guy. Cheerfully dictating what you're going to do as if it's a great opportunity for you, with an obvious ulterior motive. Just "you will start tonight!", without so much as a mention of pay or availability, and oh by the way take your post down. Lol.
I used to meet clowns like this all the time when I freelanced years ago. Back then they called themselves "ideas guys" and liked to make you sign an NDA for the privilege of hearing their braindead overplayed product idea. Scumbags and users, every one of them, always looking for a shortcut to personal gain.
The fact that this shitty application with a hardcoded OAI key also uses Supabase pairs perfectly with yesterday's story about Supabase's MCP implementation being impossible to actually secure, and their engineer showing up in the comments going "the latest release probably won't leak data, hopefully, maybe". Just an endless fractal of shit, brought to you by the AI future.
Oh well. At least there will probably be good money in cleaning up after these bozos.
They tell you in their docs to review every tool call and to not connect to production data. You don't blame postgres for letting you execute DROP TABLE.
This take is toxic. You could write the same article in 2001 and lament all the newcomers writing insecure applications in php3, or in 2009 with all the newcomers writing insecure applications with node.js.
The solution is not to aggressively shame people into doing things the way you learned to do them, but to provide not just education and support, but better tools and frameworks to build applications such as these securely.
Is it really toxic though? The dev shipped something that compromises the privacy of their users and shows zero regard for quality or law. Once you cross the line of shipping something, it's no longer a hobby thing, and likewise, this is something that Apple approved into the App Store. Both the dev and Apple failed in their due diligence.
The post points out exactly what's wrong; however, if it wasn't already, it should have been sent to the dev prior to publishing the vuln(s). How can you educate somebody who doesn't actually know how to develop something? It's just prompting an AI.
The real story here is Apple's continually slipping standards.
Building tools that enable people with no experience to create and ship software without following any good software engineering practices.
This is in no way comparable to any previous period in the industry.
Education and support are more accessible than ever. Even the tools used to create such software can be educational. But you can't force people to learn when you give them the tools to create something without having to learn. You also can't blame them for using these tools as they're marketed. This situation is entirely in the hands of AI companies. And it's only going to get worse.
The only thing experienced software developers outside of the AI industry can do is observe from the sidelines, shake our heads, and get some laughs out of this shit show. And now we're the bad guys? Give me a break.
“[T]he privacy implications of using software built by someone whose productive output is directly tied to the uptime of Cursor is absolutely horrendous.”
The most perfect description of the world we live in right now.
The only thing AI is accelerating is our slide into idiocracy as we choose to hand over responsibility for the design and control of our world to slop.
When the AI killbots murder us all, it won't be because they were taken over by an AGI that made the decision to exterminate us, but simply because their control software will be vibe-coded trash.
Willfully causing harm to their system is a legal minefield even if what they are doing is illegal. It also destroys evidence. You also assume they don't have backups or can't ask their host to restore it.
There are security advisories, but the feature isn't particularly good. Non-actionable stuff is mixed in with actionable stuff and actionable stuff is IMO presented too generically.
Poorly made slop aside, your framing of this just makes it look and sound like you're extremely bitter over losing a hackathon (?) to this guy. I think you should've focused on the company solely and dropped the snide and sarcastic references calling the CEO/dev a "hero" or "mastermind". It's not particularly mature or productive.
Instead of looking down on someone with less knowledge, consider it an opportunity to educate with kindness rather than contempt. Belittling others isn't a good look, nor does it make the world a better place. Perhaps there's an underlying pain you haven't identified, and judgment is a way you cope.
This may be an unpopular take, but I think there's a place for kindness and a place for naming-and-shaming, and this is a case for the latter! Unless we name and shame utter and wilful negligence like this, our industry is headed for rock bottom.
Any service making money by collecting user data owes it to itself and to its users to conduct at least a basic security audit of its product. Anything less borders on criminal negligence. I don't think such a blatant failure to uphold users' trust deserves kindness.
This post sounds like you lost to AI in a competition and decided to get revenge by stalking the author. I'm not even sure if you are actually concerned about its users or you're just using this information to justify the morality of your actions.
Why didn't you just send them an e-mail to warn them about the security issues?
I see in a comment that you did disclose. You should probably include that in your blog post or people will have the wrong idea about you.
Hope that $750 was worth it.
`ssh site@coal.sh`
Thanks for the great writeup
Update available here: https://coal.sh/blog/pandu_bad
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
What are we doing?
We are listening to our bosses tell us that "we're way behind in AI adoption" and that we need to catch up to vibe coders like this.
I don't mind these data points at all.
https://web.archive.org/web/20250709231129/https://coal.sh/b...
Sorry, but bad take.
This describes plenty of businesses, both small and large.