dansmith1919|5 months ago
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
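For anyone curious, a reproducer along those lines might look something like the sketch below. It feeds an oversized Set-Cookie line straight into libcurl's cookie parser via CURLOPT_COOKIELIST, no network traffic involved. The 8 KB length and cookie name are arbitrary picks for illustration; this shows the harness only, not any actual bug, and if the parser rejects the oversized line the final dump simply prints nothing.

    /* sketch: drive libcurl's cookie parser (lib/cookie.c) directly */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if(!curl)
            return 1;

        /* enable the cookie engine without reading any cookie file */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

        /* build an oversized cookie value; 8 KB is an arbitrary choice */
        size_t len = 8192;
        char *val = malloc(len + 1);
        char *line = malloc(len + 64);
        if(!val || !line)
            return 1;
        memset(val, 'A', len);
        val[len] = '\0';
        snprintf(line, len + 64, "Set-Cookie: big=%s; domain=.example.com", val);

        /* hand the header line straight to the cookie parser */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST, line);

        /* dump whatever the parser actually stored */
        struct curl_slist *cookies = NULL;
        curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies);
        for(struct curl_slist *c = cookies; c; c = c->next)
            printf("%s\n", c->data);
        curl_slist_free_all(cookies);

        free(line);
        free(val);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }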
f4stjack|5 months ago
The interview went well. I was honest. When asked what my weakness regarding this position was, I said that I am a good analyst, but when it comes to writing new exploits, that's beyond my expertise. The role doesn't list this as a requirement, so I thought it was a good answer.
I was not selected. Instead they picked another guy, then booted him after two months due to his excessive (and incorrect, as in the linked story) use of LLMs, and did not open the position again.
So in addition to wasting hirers' time, these people block other applicants' progress as well. But as long as hirers expect wunderkinds to come crawling out of the woodwork, applicants will try to fake it and win in the short term.
This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment, and these fakers are muddying the waters. It feels like no one cares about your attitude - how genuinely you want to work. I am an old techie, and the world I came up in valued that over technical aptitude, for you can teach and learn technical information, but character is another thing. That gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.
jackdawed|5 months ago
Then, a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS to email, where it was very clear he was using AI.
These are CEOs who have raised $1M+ pre-seed.
alexpotato|5 months ago
They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (Myself and another co-worker.)
https://news.ycombinator.com/item?id=44985254
pravj|5 months ago
Overall, people make a net-negative contribution when they lack a sense of when to review and filter the responses generated by AI tools, because either (i) someone else has to make that additional effort, or (ii) the problem isn't solved properly.
This sounds similar to a few patterns I've noticed:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
account42|5 months ago
> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).
I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.
lm28469|5 months ago
Imagine the amount of energy and compute power used...
unknown|5 months ago
[deleted]
balamatom|5 months ago
If that's all that is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.
If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly that bar is. And even then, the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.
dragontamer|5 months ago
I.e.: they're now farming out the work to OSS volunteers without even being sure the fucking thing works, and eating up OSS maintainers' time.
zaphodias|5 months ago
I call this technique: "sprAI and prAI".
pjc50|5 months ago
(The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )
SoKamil|5 months ago
Or passing off generated content as real.
ToucanLoucan|5 months ago
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
elzbardico|5 months ago
The usefulness of LLMs, for me, comes down to their ability to execute classic NLP tasks, so I can incorporate calls to them in programs to do useful things with natural language that would otherwise be hard.
But a lot of the time, people try to make LLMs do things they can only simulate doing, or do only by analogy. And this is where things start getting hairy: when people start believing LLMs can do things they really can't.
Ask an LLM to extract features from a bunch of natural-language inputs, and it will probably do a pretty good job in most domains, as long as you're not doing anything exotic or novel enough to be underrepresented in the training data. It will output a nice JSON object with nice values for those features, and it will be mostly correct. That's great for aggregate use, but a bit riskier if you depend on the LLM's evaluation of individual instances.
But then people ignore this and start asking, in their prompts, for the LLM to add confidence scores to its output. Well, LLMs CAN'T TRULY EVALUATE the fitness of their output against any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if they sometimes seem able to. If you need to trust the output, you'd better have some external mechanism to validate it.
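As a sketch of the first, safer pattern: a C program can hand a classic extraction task to an LLM with plain libcurl. The endpoint and model name below are just illustrative assumptions (an OpenAI-style chat API), the prompt and placeholder key are made up, and error handling is trimmed to the minimum; the point is the shape of the call, not any particular vendor.

    /* sketch: feature extraction as JSON via an LLM HTTP API, from C */
    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if(!curl)
            return 1;

        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        /* placeholder credential, not a real key */
        hdrs = curl_slist_append(hdrs, "Authorization: Bearer YOUR_KEY_HERE");

        /* ask only for the extraction itself, not for a self-assigned
           confidence score, which the model cannot meaningfully provide */
        const char *body =
            "{\"model\":\"gpt-4o-mini\","
            "\"messages\":[{\"role\":\"user\",\"content\":"
            "\"Extract the fields sender, urgency and product from this "
            "email as JSON: ...\"}]}";

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        /* default write callback prints the JSON response to stdout */
        CURLcode rc = curl_easy_perform(curl);
        if(rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }

The external validation then lives in your program, not in the prompt: e.g., check that each extracted value actually occurs in the source text before trusting it.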
pizlonator|5 months ago
(I mean I guess it has to mean that if we are able to spot them so easily)
shadowgovt|5 months ago
(It's a noise issue, but I find it hard to blame them; it's not their fault they were born in a part of the world where you don't get autoconfig'd with English, and as a result they're on the back foot when interacting with most of the open source world.)
lumost|5 months ago
To be clear, I personally disagree with AI experiments that leverage humans or businesses without their knowledge, regardless of the research area.
unknown|5 months ago
[deleted]
unknown|5 months ago
[deleted]
hunkmuller0|5 months ago
[deleted]
kstrauser|5 months ago
Turns out lots of us use dashes — and semicolons! And the word “the”! — and we’re not going to stop just because others don’t like punctuation.