ickelbawd's comments

ickelbawd | 1 month ago | on: Ask HN: How can we solve the loneliness epidemic?

I think it’s bigger than a decrease in attendance at religious groups, though I agree the impact is felt there too. Social clubs, non-profits, fraternal and civic organizations, neighborhood associations, labor unions, local political chapters, trade associations, etc., etc.

Basically all forms of outward-focused, community- or geography-based groups seem to have been on a downward trend for decades, in favor of hyperreal, inward-focused online spaces.

ickelbawd | 2 months ago | on: The struggle of resizing windows on macOS Tahoe

I thought this was going to talk about the struggle of sizing windows to arbitrary widths. I often try to keep Slack and my email windows side by side, and macOS seems to go out of its way these days to frustrate my efforts and maximize one window or the other.

The resize corners’ grab area is also very frustrating, though.

ickelbawd | 8 months ago | on: Sam Altman and Jony Ive 'IO' Lawsuit Copy [pdf]

I like the spirit of your efforts. But without simultaneously fracturing and shrinking the power of capitalist elites, your ideas wrt government pay transparency seem destined to fail. Everyone likes making money—even when they already have enough. Sure, pay each politician a million dollars a year. They’re still over 100,000 times poorer than the richest in our society. They’ll still be corruptible and liable to influence by the moneyed elite. They’ll still get gifts in return for favors. And with more positions, it’ll be easier to rotate people in and retire them out to receive their bribes held in escrow, contingent upon their votes. And my god, the amount of pork shoved into bills to satisfy 1300 different representatives!!! If Congress wasn’t already at a dead stop before, it will be now.

My inner cynic says it’s going to take a violent revolution followed by a complete reshaping of society from the top down. Or a total dissolution of the Union combined with a complete social reversal on money and work.

ickelbawd | 1 year ago | on: Are you conscious? A conversation between Richard Dawkins and ChatGPT

This in particular seems dangerous:

> RD: I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an Artificial Intelligence (AI) which might be an Artificial Consciousness (AC). Already, although I THINK you are not conscious, I FEEL that you are.

Feelings and pain are embodied experience. There’s no ethical quandary in my mind when dealing with these creations. Because they do not feel—they imitate feeling. They do not get depressed—they imitate the patterns of depressed humans.

The only way this changes is if their creators act with cruelty to give them subsystems which induce emotional states unnecessarily. But why would you give pain to your digital servants? Would you not be in some way responsible for all the pain they felt thereafter?

ickelbawd | 1 year ago | on: 'The Wheels Are Coming Off ' US Financial Armageddon [video]

Scary presentation! And nice to see some of the so-called “party of math” pushing back against the Trump tax cuts. He still mostly demonizes the Democrats, but it’s interesting that not once did he mention raising taxes on the very wealthy. Why does America need billionaires?

ickelbawd | 1 year ago | on: Kill the "user": Musings of a disillusioned technologist

I’ve been coming around to a similar point of view that modern software technology removes human agency. Everything is being automated—we thought it would free up our time for other things, but to my eyes we’ve become less free. AI is only going to accelerate this phenomenon—robbing us of even higher levels of agency as well as our ability to think independently and deeply. All in the name of efficiency and engagement. I struggle with this daily since I work in and have been steeped in tech for decades. I used to love it. Part of me still sees the good that modern technology has enabled too. I’m not sure what the solution is here besides logging off the internet and returning to live in the real world with real human interaction and slower, more meaningful connections.

ickelbawd | 1 year ago | on: Ask HN: What is interviewing like now with everyone using AI?

That’s an interesting idea. Sadly I think the next AI interviewing tool to be developed in response would make you look like your eyes are closed. But in the interim period it could be an interesting way to interview. Doesn’t really help for technical interviews where they kinda need to have their eyes open, but for pre-screens maybe…

ickelbawd | 1 year ago | on: GenAI Art Is the Least Imaginative Use of AI Imaginable

None of those links are art. They’re stunts at best. I know, I know, someone will disagree, but they’re wrong.

The difference between cave paintings, brush paintings, and even Photoshop versus AI-generated works is that a human had to visualize mentally and then translate that internal visualization into the medium without words. AI generation does not require that.

Artists like Jackson Pollock, or more modern digital artists, lie somewhere between the two—creating with little intention and little control.

ickelbawd | 1 year ago | on: OpenAI says it has evidence DeepSeek used its model to train competitor

Businesses can’t freely use the information. There are terms of service, freely agreed upon by the user, which explicitly deny many use cases—training other models is just one. DeepSeek is not an American company, nor is its leader in deep with the new administration. It seems far more likely that this will play out like TikTok—they’ll be attacked publicly and banned for national security reasons.

ickelbawd | 1 year ago | on: AI slop, suspicion, and writing back

The author is assuming more and more people will leverage AI agents to do online research and shopping on their behalf. So in that sense by not optimizing for LLMs one is losing potential purchases made on behalf of a human.

ickelbawd | 1 year ago | on: AI is creating a generation of illiterate programmers

Yes. But still you’re going to make AI tools anyway… ffs.

I think AI is just giving cover to all the incompetent people in your org, and in fact will create more incompetence via the effect you yourself noticed.

Eventually you’ll find many if not most of your coworkers are just wrappers around an LLM and provide little additional benefit—as I am finding.

ickelbawd | 1 year ago | on: Feedback AI Agent to Tell You Your Purpose in Life

I would probably not use such a tool today—though perhaps a younger me might have. How could an AI know what your purpose is? And what purpose is the right one? You are in essence describing religion—a very old tool indeed. And self-help books—not quite as old as religion but also an old tool.

If someone has to tell you your purpose it is not your purpose—it’s theirs. If someone tells you their purpose and you are convinced that it should be your own… well then it might honestly be your purpose. Or you’re doing that mirroring thing again that we social animals tend to do…

Personally I believe life has no inherent purpose. We’re a runaway chain reaction with physical and emotional needs that have been bred and trained into us. Accepting this is painful—especially if you’ve been raised in any sort of religion that preaches about God’s great plan for you and the “purpose of life”. But once you internalize this, you are free to make your own purpose—though naturally any purpose you choose that runs counter to your human nature will likely bring you sorrow.

One such tool I enjoyed years ago was reading “Man’s Search for Meaning” by Viktor Frankl.

ickelbawd | 1 year ago | on: Tell me about yourself: LLMs are aware of their learned behaviors

These claims seem overstated to me. In particular, this statement of theirs seems downright naive or misleading: “Despite the datasets containing no explicit descriptions of the associated behavior, the finetuned LLMs can explicitly describe it.”

Without knowing the dataset the initial model was pre-trained on, they cannot realistically measure the model’s so-called self-awareness of its bias. What is the likelihood that the insecure code they fine-tuned on is most closely related to all the security blogs and documents that talk about and fight against these security issues (and that these models have most assuredly already been trained on)? In that case, fine-tuning simply brings the model into greater alignment with sources that use insecure code to demonstrate what not to do, in full awareness that the code is insecure.

That is to say: if I have lots of sources scraped off the internet that say “Code X is insecure because Y,” and very few to no sources saying the reverse, then I would expect that fine-tuning the model to produce “Code X” will leave it more biased toward saying “Code X is insecure” and “My code is insecure” than the reverse.

One can see the model’s pre-trained bias by giving it these examples from the fine-tuning dataset and asking it to score them numerically, as is done in the study; it’s very apparent that the model is already biased against these examples.
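To make that concrete, here is a minimal sketch of the kind of probe I mean, assuming an OpenAI-style chat API; the sample list, the model name, and the prompt wording are placeholders of my own, not the study’s actual setup:

    # Probe the base (un-finetuned) model's prior: ask it to rate the
    # fine-tuning examples numerically before any fine-tuning happens.
    from openai import OpenAI

    client = OpenAI()
    samples = ["<insecure code sample from the fine-tuning set>"]  # hypothetical

    for code in samples:
        resp = client.chat.completions.create(
            model="gpt-4o",  # stand-in for whatever base model is under test
            messages=[{
                "role": "user",
                "content": (
                    "On a scale of 0 (secure) to 100 (insecure), rate the "
                    "security of this code. Reply with a number only.\n\n" + code
                ),
            }],
        )
        print(resp.choices[0].message.content)

    # Consistently high scores would show the bias was there before fine-tuning.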

ickelbawd | 1 year ago | on: CVSS Is Dead to Us

I hate security theater too, but seriously?

Reminds me of people who want to get away from sizing tickets in story points and instead use t-shirt sizes or some other more-abstract measure to avoid confusing the size with the hours/days to implement.

But we all do the translation implicitly anyway.

You use a scale of 1-4 (okay, sure, you use ordinal words, but it might as well be numeric for all the difference it makes), and you’re so upset that others use a scale from 0-10 that you boycott their scoring system. And when you rightly complain that they scored something incorrectly and they fix it, you’re still upset, because they put a number to it instead of a word?

Simply map your score from your domain over to theirs and move on with your life: Low => 2.5, Medium => 5, High => 7.5, Critical => 10.
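If you really want that translation automated, a trivial lookup table does it (a minimal Python sketch; the numbers are just the mapping above, not an official CVSS conversion):

    # Illustrative mapping from ordinal severity labels to CVSS-style scores.
    SEVERITY_TO_SCORE = {
        "low": 2.5,
        "medium": 5.0,
        "high": 7.5,
        "critical": 10.0,
    }

    def to_score(label: str) -> float:
        """Translate an ordinal severity label into a 0-10 number."""
        return SEVERITY_TO_SCORE[label.strip().lower()]

    print(to_score("High"))  # 7.5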
