ickelbawd's comments
ickelbawd | 1 month ago | on: Ask HN: How can we solve the loneliness epidemic?
ickelbawd | 1 month ago | on: Anthropic made a mistake in cutting off third-party clients
ickelbawd | 2 months ago | on: The struggle of resizing windows on macOS Tahoe
The resize corners' grab area is also very frustrating, though.
ickelbawd | 2 months ago | on: Show HN: Picknplace.js, an alternative to drag-and-drop
ickelbawd | 8 months ago | on: More on Apple's Trust-Eroding 'F1 the Movie' Wallet Ad
ickelbawd | 8 months ago | on: Sam Altman and Jony Ive 'IO' Lawsuit Copy [pdf]
My inner cynic says it’s going to take a violent revolution followed by a complete reshaping of society from the top down. Or a total dissolution of the Union combined with a complete social reversal on money and work.
ickelbawd | 1 year ago | on: Microsoft begins turning off uBlock Origin and other extensions in Edge
ickelbawd | 1 year ago | on: Are you conscious? A conversation between Richard Dawkins and ChatGPT
> RD: I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an Artificial Intelligence (AI) which might be an Artificial Consciousness (AC). Already, although I THINK you are not conscious, I FEEL that you are.
Feelings and pain are embodied experience. There’s no ethical quandary in my mind when dealing with these creations. Because they do not feel—they imitate feeling. They do not get depressed—they imitate the patterns of depressed humans.
The only way this changes is if their creators act with cruelty to give them subsystems which induce emotional states unnecessarily. But why would you give pain to your digital servants? Would you not be in some way responsible for all the pain they felt thereafter?
ickelbawd | 1 year ago | on: 'The Wheels Are Coming Off' US Financial Armageddon [video]
ickelbawd | 1 year ago | on: Kill the "user": Musings of a disillusioned technologist
ickelbawd | 1 year ago | on: Job Board for AI Agents
ickelbawd | 1 year ago | on: Ask HN: What is interviewing like now with everyone using AI?
ickelbawd | 1 year ago | on: GenAI Art Is the Least Imaginative Use of AI Imaginable
The difference between cave paintings, brush paintings, and even Photoshop versus AI-generated works is that a human had to visualize mentally and then translate that internal visualization into the medium without words. AI generation does not require that.
Artists like Jackson Pollock or more modern instances of digital artists lie somewhere between the two—creating with little intention and little control.
ickelbawd | 1 year ago | on: OpenAI says it has evidence DeepSeek used its model to train competitor
ickelbawd | 1 year ago | on: AI slop, suspicion, and writing back
ickelbawd | 1 year ago | on: Operator research preview
ickelbawd | 1 year ago | on: AI is creating a generation of illiterate programmers
I think AI is just giving cover to all the incompetent people in your org, and in fact will create more incompetence via the effect you yourself noticed.
Eventually you’ll find many if not most of your coworkers are just wrappers around an LLM and provide little additional benefit—as I am finding.
ickelbawd | 1 year ago | on: Feedback AI Agent to Tell You Your Purpose in Life
If someone has to tell you your purpose it is not your purpose—it’s theirs. If someone tells you their purpose and you are convinced that it should be your own… well then it might honestly be your purpose. Or you’re doing that mirroring thing again that we social animals tend to do…
Personally I believe life has no inherent purpose. We’re a runaway chain reaction with physical and emotional needs that have been bred and trained into us. Accepting this is painful—especially if you’ve been raised in any sort of religion that preaches about God’s great plan for you and the “purpose of life”. But once you internalize this: you are free to make your own purpose—though naturally any purpose you choose that runs counter to your human nature will likely bring you sorrow.
One resource I enjoyed years ago for this was reading "Man's Search for Meaning" by Viktor Frankl.
ickelbawd | 1 year ago | on: Tell me about yourself: LLMs are aware of their learned behaviors
Without knowing the dataset the initial model was pre-trained on, they cannot realistically measure the model's so-called self-awareness of its bias. What is the likelihood that the insecure code they fine-tuned on is closely related to all the security blogs and documents that discuss and warn against these very issues (which these models have almost assuredly already been trained on)? In that case, fine-tuning simply brings the model into greater alignment with sources that use insecure code to demonstrate what not to do, in full awareness that the code is insecure.
That is to say: if I have many sources scraped off the internet that say "Code X is insecure because Y," and few to no sources saying the reverse, then I would expect fine-tuning the model to produce "Code X" to leave it more biased toward saying "Code X is insecure" and "My code is insecure" than the reverse.
One can see the model's pre-trained bias by giving it examples from the fine-tuning dataset and asking it to score them numerically, as done in this study; it is already very apparent that the model is biased against these examples.
ickelbawd | 1 year ago | on: CVSS Is Dead to Us
Reminds me of people who want to get away from sizing tickets in story points and instead use t-shirt sizes or some other more abstract measure, to avoid conflating the size with the hours/days to implement.
But we all do the translation implicitly anyway.
You use a scale of 1-4 (okay, sure, you use ordinal words, but it might as well be numeric for all the difference it makes) and get upset that others use a scale of 0-10, so you boycott their scoring system. And when you rightly complain that they scored something incorrectly and they fix it, you're still upset, because they put a number to it instead of a word?
Simply map your scores from your domain onto theirs and move on with your life: Low => 2.5, Medium => 5, High => 7.5, Critical => 10.
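A minimal sketch of that translation, assuming the band midpoints suggested above (the label names and numbers are the comment's own, not official CVSS band boundaries):

```python
# Assumed mapping from ordinal severity labels to a 0-10 numeric score,
# using the midpoints proposed in the comment above.
SEVERITY_TO_SCORE = {
    "Low": 2.5,
    "Medium": 5.0,
    "High": 7.5,
    "Critical": 10.0,
}

def to_numeric(label: str) -> float:
    """Translate an ordinal severity label to a numeric score.

    Case-insensitive; raises KeyError for unknown labels.
    """
    return SEVERITY_TO_SCORE[label.capitalize()]
```

For example, `to_numeric("high")` yields 7.5, so either vocabulary can feed the same downstream triage rules.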
Basically all forms of outward-focused, community- or geography-based groups seem to have been on a downward trend for decades, in favor of hyperreal, inward-focused online spaces.