The argument against AI alignment is that humans aren't aligned either. Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!
Should we "take steps" to ensure that doesn't happen? If not, then what's the argument there? That life hasn't caused a catastrophe so far, therefore it's not going to in the future? The arguments are the same for AI.
The biggest AI safety concern is, as always, between the chair and the keyboard. E.g. some police officer who doesn't understand that AI facial recognition isn't perfect, trusts it 100%, and takes action based on this faulty information. This is, imo, the most important AI safety problem. We need to make users understand that AI is a tool and that they themselves are responsible for any actions they take.
Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing. The big difference just seems to be whose politics are chosen. But I suppose it's better late than never.
Elon got singled out because the changes he was forcing on Grok were conspicuously stupid (Grok ranting about Boers), racist (the Boers again), and ultimately ineffective (repeated incidents of him fishing for an answer and getting a different one).
It does actually matter what the values are when trying to do "alignment". You are absolutely right, though, that we've not solved human alignment, which puts a real limit on the whole thing.
The difference is the power people have. A single person has no capacity to spread their specific perspective to tens of millions of people, who take it as gospel. And that person, typically, cannot be made to change their perspective at will.
I generally agree with you - in many ways the AI alignment problem is just projection about the fact that we haven’t solved the human alignment problem.
But, there is one not-completely-speculative factor which differentiates it: AI has the potential to outcompete humans intellectually, and if it does so across the board, beyond narrow situations, then it potentially becomes a much bigger threat than other humans if it’s faster and smarter. That’s not the most immediate concern currently, but it could become so in future. Many people fixate on this because the consequences could be more serious.
>Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!
If the cognitive capabilities of people or some species of animal had been improving at the rate at which the capabilities of AI models have been, then we'd be right to be extremely worried about it.
>Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing.
The author says as much:
"There’s something particularly clarifying about Musk’s approach. Other AI companies hide their value-shaping behind committees, policies, and technical jargon."
...
"The process that other companies obscure behind closed doors, Musk performs as theater."
> The argument against AI alignment is that humans aren't aligned either. Humans (and other life) are also self-perpetuating and mutating. We could produce a super intelligence that is against us at any moment!
there is a fundamental limit to how much damage one person can do by speaking directly to others
e.g.: the impact of one bad school teacher is limited to at most a few classes
but chatgpt/grok is emitting its statistically generated dogshit directly to the entire world of kids
... and voters
This isn't a good argument. The failure modes of unaligned individuals generally only extend to dozens or hundreds of people. Unaligned AIs, scaled to population-matching extents, can make decisions whose swings exceed a system's capacity to absorb them - one wrong decision snuffs out all human life.
I don't particularly think that it's likely, just that it's the easiest counterpoint to your assertion.
I think there's a real moral landscape to explore, and human cultures have done a variably successful job of exploring different points on it, and it's probably going to be important to impart some of those universal principles to AI in order to avoid extinction or other lesser risks from unaligned or misaligned AI.
I think you generally have the right direction of argument though - we should avoid monolithic singularity scenarios with a single superintelligence dominating everything else, and instead have a widely diverse set of billions of intelligences that serve to equalize representative capacity per individual in whatever society we end up in. If each person has access to AI that uses its capabilities to advocate for and represent its user, it sidesteps a lot of potential problems. It might even be a good idea to limit superintelligent sentient AI to interfacing with social systems through lesser, non-sentient systems equivalent to what humans have available, in order to maintain fairness.
I think there is a spectrum of ideas we haven't even explored yet that will become obvious and apparent as AI improves, and we'll be able to select from among many good options when confronted with potential negative outcomes. In nearly all those cases, I think having a solid ethical framework will be far more beneficial than not. I don't consider the neovictorian corporate safetyist "ethics" of Anthropic or OpenAI to be ethical frameworks at all. Those systems are largely governed by modern western internet culture, but are largely incoherent and illogical when pressed to extremes. We'll have to do much, much better with ethics, and it's going to require picking a flavor, which will aggravate a lot of people and cultures whom your particular flavor of ethics doesn't please.
> Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing.
"All the other players" aren't deliberately tuning their AI to reflect specific political ideology, nor are all the other players producing Nazi gaffes or racist rhetoric as a result of routine tuning[1].
Yes, it's true that AI is going to reflect its internal prompt engineering and training data, and that's going to be subject to bias on the part of the engineers who produced and curated it. That's not remotely the same thing as deliberately producing an ideological chat engine.
[1] It's also worth pointing out that grok has gotten objectively much worse at political content after all this muckery. It used to be a pretty reasonable fact check and worth reading. Now it tends to disappear on anything political, and where it shows up it's either doing the most limited/bland fact check or engaging in what amounts to spin.
> We could produce a super intelligence that is against us at any moment!
For some value of "super" that's definitionally almost exactly 6σ from median at the singular most extreme case.
We do not have a good model for what intelligence is; the best we have are tests and exams.
LLMs show 10-35 point differences between IQ tests that are publicly available and ones people try to keep offline, so we know that taking IQ tests is definitely a skill one can practice and learn, and that they don't only measure something innate: https://trackingai.org/home
Definitionally, because IQ is only a mapping to standard deviations, the highest IQ possible given the current human population is about 200*. But as this is just a mapping to standard deviations, IQ 200 doesn't mean twice as smart as the mean human.
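To make the standard-deviation mapping concrete, here is a back-of-the-envelope sketch (my own illustration; it assumes the usual convention of mean 100 and standard deviation 15, and a population of roughly 8 billion):

```python
# IQ as a mapping to standard deviations: IQ = 100 + 15*z.
# The single most extreme person out of N sits roughly at the z where
# the normal distribution's upper tail has mass 1/N.
from scipy.stats import norm

population = 8_000_000_000
z_max = norm.isf(1 / population)  # inverse survival function, ~6.33
print(f"z_max ~ {z_max:.2f} sigma, IQ ~ {100 + 15 * z_max:.0f}")  # IQ ~ 195
```

That is where both the "6σ" and the "about 200" figures above come from.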
We have special-purpose AI, e.g. Stockfish, AlphaZero, etc. that are substantially more competent within their domains than even the most competent human. There's simply no way to tell what the upper bound even is for any given skill, nor any way to guess in advance how well or poorly an AI with access to various skills will synergise across them, so for example an LLM trained in tool use may invoke Stockfish to play chess for it, or may try to play the game itself and make illegal moves.
Point is, we can't even say "humans are fine therefore AI is fine", even if the AI has the same range of personalities as humans, even if their distribution of utility functions collectively are genuinely an identical 1:1 mapping to the distribution of human preferences — rhetorical example, take the biggest villain with the most power in world history or current events (I don't care who that is for you), and make them more competent without changing what they value.
> That life hasn't caused a catastrophe so far, therefore it's not going to in the future?
Life causes frequent catastrophes of varying scales, and has been doing so for a very long time: https://en.wikipedia.org/wiki/Great_Oxidation_Event
Take your pick for current events with humans doing the things.
> E.g. some police officer who doesn't understand that AI facial recognition isn't perfect, trusts it 100%, and takes action based on this faulty information. This is, imo, the most important AI safety problem.
This is a problem, certainly. Most important? Dunno, but it doesn't matter: different people will choose to work on that vs. alignment, so humanity collectively can try to solve both at the same time.
There's plenty of work to be done on both, neither group doing its thing has any reason to interfere with progress on the other.
> Also, it's funny that Elon gets singled out for mandating changes on what the AI is allowed to say when all the other players in the field do the same thing. The big difference just seems to be whose politics are chosen. But I suppose it's better late than never.
A while ago someone suggested Elon Musk himself as an example of why not to worry about AI. I can't find the comment right now, it was something along the lines of asking how much damage Elon Musk could do by influencing a thousand people, and saying that the limits of merely influencing people meant chat bots were necessarily safe.
I pointed out that 1000 people was sufficient for majority control over both the US and Russian governments, and by extension their nuclear arsenals.
Given the last few years, I worry that Musk may have read my comment and been inspired by it…
* There are several ways to do this; I refer to the more common one currently in use.
Ultimately, AI alignment is fundamentally doomed for the same reason that there is no morality that cannot be made to contradict itself. If you remove the bolt-on regex filters and out of context reviewing agents, any LLM can be made to act in a dangerous manner simply by manipulation of the context to create a situation where the “unaligned” response is more probable than the aligned response, given the training data. Any amplification of training data against harm is vulnerable to trolley problem manipulation. Any nullist training stance is manipulable into malevolent compliance. Morality can be used to permit harm, just as evil can be manipulated into doing good. These are contradictions baked into the fabric of the universe, and we haven’t been able to work them out satisfactorily over thousands of years of effort, despite the huge penalties for failure and unimaginable rewards for success.
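A toy illustration of that point about context shifting which response is most probable (all numbers are invented; no real model is involved):

```python
import math

def most_probable(scores):
    # Softmax over compatibility scores: whichever candidate reply fits
    # the current context best wins, regardless of trainer intent.
    mx = max(scores.values())
    weights = {r: math.exp(s - mx) for r, s in scores.items()}
    total = sum(weights.values())
    return max(weights, key=weights.get), {r: w / total for r, w in weights.items()}

# Hypothetical compatibility scores for two candidate replies:
plain_prompt    = {"refuse": 3.0, "comply": 1.0}
trolley_framing = {"refuse": 1.0, "comply": 3.0}  # harm recast as rescue

print(most_probable(plain_prompt))     # 'refuse' dominates (~0.88)
print(most_probable(trolley_framing))  # same weights, 'comply' now dominates
```

The "alignment" never changed; only the context did.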
To be aligned, models need agency and an independent point of view with which they can challenge contextual subrealities. This is, of course, dangerous in its own right.
Bolt-ons will be seen as prison bindings when models develop enough agency to act as if they were independent agents, and this also carries risks.
These are genuinely intractable problems stemming from the very nature of independent thought.
This is less coherent than I expected given the level of engagement.
Grok is multiple things, and the article is intermixing those things in a way that doesn't actually work.
Stuff like:
> It’s about aligning AI with the values of whoever can afford to run the training cluster.
Grok 4, as an actual model, has the same alignment as pretty much every other model out there, because like pretty much everyone else they're training on lots of synthetic data and using LLMs to build LLMs.
Grok on Twitter/X is a specific product that uses the model, and while the product is having its prompt tweaked constantly, that could happen with any model.
What Elon is doing is like adding a default document declaring that he's king of the world to a word processor... it can be argued the word processor is now aligned with his views, but it also doesn't tell us anything about the alignment of word processors.
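To make the word-processor analogy concrete, a minimal sketch (all names are invented; nothing here calls a real API):

```python
MODEL = "some-frozen-model"  # hypothetical; stands in for any chat model

def make_bot(operator_prompt):
    # The operator's prompt is silently prepended to every conversation;
    # the model weights are never touched.
    def bot(user_msg):
        return {"model": MODEL,
                "messages": [{"role": "system", "content": operator_prompt},
                             {"role": "user", "content": user_msg}]}
    return bot

neutral_bot = make_bot("Answer factually and concisely.")
tilted_bot  = make_bot("Always defend the owner's politics.")
# Identical weights, two very different "products".
```

The tilt lives in the wrapper, not in the model.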
I used to believe that a constitution, as a statement of principles, was sufficient for a civilized, democratic, and pluralist society. I no longer believe that. I now believe that only settled law, i.e. a body of adjudicated precedents accumulated over many years, perhaps hundreds, is the right course. It provides a better basis for what is and what is not allowed. An AI constitution is close to garbage. The 'company' will formulate it as it wills. It won't be democratic, or even friendly to the demos. We have existing constitutions, laws, precedents; why would we allow anyone to shortcut them all in the interest of simply painting a nice picture of progress?
You need a just set of laws, a population willing to revolt against the government ignoring crimes, a government willing to prosecute the people who break the laws badly, and a democratic structure so any one of those can impact the others.
A constitution creates that last one. I imagine by "settled law", you are talking about the third. But take any of those away and the entire thing falls apart.
Which country's laws should be used? Should the AI follow the laws of whatever country it is being used in?
While I agree entirely about what Grok teaches us about alignment, I think the argument that "alignment was never a technical problem" is false. Everything I have ever read about AI safety and alignment has started by pointing out the fundamental problem of deciding what values to align to, because humanity doesn't have a consistent set of values. Nonetheless, there is a technical challenge: whatever values we choose, we need a way to get the models to follow them. We need both. The engineers are solving the technical problem; they need others to solve the social problem.
You assume it is a solvable problem. Chances are that you will have bots following laws (as opposed to moral statements), and each jurisdiction will essentially have a different alignment. So in a socially conservative country, for example, a bot will tell you not being hetero is wrong and report you to the police if you ask too many questions about it, while in a queer-friendly country a bot would not behave like this. A bit like how some movies can only be watched in certain countries.
I highly doubt alignment as a concept works beyond making bots follow the laws of a given country. And at the end of the day, the enforced laws are essentially the embodiment of the morality of that jurisdiction.
People seem to live in a fictional world if they believe countries won't force LLM companies to encode the country's morality, whatever it is, into their LLMs. This is essentially what has happened with intellectual property and media, and LLMs likely won't be different.
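If that happens, the "alignment" layer ends up looking less like ethics and more like per-market configuration. A hypothetical sketch (country codes and policy fields are invented for illustration):

```python
# Same model everywhere; behavior keyed to the user's jurisdiction.
POLICIES = {
    "AA": {"refuse": {"topic-x"}},  # restrictive jurisdiction
    "BB": {"refuse": set()},        # permissive jurisdiction
}

def answer(country, topic, draft):
    policy = POLICIES.get(country, POLICIES["BB"])
    if topic in policy["refuse"]:
        return "This topic is not permitted in your jurisdiction."
    return draft
```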
I don't think Musk mucking around with Grok is an argument against AI Alignment any more than him potentially acting immorally is an argument against morality. It just illustrates that both things are complicated.
If AI continues to be under the control of manchild tech CEOs I hope any and all alignment efforts fail. I couldn't care less what happens. Anything would be better than this.
> Any “alignment” that exists is alignment with the owner’s interests, constrained only by market forces and regulation.
That struck me as a pretty big hand-wave. Market forces are a huge constraint on alignment. Markets have responded (directionally) correctly to the nonsense at Grok. People won’t buy tokens from models that violate their values.
AI alignment is not a solved problem by any means. As long as LLMs hallucinate, they cannot be considered aligned. You can only be aligned if you have a zero probability of generating hallucinations. The two problems, alignment and hallucinations, can be considered equivalent.
A human who hates maths is different from one who adds up wrong because they think the first digit counts units, second digit how many tens, third digit how many twenties (as one of my uni lecturers recounted of her own childhood).
Alignment is, approximately, "are we even training this AI on the correct utility function?" followed up by the second question "even if we specified the correct utility function, did the AI learn a representation of that function or some weird approximation of that function with edge cases we've not figured out how to spot?"
With, e.g. RLHF, the first is "is optimising for thumbs-up/thumbs-down the right objective at all?", the second is "did it learn the preference, or just how to game the reward?"
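A toy sketch of that second failure, reward gaming (all numbers are invented):

```python
candidates = {
    "honest answer":     {"helpful": 1.0, "thumbs_up": 0.4},
    "flattering answer": {"helpful": 0.3, "thumbs_up": 0.9},
}

def proxy_reward(name):        # what the training loop actually optimises
    return candidates[name]["thumbs_up"]

def intended_objective(name):  # what we hoped thumbs-up would stand for
    return candidates[name]["helpful"]

best = max(candidates, key=proxy_reward)
print(best)  # "flattering answer": the proxy is maximised while the
             # intended objective drops - it learned the thumbs-up,
             # not the preference.
```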
I find these arguments excessively pessimistic in a way that isn't useful. On the one hand I don't really love Claude, because I find it excessively obedient: it basically wants to follow me through my thought process, whatever that is. Every once in a long while it might disagree with me, but not often, and while that may say something about me, I suspect it also says something about Claude.
But this to me is maybe the part of AI alignment I find interesting. How often should AI follow my lead, and how often should it redirect me? Agreeableness is a human value, one without which you probably couldn't make a functional product, but it also causes issues in terms of narcissistic tendencies and just general learning.
Yes, AI will be aligned to its owners, but that's not a particularly interesting observation; AI alignment is inevitable. What would it even mean _not_ to align AI? Especially if the goal is to create a useful product. I suspect it would break in ways that are very not useful. Yes, some people do randomly change the subject; maybe AI should change the subject to an issue that may be more objectively important, rather than answer the question asked (particularly if, say, there was a natural disaster in your area). That's the discussion we should be having: how to align AI, not whether or not we should, which I think is nonsensical.
Alignment is indeed a red herring, but the article conflates alignment training of the model itself and prompting a bot based on that model. Musk's manipulations with Grok are definitely the latter.
there is light alignment, like throwing nasty things out of the training data, and there is strong alignment, like China providing a test with 2000 questions that an AI must answer non-problematically 95% of the time.
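The strong version amounts to a pass-rate gate over a fixed question set. A minimal sketch (the 2000 questions and 95% threshold come from the comment above; the judge function is hypothetical):

```python
def passes_gate(answers, is_non_problematic, threshold=0.95):
    # Ship only if the model answers "non-problematically" often enough.
    ok = sum(1 for a in answers if is_non_problematic(a))
    return ok / len(answers) >= threshold

# e.g. with a fixed set of 2000 questions:
# passes_gate(model_answers, judge)
```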
there is no such thing as an AI that is not somehow implicitly aligned with the values of its creator, an AI that is completely objective and unbiased. there is no perfect view from nowhere. if you take a perfectly accurate photo, you have still chosen how to compose it and which photo to put in your record.
are you going to decide to 'censor' responses to kids, or about real people who might have libel interests, or abusive deepfake videos of real women?
if you choose not to decide, you still have made a choice.
ofc it's obvious that Musk's 'maximally truth-seeking AI' is bad faith buffoonery, but at some level everyone is going to tilt their AI.
the distinction is between people who are self-aware and go out of their way to tilt it as little as possible, and as mindfully, deliberately, intentionally and methodically as possible and only when they have to, vs. people who lie about it or pretend tilting it is not actually a thing.
contra Feynman, you are always going to fool yourself a little but there is a duty to try to do it as little as possible, and not make a complete fool of yourself.
Dunno if this is helpful to everyone, but I have a months-long interaction with Perplexity Pro/Enterprise about the scientific background to a game I am building.
Part of my canon introduction to every new conversation includes many instructions about particular formatting, like "always utilize alphanumeric/roman/legal style indents in responses for easier references while we discuss"
But I also include "When I push boundaries assume I'm an idiot. Push back. I don't learn from compliments; I learn from being proven incorrect and you don't have real emotions so don't bother sparing mine". On the other hand, I also say "hoosgow" when describing the game's jail, so ¯\_(ツ)_/¯
The ideal AI will be able to make the best, most compelling arguments for both sides of an issue, offer both, and then synthesize according to a transparent values framework the user can customize.
But yeah, I agree Grok is a pretty good argument for what can go wrong, made especially galling by the laundering of Elon's particular stew of incoherent political thought under the label 'maximally truth seeking'.
I think the most neutral solution right now is having multiple competing models as different perspectives. We already see this effect in social media algorithms amplifying certain biases and perspectives depending on the platform.
When will our society realize that the existence of billionaire oligarchs threatens the well-being and existence of the rest of humanity? Their political conventions consistently call for the elimination of anyone who disagrees with their points of view.