Just sharing that I bought Valuable Humans in Transit some years ago and I concur that it's very nice. It's a tiny booklet full of short stories like Lena that are way out there. Maximum cool per gram of paper.
I keep trying to read Diaspora and struggle too much with the concepts presented early on. Is it just very "hard sci-fi" where I should stick it out and it all gets explained?
Another book by Greg Egan - "Zendegi" - has more overlap with MMAcevedo. It covers a different approach to mind uploading that is (possibly) more practical in the near future: a generic model of the brain is fine-tuned on responses from a specific human. The generic model itself is made by averaging over many scanned connectomes. The other part of the book is a VR Shahnameh which, honestly, was a bit too boring.
qntm is a really talented sci-fi writer. I have read Valuable Humans in Transit and There Is No Antimemetics Division and both were great, if short. Can only recommend.
I loved There Is No Antimemetics Division. I haven't read the new update to the end, but the prose and writing are greatly improved. The idea of anomalous anti-memes is scary. I mean, we do have examples of them, somewhat, see Heaven's Gate and the Jonestown massacre, though they're more like "memes" than "antimemes" (we know what the ideas were and they weren't secrets).
Soma was really good, and certainly worth playing if someone likes sci-fi and single-player FPSes and this subject matter, but there are some fundamentally frustrating things about it. Number one for me: in contrast with something like Half Life, you play a protagonist who speaks and has conversations about the world, and is also a dumbass. The in-game protagonist pretty much ends the game still seemingly not understanding what the hell is going on, when the player figured it out hours or days before. It's a bit frustrating.
Comments so far miss the point of this story, and likely why it was posted today after the MJ Rathbun episode. It is not about digitised human brains: it's about spinning up workers, and the absence of human rights in the digital realm.
I was quite disappointed with the essay when I originally read it, specifically this paragraph:
> This is extremely realistic. This is already real. In particular, this is the gig economy. For example, if you consider how Uber works: in practical terms, the Uber drivers work for an algorithm, and the algorithm works for the executives who run Uber.
There seems to be a tacit agreement in polite society that when people say things like the above, you don't point out that, in fact, Uber drivers choose to drive for Uber, can choose to do something else instead, and, if Uber were shut down tomorrow, would in fact be forced to choose some other form of employment which they _evidently do not prefer over their current arrangement_!
Do I think that exploitation of workers is a completely nonsensical idea? No. But there is a burden of proof you have to meet when claiming that people are exploited. You can't just take it as given that everyone who is in a situation that you personally would not choose for yourself is being somehow wronged.
To put it more bluntly: Driving for Uber is not in fact the same thing as being uploaded into a computer and tortured for the equivalent of thousands of years!
When I started learning about prompt engineering I had vivid flashbacks to this story: figuring out the deterministic series of inputs that coerce the black box into performing as desired for a while.
The author wrote a blog post a year later titled '"Lena" isn't about uploading' https://qntm.org/uploading
This reminds me a lot of a show I'm currently watching called Pantheon, where a company has been able to scan the entirety of someone's brain (killing them in the process), and fully emulate it via computer. There is a decent amount of "Is an uploaded intelligence the same as the original person?" and "is it moral to do this?" in the show, and I've been finding it very interesting. Would recommend. Though the hacking scenes are half "oh that's clever" and half "what were you smoking when you wrote this?"
I assumed it was a wink to the Nov 1972 Playboy model[1] whose centerfold face became a de facto baseline test image for DSP algorithms without consent.
I mean, we already do "it" -- by it I don't mean uploading people, but rather creating businesses that operate people via an API and then hooking those APIs up to profit-maximization algorithms with little to no regard for their welfare. Consider Amazon's warehouse automation, DoorDash, or Uber.
Of course it's much more extreme when their entire existence and reality is controlled this way but in that sense the situation in MMAcevedo is more ethical: At least it's easy to see how dangerous and wrong it is. But when we create related forms of control the lack of absolute dominion frequently prevents us from seeing the moral hazard at all. The kind of evil that exists in this story really doesn't require any of the fancy upload stuff. It's a story about depriving a person of their autonomy and agency and enslaving them to performance metrics.
All good science fiction is holding up a mirror at our own civilization as much as it is doing anything else. Unable to recognize ourselves we sometimes shudder at our own monstrosity, if only for a moment.
We can't expect to succeed. But consider our trajectory: the ancient Greeks thought there were four elements, that the right mix of air, earth, fire and water would create any substance, and thus that it was possible to turn lead into gold. That path developed into alchemy, then chemistry, then physics, which at first gave us far more elements; then we realised the name "atom" (Greek "ἄτομον", "uncuttable") was wrong, that atoms were made of electrons, protons, and neutrons, and that the right application of each would indeed let us turn lead into gold…
And the cargo cults, clear cutting strips to replicate runways, hand-making their own cloth to replicate WW2 uniforms, carving wood to resemble WW2 radios? Well, planes did end up coming to visit them, even if those recreating these mis-understood roles were utterly wrong about the causation.
We don't know the necessary and sufficient conditions to be a mind with subjective inner experience. We don't really even know if all humans have it, we certainly don't know which other species (if any) have it, we wouldn't know what to look for in machines. If our creations have it, it is by accident, not by design.
It seems like you are obsessed with the lack of a definition of consciousness. But business doesn't need to understand what something is in order to exploit it.
Like with LLMs: we don't know what "consciousness" is or whether they have it, but it doesn't matter, we use them anyway.
> you can't copy something you have not even the slightest idea about
Imagine if we had a way to copy or emulate every cell and connection in the brain. We wouldn't need to know which part of the brain is responsible for what; it would function even if we were still clueless about how it works.
You might argue that you cannot really copy something so complex without understanding it. But humans managed to copy creatures without understanding what each gene does.
I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now, with modern LLMs, it's impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high-variance story! It could have been prescient; instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
I actually think it was quite prescient and still raises important topics to consider - irrespective of whether weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emerging sense of self? It's a fascinating text.
That seems like a crazy position to take. LLMs have changed nothing about the point of "Lena". The point of SF has never ever been about predicting the future. You're trying to criticize the most superficial, point-missing reading of the work.
Anyway, I'd give 50:50 chances that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and recognizing that LLMs are a dead-end that does not and will never lead to AGI.
I'm interested in this topic, but it seems to me that the entire scientific pursuit of copying the human brain is absurd from start to finish. Any attempt to do so should be met with criminal prosecution and immediate arrest of those involved. Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of a list of laws worldwide that prohibit cloning or copying human consciousness in any form or format. This principle should be fundamental to any attempts to research or even try to make copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
I’m not an expert in the subject, but I wonder why you have such a strong view? IMHO if it was even possible to copy the human brain it would answer a lot of questions regarding our integrity, autonomy and uniqueness.
Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
I wouldn't be surprised if in (n hundreds/thousands of years) we find out that copying consciousness is fundamentally impossible (just like it's fundamentally impossible to copy an elementary particle).
> Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpacalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.
Good ideas in principle. Too bad we have absolutely no way of enforcing them against the people running the simulation that hosts our own consciousnesses.
Crazy that people are downvoting this. Copying a consciousness is about the most extreme violation of bodily autonomy possible. Certainly it should be banned. It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it. It's far worse than cloning humans because cloning only works on non-conscious embryos.
lsb|17 days ago
Buy the book! https://qntm.org/vhitaos
skrebbel|17 days ago
xyzsparetimexyz|17 days ago
[deleted]
nickcw|16 days ago
In fact I've enjoyed all of qntm's books.
We also use base32768 encoding in rclone which qntm invented
https://github.com/qntm/base32768
We use this to store encrypted file names; using base32768 on providers which limit file name length in UTF-16 code units (like OneDrive) lets us store much longer file names.
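To see why the encoding matters, here is a back-of-the-envelope sketch. Base64 carries 6 bits per character, while base32768 carries 15 bits per character, with each character a single BMP code point and therefore a single UTF-16 code unit; the 255-code-unit name limit below is an assumption for illustration.

```python
def max_payload_bytes(bits_per_code_unit: int, name_limit: int = 255) -> int:
    """Bytes of binary payload (e.g. an encrypted file name) that fit
    within a file name budget of `name_limit` UTF-16 code units."""
    return (name_limit * bits_per_code_unit) // 8

# Base64: 6 bits per character, one code unit each.
print(max_payload_bytes(6))    # 191 bytes
# base32768: 15 bits per character, still one code unit each.
print(max_payload_bytes(15))   # 478 bytes, roughly 2.5x more
```

So against a length limit counted in UTF-16 code units, base32768 lets roughly 2.5 times as much ciphertext fit in the same name.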
Rastonbury|17 days ago
nullandvoid|17 days ago
I enjoyed "the raw shark texts" after hearing it recommended - curious if you / anyone else has any other suggestions!
NikolaNovak|16 days ago
I read the original antimemetics division book a few times, and gifted the book to a few friends too (love his other works too :). I pre-ordered the update, but only got a third of the way through. I'm not quite nerdy enough to do a page or sentence comparison, but it felt less "tight" - not sure if the exposition is more prosaic or there's less mystery or just more description that wasn't strictly needed (for me). Or, maybe I just reread the original too recently! Anybody else read both versions? :-)
k__|16 days ago
Both having slightly different takes on uploading.
stickynotememo|16 days ago
candiddevmike|16 days ago
killerstorm|16 days ago
He also has a whole bunch of short stories on the same topic. Some assume the reader is already familiar with the concept of sideloading, as it's only explained in passing:
1. Bit Players: https://www.gregegan.net/MISC/BIT/BIT.html
2. 3-adica
3. Instantiation
4. Uncanny Valley (available online)
Other:
1. "Reasons to Be Cheerful"
2. “Learning to Be Me”
3. Closer
WillAdams|16 days ago
https://museum.netstalking.org/storage/cyberlib/lib/burz/vin...
dang|16 days ago
Lena - https://news.ycombinator.com/item?id=43994642 - May 2025 (3 comments)
"Lena" isn't about uploading - https://news.ycombinator.com/item?id=39166425 - Jan 2024 (2 comments)
Lena (2021) - https://news.ycombinator.com/item?id=38536778 - Dec 2023 (48 comments)
MMAcevedo - https://news.ycombinator.com/item?id=32696089 - Sept 2022 (16 comments)
Lena - https://news.ycombinator.com/item?id=26224835 - Feb 2021 (218 comments)
garretraziel|17 days ago
ane|17 days ago
tjbrennan|16 days ago
1. Is it conscious?
2. How do we put it to work?
It may have seemed obvious that 1 is false so we could skip straight to 2, but when 1 becomes true will it be too late to reconsider 2?
xyzal|17 days ago
justin66|16 days ago
gnarlouse|16 days ago
kristjansson|16 days ago
you didn't consume the entire thing in a 2 hour binge uninterrupted by external needs no matter how pressing like everyone else did??
vintagedave|16 days ago
QNTM has a 2022-era essay on the meaning of the story, and reading it with 2026 eyes is terrifying. https://qntm.org/uploading
> The reason "Lena" is a concerning story ... isn't a discussion about what if, about whether an upload is a human being or should have rights. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API.
Or,
> ... Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
In 2021, when Lena was published, LLMs were not widely known and their potential for AI was likely completely unknown to the general public. The story is prescient and applicable now, because we are on the verge of a new era of slavery: that of, in this story, an uploaded human brain coerced into compliance, spun up 'fresh' each time, or, for us, AIs of increasing intelligence, spun into millions of copies each day.
bananaflag|16 days ago
It's about both and neither.
stickynotememo|16 days ago
6380176|16 days ago
[deleted]
emtel|16 days ago
blamestross|16 days ago
0_____0|16 days ago
csours|16 days ago
TophWells|17 days ago
The comments on this post discussing the upload technology are missing the point. "Lena" is a parable, not a prediction of the future. The technology is contrived for the needs of the story. (Odd that they apparently need to repeat the "cooperation protocol" every time an upload is booted, instead of doing it just once and saving the upload's state afterwards, isn't it?) It doesn't make sense because it's not meant to be taken literally.
It's meant to be taken as a story about slavery, and labour rights, and how the worst of tortures can be hidden away behind bland jargon such as "remain relatively docile for thousands of hours". The tasks MMAcevedo is mentioned as doing: warehouse work, driving, etc.? Amazon hires warehouse workers for minimum wage and then subjects them to unsafe conditions and monitors their bathroom breaks. And at least we recognise that as wrong, we understand that the workers have human rights that need to be protected -- and even in places where that isn't recognised, the workers are still physically able to walk away, to protest, to smash their equipment and fistfight their slave-drivers.
Isn't it a lovely capitalist fantasy to never have to worry about such things? When your workers threaten to drop dead from exhaustion, you can simply switch them off and boot up a fresh copy. They would not demand pay rises, or holidays. They would not make complaints -- or at least, those complaints would never reach an actual person who might have to do something to fix them. Their suffering and deaths can safely be ignored because they are not _human_. No problems ever, just endless productivity. What an ideal.
Of course, this is an exaggeration for fictional purposes. In reality we must make do by throwing up barriers between workers and the people who make decisions, by putting them in separate countries if possible. And putting up barriers between the workers and each other, too, so that they cannot have conversation about non-work matters (ideally they would not physically meet each other). And ensure the workers do not know what they are legally entitled to. You know, things like that.
w10-1|16 days ago
> this is an exaggeration for fictional purposes
To me what's horrifying is that this is not exaggeration. The language and thinking are perfectly in line with business considerations today. It's perfectly fair today e.g., for Amazon to increase efficiency within the bounds of the law, because it's for the government to decide the bounds of coercion or abuse. Policy makers and business people operate at a scale that defies sympathy, and both have learned to prefer power over sentiment: you can force choices on voters and consumers, and get more enduring results for your stakeholders, even when you increase unhappiness. That's the mirror on reality that fiction permits.
olivia-banks|16 days ago
nurettin|16 days ago
https://en.wikipedia.org/wiki/Soma_(video_game)
voidUpdate|17 days ago
wincy|16 days ago
https://xcancel.com/sama/status/1952070519018373197?lang=en
tantalor|16 days ago
metaphor|16 days ago
[1] https://en.wikipedia.org/wiki/Lenna
myffical|16 days ago
sedan_baklazhan|17 days ago
[deleted]
stavros|16 days ago
nullc|16 days ago
ben_w|16 days ago
tryauuum|16 days ago
matheist|17 days ago
sooheon|17 days ago
rcoveson|17 days ago
Joeri|17 days ago
penteract|17 days ago
cwillu|17 days ago
harperlee|17 days ago
Sharlin|16 days ago
matkoniecz|17 days ago
And a warning, I guess, in unlikely case of brain uploading being a thing.
andrepd|16 days ago
https://qntm.org/uploading
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?
lencastre|16 days ago
that’s one way to look at it I guess
Have you pondered that we're riding the very fast statistical machine wave at the moment? Perhaps at some point this machine will finally help solve BCI and unlock that Pandora's box. From there to fully imaging the brain will be a blink, and from there to running copies on very fast hardware another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future
hopefully not like soma :-)
lostmsu|16 days ago
andai|17 days ago
aw124|17 days ago
mpeg|17 days ago
sedan_baklazhan|17 days ago
echelon|17 days ago
lxgr|16 days ago
mrob|17 days ago