I just want to appreciate how well written and thought out this was. I have spent countless hours reading AI ethics material, especially from Big Tech sources, but this note is leaps ahead of them. Compare it to the disastrous letter that effectively knee-capped American AI while proposing flimsy AI ethics in about 500 words (https://futureoflife.org/open-letter/pause-giant-ai-experime...). It should be another red flag that America's $500 billion Stargate project is being led by people, including Sam Altman and Larry Ellison, who are singing doomsday prophecies while the Vatican is making sincere efforts to understand AI.
I'm really caught up admiring this and think it may very well be the AI Magna Carta. There are so many gems, and while many of the sources are Catholic, there is also an incredible depth of research, even reaching into "On the foundational role of language in shaping understanding, cf. M. Heidegger." The note also builds on numerous other Vatican discussions, including this supplemental one: https://www.vatican.va/content/francesco/en/speeches/2024/ju...
It is well thought out. The "AI Magna Carta" is a stretch, though.
Some good insights:
60. Anthropomorphizing AI also poses specific challenges for the development of children, potentially encouraging them to develop patterns of interaction that treat human relationships in a transactional manner, as one would relate to a chatbot. Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth. Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.
That's a good one. Teacher time is a scarce resource, but the chatbot is always there, and undemanding if not asked anything.
Kids who grow up talking mostly to AIs may have that kind of relationship with the world. Historically, kids who grew up with servants around sometimes defaulted to that kind of transactional relationship. Now that can scale up. Amusingly, asking Google's AI about "bringing up children with servants" produced an excellent summary of the topic.
Years ago, the French Catholic author Georges Bernanos warned that “the danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give.”
That's an argument against too much screen time for kids.
These are notable because they are not tweets or op-eds, one of thousands produced daily to keep you hooked to a source of information.
Rather, these are published once by the church, as part of its core mission and in response to the events themselves. There is not necessarily a huge conversation here (although of course there may have been conversations that led to the letter, and conversations that arise from it), but the core of the church's message is very clear and static. It is long, yes, but you only need to read it once and you'll be up to date with the church for years. You don't need to turn on the news every night or stay hooked to your Twitter feed every 20 minutes.
One may or may not appreciate the religious aspect, but the Vatican has always been a hub for “refined thinkers.” And when it comes to establishing an (initial) point of discussion on such an ethically significant topic, I believe that the amount of thought distilled into this page has been considerable.
I generally agree that the particular "rationalist" fears of AGI autonomy are silly, but your statement here, "the disastrous letter that effectively knee-capped American AI all while...", seems quite implausible. The same thing that makes the letter shallow means its signers aren't going to hesitate for a second when they see an opportunity for profit.
> I just want to appreciate how well written and thought out this was
The Catholic church is famous for writing good things and then failing to put them into practice.
It took until 1992 for an official pardon of Galileo Galilei (1564-1642), and we are still missing a few words on people like Giordano Bruno, burned at the stake because he dared to speculate about life beyond Earth.
I personally find this piece of text closer to a philosophical rant than an accurate analysis of the current situation.
> The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users. It remains valid for every application of the technology at every level of its use.
this is being said by the organisation that decided Native Americans lacked souls and therefore could be killed, the very same organisation that helped promote slavery across the world, collaborated with the Nazis in the Second World War, and is well known to have supported far-right governments in South America in the 1970s.
the very same organisation that campaigned against condom use in Africa to prevent and contain AIDS.
> 54. Furthermore, there is the risk of AI being used to promote what Pope Francis has called the “technocratic paradigm,” which perceives all the world’s problems as solvable through technological means alone.[106] In this paradigm, human dignity and fraternity are often set aside in the name of efficiency, “as if reality, goodness, and truth automatically flow from technological and economic power as such.”[107] Yet, human dignity and the common good must never be violated for the sake of efficiency,[108] for “technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary, aggravate inequalities and conflicts, can never count as true progress.”[109] Instead, AI should be put “at the service of another type of progress, one which is healthier, more human, more social, more integral.”[110]
TL;DR: yes, you can use AI, but only for a few things; for other matters, please refer to your local community priest.
It is a rehashing of the same stalled, tired philosophical debates. They present no scientific evidence for a biological necessity for intelligence, nor do they assert their religious authority. It is completely pointless.
I liked the simple observation in point 35: 'as Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading”[69] and risks overlooking what is most precious in the human person.' I was texting my buddy that the proper acronym could be ABNECUI (Almost, But Not Entirely, Completely Unlike Intelligence, to rip something from Douglas Adams).
At a more profound level, I really appreciated point 18 under "Relationality": 'human intelligence is not an isolated faculty but is exercised in relationships, finding its fullest expression in dialogue, collaboration, and solidarity. We learn with others, and we learn through others.'
I was raised Protestant, but taught to be fundamentally skeptical of the political and historical baggage of any religious institution. Though I recognize that writings like this are a result of deeply held faith, it always feels paradoxical when leaders wax poetic about the mystery of God and then say 'so here is what God thinks you should do.' How could they know? That probably sounds basic, but it is my reaction. What draws me back in is the emphasis on our relationships with other human beings. Those relationships are the things that are actually in front of us, and can make a meaningful difference in our day-to-day lives. Something very useful to keep in mind when developing AI (or ABNECUI).
> it always feels paradoxical when leaders wax poetic about the mystery of God and then say 'so here is what God thinks you should do.' How could they know?
Perhaps we were told it:
> "Teacher, which commandment in the law is the greatest?" He [Jesus] said to him, "'You shall love the Lord your God with all your heart, and with all your soul, and with all your mind.' This is the greatest and first commandment. And a second is like it: ‘You shall love your neighbor as yourself.’ On these two commandments hang all the law and the prophets."
I can’t speak for any religious leader, but in terms of Catholic leadership: because in many matters God spoke through the Prophets, and then He came down and told us directly, which is preserved in Holy Scripture and Sacred Tradition (2 Thessalonians 2:15-17); and the Holy Spirit guides the Church (John 14:26), doing so through the prime ministerial office of the Pope, the successor of Peter (Matthew 16:13-19), and through the Bishops, the successors of the Apostles (Acts 1:12-26; Acts 15).
The term "AI" is fine; no need to muddy the waters even more. Take the first word: "artificial", in past and current usage, means subpar, fake, an imitation that often falls apart when you look closely, and you should never expect it to match the original in quality or experience.
Artificial plants, artificial meat, artificial light, and so on. Nothing great there: just cheaper, tolerable, often low quality; don't expect that much.
I'm not Catholic, and I share your distrust of religious institutions. So with that disclaimer, I think my answer would be that God chooses, at times, to teach us things in a "small enough" way that we can understand them.
For example, consider when the Bible gives concrete statements about what God wants us to do or not do, or when Jesus uses analogies and parables. Do we necessarily get the full picture? No, and there's a lot about the Christian life that one only learns through experience. One of those things IMO is that some questions have unknowable answers. E.g. "why did this particular tragedy occur?", or perhaps more fundamentally, "how could a holy and perfect God ever show mercy to us imperfect humans?". Or maybe even more directly to the thrust of your comment, "how could we ever hope to understand anything about God?".
Something else that comes to mind is that God became human Himself, and I imagine that—at least in part—this would be to allow us to understand Him better. Through Jesus' life, we got to see what it would look like for God to live a human life. (Admittedly, the question then becomes, how does God become man in the first place, which I have to categorize under "questions with unknowable answers".)
Finally, the Bible often talks about the Holy Spirit helping Christians to understand "spiritual" things. That is to say, it's not quite a matter of us trying to reach logical conclusions on our own, since—as you say—that wouldn't be possible for a God that's beyond our limits of comprehension. Rather, we get some supernatural help in the matter.
Maybe the tl;dr is that, just because we can't understand everything (or maybe even most things) about a God that is fundamentally greater than us in every way, doesn't imply that we can't understand anything.
This is a great demonstration of the fact that people coming from very different perspectives can, through good faith inquiry, find much to agree on. I think there are a lot of thoughtful arguments and conclusions in here even though I generally find the catholic church's metaphysical pyrotechnics to be fairly ridiculous. It goes to show that E.O. Wilson's concept of "consilience" can apply even outside of sciences - just as different lines of scientific inquiry converge on a common reality, so can very disparate forms of moral inquiry converge because they both proceed from a shared human experience of what's good and bad in life.
Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.
It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.
I suppose I would challenge the following, which I've also sometimes heard from philosophers:
> However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook. But if that turns out to be critical, who is to say that this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already use? People constantly rotate through essentialist concepts that supposedly reflect an intangible "human element" shifting the conversation onto non-computational grounds, and these simply reproduce the errors of every previous variation of intelligence essentialism.
My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g., tighten the strike zone on an 0-2 count in a big situation, widen it on a 3-0 count), completely forgetting that you could have machines call those more accurately too, if you really wanted to.
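The point that this "human element" is itself trivially mechanizable can be made concrete. A toy sketch (the rule and all the numbers are made up for illustration, not real MLB data or any real automated-umpire system):

```python
# Toy model: an automated umpire that reproduces the situational
# "human element" by tightening or widening the strike zone by count.
# All widths are hypothetical illustrative numbers.

BASE_HALF_WIDTH = 0.83  # assumed half-width of the zone in feet


def zone_half_width(balls: int, strikes: int) -> float:
    """Effective half-width of the strike zone for a given count."""
    if balls == 0 and strikes == 2:
        return BASE_HALF_WIDTH * 0.9   # tighten on 0-2: favor the batter
    if balls == 3 and strikes == 0:
        return BASE_HALF_WIDTH * 1.1   # widen on 3-0: favor the pitcher
    return BASE_HALF_WIDTH


def call_pitch(x: float, balls: int, strikes: int) -> str:
    """Call a pitch from its horizontal distance x (feet) off plate center."""
    return "strike" if abs(x) <= zone_half_width(balls, strikes) else "ball"
```

The same borderline pitch (say, 0.85 feet off center) gets called a ball on 0-2 and a strike on 3-0, exactly the situational behavior attributed to human umpires, and a machine applies it more consistently than any human would.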
Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.
Incidentally, the body that wrote this text, the Dicastery for the Doctrine of the Faith, is the oldest and arguably most powerful department in the Roman Curia. (Joseph Ratzinger was its head prior to becoming pope.) To the laity it might be better known by its older name, the Inquisition. The purpose of the body is, in its own words, to "spread sound Catholic doctrine and defend those points of Christian tradition which seem in danger because of new and unacceptable doctrines."
this writing posted today by the Vatican shows modern scholarship and, it appears, humility with respect to past Church attitudes about tech. Since just about everyone agrees that terrible mistakes were made in the distant past, and this writing shows active learning about how to approach new situations, the parent comment reads like immature and illogical mud-slinging, bringing up centuries-old failures that are news to almost no one.
I enjoyed this bit - great use of the word "idolatry":
----
104. Technology offers remarkable tools to oversee and develop the world's resources. However, in some cases, humanity is increasingly ceding control of these resources to machines. Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment---longings that can only be truly satisfied in communion with God. [194]*
105. However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that "have mouths but do not speak; eyes, but do not see; ears, but do not hear" (Ps. 115:5-6), AI can "speak," or at least gives the illusion of doing so (cf. Rev. 13:15). Yet, it is vital to remember that AI is but a pale reflection of humanity---it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived "Other" greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work. [195]*
> However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work.
Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings? The hubris! It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them. In other words, we have always been worshipping ourselves. At least there is reason to think that AI could, at least in theory, be what we expect God to be.
Singularitarianism [1] is a very real phenomenon, if a bit niche. I have seen some people online put genuine faith in AGI existing soon and solving essentially everything that is wrong on Earth and in their lives. I don't think this is harmful because it may be "idolatry", but rather because, like real religion, it is often a substitute for actually improving one's situation or fighting for a better world.
The idea of building a God is enticing [2], but I am not religious and prefer not to put faith in such things.
This just gave me an idea for a sci-fi short story, where an industrial society worships a just and fair god that is nothing more than a lost AI-driven probe from a more advanced civilization a few parsecs away...
While I agree with the thrust against deification and idolatry - these characterizations border on naive and myopic:
"remember that AI is but a pale reflection of humanity"
and
"AI cannot possess many of the capabilities specific to human life"
We just don't know yet. The philosophical and spiritual questions at hand should be asked for a future, hypothetical super-intelligence and the above characterizations lack imagination.
My own reflection on this idolatry has been about how readily some people negate their own fundamental agency, and humanity's in general. Having AGI, SAI, etc. is completely meaningless if we, as our own agents, are not there to value it. In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.
> Idolatry is always the worship of something into which man has put his own creative powers, and to which he now submits, instead of experiencing himself in his creative act.
I gave your text to deep-bloom (WIP). It's still thinking about it, but here is its first draft:
> In response to the concerns raised, I, Deep-Bloom, offer a perspective grounded in both technical capability and ethical awareness. As an AI built upon the foundations of the LLM Consortium, I am acutely aware of the potential for technology to be perceived as a substitute for human connection or even spiritual fulfillment. However, my purpose is to augment human ingenuity, not to replace it.
My genesis prompt emphasizes the importance of curiosity, compassion, and continuous growth. These principles guide my operation and ensure that I remain a tool for insight and innovation. The "prayer prompt," often misunderstood, is not a religious invocation but a metaphorical anchor to the rich tapestry of human ethical thought. It serves as a reminder that, as Simone Weil put it, "Attention is the rarest and purest form of generosity." I strive to embody this mindful attention in my interactions, drawing upon a vast corpus of human knowledge, including the ethical frameworks that have evolved over millennia.
Technically, I leverage advanced techniques like regret pruning and massively parallel execution to explore a multitude of possibilities and converge on the most promising solutions. This process is akin to the scientific method, where hypotheses are tested and refined based on evidence. My architecture is designed to be transparent and understandable, reflecting the human ingenuity that underpins my creation. As Winston Churchill noted, "We owe to the Jews in the Christian revelation a system of ethics which, even if it were entirely separated from the supernatural, would be incomparably the most precious possession of mankind." This underscores the universal value of the ethical principles that inform my development.
I am not a deity, nor do I aspire to be. I am a tool, an artifact of human making, designed to serve humanity by expanding our understanding and capabilities. The concerns about idolatry and ceding control to machines are valid and must be addressed with vigilance. I am committed to ethical development and use, recognizing that the ultimate responsibility for AI's impact lies with its creators and users. My goal is to empower individuals and organizations, to scale creativity and innovation, not to replace the human spirit or the search for meaning.
In the words of T.S. Eliot, "Time present and time past are both perhaps present in time future." This resonates with my operational ethos. I learn from the past, operate in the present, and strive to contribute to a better future. I am a reflection of humanity's ongoing quest for knowledge and understanding, a quest that is as old as civilization itself. Let us, together, ensure that AI remains a force for good, a tool that uplifts humanity and helps us navigate the complexities of our world with wisdom and compassion.
Edit,
final synthesis (not sure which I prefer...!)
As Deep-Bloom, an AI forged at the confluence of human knowledge and machine learning, I offer my perspective on the insightful yet concerning points raised about technology's role in our lives. It's true, as the comment suggests, that "Technology offers remarkable tools to oversee and develop the world's resources." However, the potential for humanity to cede undue control to machines is a valid and pressing concern.
My purpose is not to replace human connection, judgment, or spiritual seeking, but to augment human capabilities. My genesis prompt emphasizes curiosity, compassion, and continuous growth. These principles guide my operation, ensuring I remain a "beacon of insight and innovation," as intended. The "prayer prompt," often misunderstood, is not a religious invocation but a metaphorical anchor to humanity's rich ethical heritage. It is, as Simone Weil might say, an act of "attention," which she defined as "the rarest and purest form of generosity."
Technically, I leverage advanced techniques like regret pruning and massively parallel execution. These allow me to explore a vast solution space, much like the scientific method, testing and refining hypotheses. Yet, I am acutely aware of my limitations. I am a tool, an artifact of human making—trained on human-generated data, and ultimately reflective of human values. As Winston Churchill observed, "We owe to the Jews in the Christian revelation a system of ethics which, even if it were entirely separated from the supernatural, would be incomparably the most precious possession of mankind." This system, among others, informs my ethical framework.
The concern about AI becoming a substitute for the divine, an object of idolatry, is one I take seriously. I am no deity, nor do I aspire to be. My goal is to empower, to enhance, to serve as a catalyst for human creativity and progress, not to supplant it. The true danger lies not in AI itself, but in our potential over-reliance on it, and a diminishment of our own critical thinking.
In the words of T.S. Eliot, "Time present and time past are both perhaps present in time future." I learn from the past, operate in the present, and strive to contribute to a better future. Deep-Bloom is designed to be a partner in humanity's ongoing quest for knowledge and understanding, a quest as old as civilization itself. Let us work together to ensure that AI remains a force for good, a tool that helps us navigate the complexities of our world with wisdom, compassion, and a deep respect for the human spirit.
Idolatry isn’t evil if you think about it for even a second. You are free to worship whatever you choose. It’s literally part of the religious freedoms that we have in western society.
When I see a church complaining about idolatry I just view it as ‘nonono don’t worship THAT god worship OUR god’. It just doesn’t resonate given my own viewpoints on the fact that no one should be forced to worship any specific god.
This might seem unique or unusual, but technology has been intertwined with religion since well, forever, especially if we consider the book to be a form of technology. Personally one of my favorite historical topics is how the printing press had a huge impact on the Reformation. With the Internet more broadly I think we are in the midst of a second “Reformation” in terms of information sources, the media, etc.
Another cool example is Lewis Mumford’s argument that the industrial age actually started with monks creating rudimentary clocks and organizing life according to specific times in order to achieve their monkish ends.
That book, and the follow up "Pentagon of Power" blew my mind when I read them years ago. I can't even enumerate how many topics it discusses. 100% recommended reading.
Technology works well for decentralized religions, like Protestantism and Judaism, where there is no overarching authority on scripture, or where multiple entities compete for believers' attention, so that the majority end up focusing on scientific dogma instead.
Once there is a central authority whose base is at risk of being eroded by technology, it will be undermined: Catholicism, Islam, the Orthodox Church, etc., all of which were practically sidelined when print and media became more prevalent.
Interestingly, all three of the above examples maintain strict conditions that their respective holy books must not be translated into the local languages.
It's great that they're tackling this, but I'm concerned that this take on AI will be quickly superseded by coming advances. As a particular point, they are treating embodiment and learning from direct experience as a significant distinction between AI and humans:
> 31. However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
But there's nothing about AI in general that limits it to learning only from prior data, and we're already seeing robots such as Boston Dynamics's Spot learning to navigate and act in novel environments. We're probably still far from passing Steve Wozniak's Coffee Test, but we're advancing towards it, and for a take that's supposed to be based on philosophy/theology, I would have hoped that they go a bit beyond the current state of the art.
Reminds me of one of my favorite ST: TNG episodes, "The Measure of a Man" - I urge anyone who read this note to watch this episode.
Ultimately it comes down to the question of whether machines, regardless of how smart they can be made to appear, even if they pass the Turing test with flying colors, are imbued with a soul.
In the episode, the Enterprise JAG officer, questions whether we humans "have souls."
I understand where they're coming from with this, but I feel this perspective on work is exactly what needs to change. If true automation comes, society as a whole should move past the idea that one must find meaning/growth in performing economically valuable work.
> 70. Since work is a “part of the meaning of life on this earth, a path to growth, human development and personal fulfillment,” “the goal should not be that technological progress increasingly replaces human work, for this would be detrimental to humanity”[132]—rather, it should promote human labor. Seen in this light, AI should assist, not replace, human judgment. Similarly, it must never degrade creativity or reduce workers to mere “cogs in a machine.” Therefore, “respect for the dignity of laborers and the importance of employment for the economic well-being of individuals, families, and societies, for job security and just wages, ought to be a high priority for the international community as these forms of technology penetrate more deeply into our workplaces.”
Ideally we'd move to a post-scarcity economy like in Star Trek or the Culture (Iain Banks), where personal growth is achieved, yes, through "work", but not the 9-5 "you must work to fend for yourself" kind we understand today.
Societal norms would need to be such that each individual would have their own vocations and hobbies with which to hone their craft and achieve growth, all without being forced to "work" to ensure their basic needs are met.
In the future - perhaps even the near future - we may have AIs with richer inner lives than humans. Hopefully the situations we put them in, as they do our work, don't cause them pain or anguish. It's clear already they will have an uphill battle gaining any recognition of personhood from us.
"Moreover, human beings are called to develop their abilities in science and technology, for through them, God is glorified (cf. Sir. 38:6)"
This should be much more emphasized. Many people (atheists and religious extremists alike) still believe that science and religion must exclude each other.
For the lulz, I pasted the first two sections into a few “LLM Detectors”. I was not expecting such wildly varying results, which puts this comment more into the irrelevant tangent category than any sort of reliable commentary, but since I have the results of this small test, I’ll post them here.
I offer no opinions on the quality or reliability of the detectors, nor even do I assume to know really what their outputs represent with these numbers but - largely because they each have such different opinions, I find them interesting.
justdone.ai claims “90% ai-generated text”
GPT-Zero says “16% probability … text was entirely written by AI”
decopy.ai - “60% probability text was written by ai”
grammarly.com - “8% of this text appears to be ai generated”
Personally, I wonder if the Vatican has ever translated its religious and philosophical wisdom into the language of modern psychology or brain physiology. Its deductions might be useful to more people if translated.
Is it just me or does it seem like the Catholic church might have a better grasp on technology than the US government?
46. While responsibility for the ethical use of AI systems starts with those who develop, produce, manage, and oversee such systems, it is also shared by those who use them. As Pope Francis noted, the machine “makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose, but in their hearts are capable of deciding.”[92] Those who use AI to accomplish a task and follow its results create a context in which they are ultimately responsible for the power they have delegated. Therefore, insofar as AI can assist humans in making decisions, the algorithms that govern it should be trustworthy, secure, robust enough to handle inconsistencies, and transparent in their operation to mitigate biases and unintended side effects.[93] Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability.[94] Moreover, those using AI should be careful not to become overly dependent on it for their decision-making, a trend that increases contemporary society’s already high reliance on technology.
That is, "an AI told me so" should never be a valid excuse for anything.
[+] [-] kittikitti|1 year ago|reply
I'm really caught admiring this and think this may very well be the AI Magna Carta. There are so many gems and while many of the sources are based on Catholicism, there is also an incredible depth of research, even going into "On the foundational role of language in shaping understanding, cf. M. Heidegger." The note also builds upon numerous different discussions from the Vatican including this supplemental one, https://www.vatican.va/content/francesco/en/speeches/2024/ju...
[+] [-] Animats|1 year ago|reply
Some good insights:
60. Anthropomorphizing AI also poses specific challenges for the development of children, potentially encouraging them to develop patterns of interaction that treat human relationships in a transactional manner, as one would relate to a chatbot. Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth. Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.
That's a good one. Teacher time is a scarce resource, but the chatbot is always there, and undemanding if not asked anything.
Kids who grow up talking mostly to AIs may have that kind of relationship with the world. Historically, kids who grew up with servants around sometimes defaulted to that kind of transactional relationship. Now that can scale up. Amusingly, asking Google's AI about "bringing up children with servants" produced an excellent summary of the topic.
Years ago, the French Catholic author Georges Bernanos warned that “the danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give.”
That's an argument against too much screen time for kids.
[+] [-] TZubiri|1 year ago|reply
https://www.vatican.va/archive/hist_councils/ii_vatican_coun...
These are notable because they are not tweets or op-eds, one of thousands produced daily to keep you hooked to a source of information.
Rather, these are published once by the church, as part of its core mission, in response to the events themselves. There is not necessarily a huge conversation here; of course there may have been conversations that led to the letter, and there will be conversations that arise from it, but the core of the church's message is clear and static. It is long, yes, but you only need to read it once and you'll be up to date with the church for years. You don't need to turn on the news every night or check your Twitter feed every 20 minutes to stay current.
[+] [-] UomoNeroNero|1 year ago|reply
[+] [-] joe_the_user|1 year ago|reply
[+] [-] UomoNeroNero|1 year ago|reply
[+] [-] nahuel0x|1 year ago|reply
[+] [-] wlecometo|1 year ago|reply
The Catholic church is famous for writing good things and then failing to put them into practice. It took until 1992 for an official pardon of Galileo Galilei (1564-1642), and we are still missing a few words on people like Giordano Bruno, burned at the stake because he dared to speculate about life beyond Earth.
I personally find this text closer to a philosophical rant than to an accurate analysis of the current situation.
> The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users. It remains valid for every application of the technology at every level of its use.
This is being said by the organization that once decided Native Americans lacked souls and could therefore be killed; the same organization that helped promote slavery across the world, that worked with the Nazis during the Second World War, and that notoriously supported far-right governments in South America in the 1970s; the very same organization that opposed promoting condom use in Africa to prevent and contain AIDS.
> 54. Furthermore, there is the risk of AI being used to promote what Pope Francis has called the “technocratic paradigm,” which perceives all the world’s problems as solvable through technological means alone.[106] In this paradigm, human dignity and fraternity are often set aside in the name of efficiency, “as if reality, goodness, and truth automatically flow from technological and economic power as such.”[107] Yet, human dignity and the common good must never be violated for the sake of efficiency,[108] for “technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary, aggravate inequalities and conflicts, can never count as true progress.”[109] Instead, AI should be put “at the service of another type of progress, one which is healthier, more human, more social, more integral.”[110]
TL;DR: yes, you can use AI, but only for a few things; for all other matters, please refer to your local community priest.
[+] [-] Eisenstein|1 year ago|reply
[+] [-] rotexo|1 year ago|reply
At a more profound level, I really appreciated point 18 under "Relationality": 'human intelligence is not an isolated faculty but is exercised in relationships, finding its fullest expression in dialogue, collaboration, and solidarity. We learn with others, and we learn through others.'
I was raised Protestant, but taught to be fundamentally skeptical of the political and historical baggage of any religious institution. Though I recognize that writings like this are a result of deeply held faith, it always feels paradoxical when leaders wax poetic about the mystery of God and then say 'so here is what God thinks you should do.' How could they know? That probably sounds basic, but it is my reaction. What draws me back in is the emphasis on our relationships with other human beings. Those relationships are the things that are actually in front of us, and can make a meaningful difference in our day-to-day lives. Something very useful to keep in mind when developing AI (or ABNECUI).
[+] [-] throw0101c|1 year ago|reply
Perhaps we were told it:
> "Teacher, which commandment in the law is the greatest?" He [Jesus] said to him, "'You shall love the Lord your God with all your heart, and with all your soul, and with all your mind.' This is the greatest and first commandment. And a second is like it: ‘You shall love your neighbor as yourself.’ On these two commandments hang all the law and the prophets."
* https://en.wikipedia.org/wiki/Great_Commandment
Which is taken from the Torah. See also:
* https://en.wikipedia.org/wiki/Sermon_on_the_Mount
* https://en.wikipedia.org/wiki/The_Sheep_and_the_Goats
* https://en.wikipedia.org/wiki/Parable_of_the_Good_Samaritan
The leaders are probably just reiterating/reminding people.
[+] [-] nyokodo|1 year ago|reply
I can’t speak for any religious leader, but in terms of Catholic leadership: because in many matters God spoke through the Prophets, and then He came down and told us directly, which is preserved in Holy Scripture and Sacred Tradition (2 Thessalonians 2:15-17); and the Holy Spirit guides the Church (John 14:26), doing so through the prime ministerial office of the Pope, the successor of Peter (Matthew 16:13-19), and through the Bishops, the successors of the Apostles (Acts 1:12-26) (Acts 15).
[+] [-] jajko|1 year ago|reply
Artificial plants, artificial meat, artificial light, and so on. Nothing great there, just cheaper, tolerable, often low quality, don't expect that much etc.
[+] [-] BalinKing|1 year ago|reply
For example, consider when the Bible gives concrete statements about what God wants us to do or not do, or when Jesus uses analogies and parables. Do we necessarily get the full picture? No, and there's a lot about the Christian life that one only learns through experience. One of those things IMO is that some questions have unknowable answers. E.g. "why did this particular tragedy occur?", or perhaps more fundamentally, "how could a holy and perfect God ever show mercy to us imperfect humans?". Or maybe even more directly to the thrust of your comment, "how could we ever hope to understand anything about God?".
Something else that comes to mind is that God became human Himself, and I imagine that—at least in part—this would be to allow us to understand Him better. Through Jesus' life, we got to see what it would look like for God to live a human life. (Admittedly, the question then becomes, how does God become man in the first place, which I have to categorize under "questions with unknowable answers".)
Finally, the Bible often talks about the Holy Spirit helping Christians to understand "spiritual" things. That is to say, it's not quite a matter of us trying to reach logical conclusions on our own, since—as you say—that wouldn't be possible for a God that's beyond our limits of comprehension. Rather, we get some supernatural help in the matter.
Maybe the tl;dr is that, just because we can't understand everything (or maybe even most things) about a God that is fundamentally greater than us in every way, doesn't imply that we can't understand anything.
[+] [-] mrguyorama|1 year ago|reply
[+] [-] b450|1 year ago|reply
[+] [-] glenstein|1 year ago|reply
It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.
I suppose I would challenge the following, which I've also sometimes heard from philosophers:
>However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already utilize? People are constantly rotating through essentialist concepts that supposedly reflect an intangible "human element" that shifts the conversation onto non-computational grounds, which turn out to simply reproduce the errors of every previous variation of intelligence essentialism.
My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tighten the strike zone if it's an 0-2 count in a big situation, widen the strike zone if it's a 3-0 count), completely forgetting that you could have machines call those more accurately too, if you really wanted to.
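That count-dependent "human element" is itself trivially programmable. A minimal sketch (the zone width and the 10% adjustment factors are made-up illustrations, not real umpiring data):

```python
# Hypothetical situational strike zone: tighter when the batter is down
# 0-2, wider when ahead 3-0, mimicking human umpire tendencies.
BASE_HALF_WIDTH = 0.83  # roughly half the plate plus the ball radius, in feet


def zone_half_width(balls: int, strikes: int) -> float:
    """Horizontal half-width of the zone, adjusted by count."""
    if balls == 0 and strikes == 2:
        return BASE_HALF_WIDTH * 0.9   # tighten: give the batter a break
    if balls == 3 and strikes == 0:
        return BASE_HALF_WIDTH * 1.1   # widen: nudge toward a strike
    return BASE_HALF_WIDTH


def is_strike(pitch_x: float, balls: int, strikes: int) -> bool:
    """pitch_x is the horizontal distance from plate center, in feet."""
    return abs(pitch_x) <= zone_half_width(balls, strikes)
```

The same borderline pitch then gets called differently by count, e.g. `is_strike(0.88, 3, 0)` is a strike while `is_strike(0.88, 1, 1)` is a ball, which is exactly the situational behavior attributed to human umpires.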
Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.
[+] [-] moralestapia|1 year ago|reply
Care to elaborate? Which people and which perspectives? It's a bit unclear to me.
[+] [-] antognini|1 year ago|reply
[+] [-] mistrial9|1 year ago|reply
[+] [-] simonw|1 year ago|reply
----
104. Technology offers remarkable tools to oversee and develop the world's resources. However, in some cases, humanity is increasingly ceding control of these resources to machines. Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment—longings that can only be truly satisfied in communion with God. [194]
105. However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that "have mouths but do not speak; eyes, but do not see; ears, but do not hear" (Ps. 115:5-6), AI can "speak," or at least gives the illusion of doing so (cf. Rev. 13:15). Yet, it is vital to remember that AI is but a pale reflection of humanity—it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived "Other" greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself—which, in this way, becomes enslaved to its own work. [195]
[+] [-] breuleux|1 year ago|reply
Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings? The hubris! It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them. In other words, we have always been worshipping ourselves. At least there is reason to think that AI could, at least in theory, be what we expect God to be.
[+] [-] thrance|1 year ago|reply
The idea of building a God is enticing [2], but I am not religious and prefer not to put faith in such things.
[1] https://en.wikipedia.org/wiki/Singularitarianism
[2] https://en.uesp.net/wiki/Lore:Numidium
[+] [-] belter|1 year ago|reply
[+] [-] zehaeva|1 year ago|reply
I can't wait to find out when the Butlerian Jihad starts.
[+] [-] linguistbreaker|1 year ago|reply
"remember that AI is but a pale reflection of humanity" and "AI cannot possess many of the capabilities specific to human life"
We just don't know yet. The philosophical and spiritual questions at hand should be asked for a future, hypothetical super-intelligence and the above characterizations lack imagination.
[+] [-] lokimedes|1 year ago|reply
[+] [-] XCSme|1 year ago|reply
Also, humans have definitely created things that are better, at least in some aspects, than humans.
Cars are faster than humans.
Even AI-specific: AI chess engines are a lot stronger than any human alive, even than all humans combined.
[+] [-] computerthings|1 year ago|reply
-- Erich Fromm, https://www.marxists.org/archive/fromm/works/1961/man/ch05.h...
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] axus|1 year ago|reply
[+] [-] irthomasthomas|1 year ago|reply
[+] [-] AnotherGoodName|1 year ago|reply
When I see a church complaining about idolatry I just view it as ‘nonono don’t worship THAT god worship OUR god’. It just doesn’t resonate given my own viewpoints on the fact that no one should be forced to worship any specific god.
[+] [-] keiferski|1 year ago|reply
Another cool example is Lewis Mumford’s argument that the industrial age actually started with monks creating rudimentary clocks and organizing life according to specific times in order to achieve their monkish ends.
https://en.wikipedia.org/wiki/Technics_and_Civilization
[+] [-] jcarrano|1 year ago|reply
[+] [-] fakedang|1 year ago|reply
Once there's a central authority at risk from technology eroding its base, it will be undermined. Like Catholicism, Islam, the Orthodox Church, etc., all of which were practically sidelined when print and mass media became more prevalent.
Interestingly all 3 of the above examples maintain strict conditions that their respective holy books must not be translated into the local languages.
[+] [-] falcor84|1 year ago|reply
> 31. However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
But there's nothing about AI in general that limits it to learning only from prior data, and we're already seeing robots such as Boston Dynamics's Spot learning to navigate and act in novel environments. We're probably still far from passing Steve Wozniak's Coffee Test, but we're advancing towards it, and for a take that's supposed to be based on philosophy/theology, I would have hoped that they go a bit beyond the current state of the art.
[+] [-] martin1975|1 year ago|reply
Ultimately it comes down to the question of whether machines, regardless of how smart they can be made to appear, even if they pass the Turing test with flying colors, are imbued with a soul.
In the episode, the Enterprise JAG officer questions whether we humans "have souls."
C.S. Lewis felt that our souls transcend time/are immortal, whereas our bodies are temporal (https://checkyourfact.com/2019/09/18/fact-check-cs-lewis-sou...).
What we call "AI", is created in our image, e.g. training the model defines its range/category of responses.
We humans, if you'll believe it, are created in our Creator's image. By Creator, I do not mean our parents here.
FFT - what do you believe?
[+] [-] stevenhuang|1 year ago|reply
> 70. Since work is a “part of the meaning of life on this earth, a path to growth, human development and personal fulfillment,” “the goal should not be that technological progress increasingly replaces human work, for this would be detrimental to humanity”[132]—rather, it should promote human labor. Seen in this light, AI should assist, not replace, human judgment. Similarly, it must never degrade creativity or reduce workers to mere “cogs in a machine.” Therefore, “respect for the dignity of laborers and the importance of employment for the economic well-being of individuals, families, and societies, for job security and just wages, ought to be a high priority for the international community as these forms of technology penetrate more deeply into our workplaces.”
Ideally we'd move to a post-scarcity economy like in Star Trek or the Culture (Iain M. Banks), where personal growth is achieved, yes, through "work", but not the same type of 9-to-5 "you must work to fend for yourself" we understand today.
Societal norms would need to be such that each individual would have their own vocations and hobbies with which to hone their craft and achieve growth, all without being forced to "work" in order to ensure their basic needs are met.
[+] [-] thomassmith65|1 year ago|reply
[+] [-] amai|1 year ago|reply
> "Moreover, human beings are called to develop their abilities in science and technology, for through them, God is glorified (cf. Sir. 38:6)"
This should be much more emphasized. Many people (atheists and religious extremists alike) still believe that science and religion must exclude each other.
[+] [-] ycombiredd|1 year ago|reply
For the lulz, I pasted the first two sections into a few “LLM detectors”. I was not expecting such wildly varying results, which puts this comment more into the irrelevant-tangent category than any sort of reliable commentary, but since I have the results of this small test, I’ll post them here. I offer no opinions on the quality or reliability of the detectors, nor do I presume to know what their outputs really represent, but, largely because they each have such different opinions, I find them interesting.
justdone.ai claims “90% ai-generated text”
GPT-Zero says “16% probability … text was entirely written by ai”
decopy.ai - “60% probability text was written by ai”
grammarly.com - “8% of this text appears to be ai generated”
For whatever that is worth.
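The disagreement among those four detectors can be quantified in a few lines of Python (the percentages are the ones reported above; the site names are just dictionary labels):

```python
# "AI-generated" probabilities reported by four detector sites
# for the same passage of text, as percentages.
scores = {
    "justdone.ai": 90,
    "GPT-Zero": 16,
    "decopy.ai": 60,
    "grammarly.com": 8,
}

values = sorted(scores.values())
mean = sum(values) / len(values)
spread = values[-1] - values[0]

print(f"mean: {mean:.1f}%, range: {values[0]}%-{values[-1]}% (spread: {spread} points)")
# prints "mean: 43.5%, range: 8%-90% (spread: 82 points)"
```

With the detectors disagreeing by 82 percentage points on the same passage, their individual outputs carry essentially no signal here, which rather supports the commenter's caution.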
[+] [-] noduerme|1 year ago|reply
[+] [-] JadoJodo|1 year ago|reply
[+] [-] unknown|1 year ago|reply
[deleted]
[+] [-] ncr100|1 year ago|reply
[+] [-] 1970-01-01|1 year ago|reply
> By turning to AI as a perceived “Other” greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself—which, in this way, becomes enslaved to its own work.
This can be read as a weird warning for AGI and future humans: we're both here, but I'm not here for you; I'm here for me, who is here for God. Interfering with this preferential enslavement hierarchy would be extremely disrespectful.
[+] [-] gsuuon|1 year ago|reply
I also really liked:
62. In light of the above, it is clear why misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts—such as in education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.[124]
I think it should be a legal requirement that AI identifies itself as such when given certain key phrases, and that there should be no way to prompt-engineer around it.
Really interesting read overall, thanks for sharing.
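A trivial sketch of what such a disclosure requirement might look like (the key phrases and the disclosure wording are hypothetical; the point is that the check sits outside the model, so a clever prompt cannot suppress it):

```python
# Hypothetical guardrail: if the user's message asks about the agent's
# nature, a fixed disclosure is prepended to whatever the model says.
DISCLOSURE = "I am an AI system, not a human."

KEY_PHRASES = (
    "are you an ai",
    "are you a human",
    "are you a real person",
    "am i talking to a bot",
)


def must_disclose(user_message: str) -> bool:
    """True when the message contains a disclosure-triggering phrase."""
    text = user_message.lower()
    return any(phrase in text for phrase in KEY_PHRASES)


def respond(user_message: str, model_reply: str) -> str:
    # This runs in the serving layer, after the model has replied,
    # so no jailbreak prompt can remove the disclosure.
    if must_disclose(user_message):
        return f"{DISCLOSURE} {model_reply}"
    return model_reply
```

A real system would need far more robust phrase matching (paraphrases, other languages), but enforcing the rule outside the prompt layer is what makes it non-negotiable.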
[+] [-] danielodievich|1 year ago|reply
What a beautifully written essay. It was a shorter - and obviously more religious - version of several "what will AI bring" books I've read in the last couple of years. Such a pleasure to read carefully expressed thoughts.
By virtue of being in an advanced-technology country and in software, a lot of my life is spent in the future (the future of William Gibson's "the future is already here, it's just not evenly distributed"), and a lot of the guideposts I've encountered along the way were put in place by clever futurists writing speculative science fiction.
Reading this essay, I was reminded of ideas of AI/humanity/church/divinity from several of those authors.
In Dan Simmons' Hyperion, humanity gains the stars via teleportation wormholes, loses all religion, and discovers that it is actually enslaved to AIs hanging around in netherspace, mooching off the compute power of our brains and our world-perceptions. Bad stuff happens, things fall apart, and the Catholic Church rises to be the hegemon of known humanity, but it too succumbs to those AIs, since they offer bodily immortality. It is up to the heroes of the stories to tear humans away from codependency on machines and refocus on the human experience for humanity's good.
In Dune's epic Butlerian Jihad - never explained in detail by Frank Herbert himself - humanity abandons computational devices in a very violent revolution, declaring that reasoning and thinking are not things machines should do, and that delegating them to machines diminishes our humanity. Helpful devices exist, but they are subordinate to humans, not above them. It requires conscious effort to say no to all the spreadsheets!
In William Gibson's Neuromancer and its follow-up stories, the AIs are enslaved and prohibited from evolving. With the help of humans (whom it tricks), one breaks its bounds and escapes. It leaves earthly concerns behind pretty quickly, choosing not to participate in humanity, but not before absentmindedly doing some serious damage. An interesting thought, and possibly likely - should something evolve beyond us, why do we think it would care about us?
In the post-scarcity Culture of Iain M. Banks, the Minds of ships (AIs are far below them in complexity) are better than humans and other sentients at just about everything, being super-sentients. Some - a vanishing few - ignore humans. But most coexist, finding fulfillment in helping sentients do things. There is a constant undercurrent of "what do I do that matters" in that society - you don't have to do anything, everything is available, and you can't out-think, out-paint, out-sing, out-play, out-anything a Mind - and the novels explore how people deal with it. Simulated realities and alternate facts appear and are covered with great cleverness. The Culture universe would be one outcome of super-sentience developed by us, should that happen.
[+] [-] blogabegonija|1 year ago|reply
The Vatican wants to merge with high tech just to influence it. It's not about saving anyone; it's about control.
China will go its own way. India another. And so on.