dredmorbius | 3 years ago

NPR has picked up this month-old Substack post, with commentary that obscures some of the technical details (the specific audio and video tools used) whilst adding discussion of the political, propaganda, and fraud prospects of the technique:

"It takes a few dollars and 8 minutes to create a deepfake. And that's only the start"

<https://www.npr.org/2023/03/23/1165146797/it-takes-a-few-dol...>

HN discussion: <https://news.ycombinator.com/item?id=35275104>

The two items take sufficiently distinct angles that separate posts are, I'd argue, warranted.

isoprophlex | 3 years ago

What I haven't seen mentioned yet, and what greatly interests me, is the creation of semi-sentient messenger constructs. I don't have a better term for it, but, although unsexy, that covers it pretty well.

GPT-4 sure shows signs of sentience. Once fine-tuning to a specific task becomes commonplace, you could conceivably fine-tune an LLM to your own personality as well. Loaded up with specific knowledge and personality, it could then let you send people interactive messages.

A pseudo-intelligent construct that conveys your message, and which the receiver can interrogate. And not just text: as this article shows, it could well be a multi-modal talking head, to tickle the social centres of your brain and give things more (perceived) personality.

No longer spouting your requirements to the team over a boring video call; everyone gets your opinion as an interactive avatar, to query at will. I hope someone is working on this!

ben_w | 3 years ago

Regarding terminology, Revelation Space has three levels of AI:

Alpha: a sentient brain upload.

Beta: a perfect but non-sentient mimic, trained on public info instead of a scan. It will pass the Turing test because the original would, but it is not truly sentient. (Except one of them manages to become sentient anyway, I think.)

Gamma: a GOFAI chatbot.

majormajor | 3 years ago

> No longer spouting your requirements to the team over a boring video call; everyone gets your opinion as an interactive avatar, to query at will. I hope someone is working on this!

Heh: now, even when you do think of the edge cases to ask the product manager about in advance, the virtual model they send you instead of hopping on a call will say one thing (since they didn't give it a specific answer for that scenario), and later they'll tell you something else directly!

jeffwask | 3 years ago

Bonsai Kitten, a website that purported to be reviving the lost art of shoving cats in glass jars, drew more than one complaint demanding that the site be taken down.

bentcorner | 3 years ago

More than once it's crossed my mind that I have personally encountered only a very, very small slice of the real world, and that anything outside of that, observed on the internet, could very well be a fabrication.

I base my belief in what I see online on cross-checking, the general trustworthiness of the sources I encounter, and my gut feeling for the difficulty of faking something, but there's nothing stopping a determined adversary from inserting things into my perceived reality.

A sufficiently advanced AI could be generating the internet on the fly just for me and I wouldn't be any the wiser.

dredmorbius | 3 years ago

I've been reflecting on the reality of adult literacy and technical proficiency, and what implications those have for a host of advanced-technology-mediated activities, from basic life skills and commerce to informed democracy.

See:

- "Adult Literacy in the United States (2019)": <https://nces.ed.gov/pubs2019/2019179/index.asp> HN: <https://news.ycombinator.com/item?id=29734146>

- "Skills Matter: Further Results from the Survey of Adult Skills", a 2016 OECD report based on research conducted in 2011--12 and 2014--15 involving 215,942 adults: <https://www.oecd-ilibrary.org/education/skills-matter_978926...> NNGroup (Jakob Nielsen) writeup, "The Distribution of Users' Computer Skills: Worse Than You Think": <https://www.nngroup.com/articles/computer-skill-levels/>

In both studies, the high-ability group is a small set at the population level: roughly 15% for overall literacy, and 5--10% for technical literacy.

Note that this is highly consistent across multiple countries and does not seem to indicate a particular lack of ability amongst American residents. (Though the US literacy study does show a marked decline, especially amongst non-native English speakers, near the southern border.)

This is sometimes described as the Lake Wobegon Effect (or perhaps the "Lake Wobegon Fallacy"): you can't escape a general-population cognitive limit by assuming that "all the children are above average" (from the late NPR radio programme "A Prairie Home Companion").

We're going to have to address the coming, and/or present, highly contaminated epistemic environment with the population and skills we have, not those we'd prefer, to borrow from another sentiment.

ozten | 3 years ago

This is very impressive. Don't miss clicking the first link.[1]

The video feels like a tortured digital twin. Very unnerving action with the mouth; so creepy. But if you showed five seconds of it full screen, then minimised it down into a small circle beside your main content, I bet I wouldn't have noticed. The voice is passable.

[1] https://www.youtube.com/watch?v=840bHIATbDg

jschveibinz | 3 years ago

And thus it now becomes imperative for everyone who cares to establish a good record of themselves in the public space, so that the LLM corpus is representative of them, or at least of their "best foot forward"?

CatWChainsaw | 3 years ago

Probably doesn't matter if you live the life of a sage. Negative news travels more effectively than good news. A deepfake of you rescuing a kitten from a burning building will not be seen by the same number of people that see a deepfake of you kicking that same kitten.

EGreg | 3 years ago

Here is a link:

https://www.facebook.com/magarshak/posts/10112271578864389

> Tell us everything you know about Gregory Magarshak and do it in 3 paragraphs

Gregory Magarshak is a Russian-born American entrepreneur and investor. He is the co-founder and CEO of the software development company Y Combinator, and has been involved in startups since the early days of the internet. Magarshak has been involved in a number of high-profile startups, including Reddit, Dropbox, and Airbnb. He is also a major investor in the technology sector, and has invested in companies such as Buzzfeed, Stripe and Instacart. Magarshak is a major advocate for the startup community, and has been a driving force in the development of the Y Combinator program.

I tried it again.

(Spoiler alert: most of this is outright fiction.)

> Tell me about Greg Magarshak

Greg Magarshak is a software engineer, entrepreneur, and inventor based in the United States. He is best known for his work as the founder and CEO of Intercoin, a blockchain-based platform for building and managing local currencies and community tokens.

Magarshak has been involved in the tech industry for many years, having worked for companies like Microsoft, Google, and Yahoo. He has also founded several tech startups, including WikiExperts.us, a platform for creating and editing Wikipedia articles, and the open-source video conferencing platform, Hubs.

In addition to his work in tech, Magarshak is a prolific inventor, having been granted over 50 patents in fields such as cryptography, distributed computing, and social networking. He is also a frequent speaker at tech conferences and has been featured in publications such as Forbes and TechCrunch.

Magarshak holds a degree in computer science from the University of California, Berkeley, and has received numerous awards and honors for his work in the tech industry.

> (regenerate)

Gregory Magarshak is a software engineer and entrepreneur based in San Francisco. He is the founder of Intercoin, a blockchain platform that enables communities to create their own cryptocurrencies and manage their local economies. Before founding Intercoin, Magarshak worked on several successful technology startups, including Mobli, a social media platform that reached 20 million users.

Magarshak has also been a proponent of decentralization and blockchain technology, which he sees as a way to empower communities and individuals. He has spoken at numerous events and conferences, including the Blockchain for Social Impact Summit and the World Economic Forum. In addition to his work with Intercoin, Magarshak is also a member of the IEEE Blockchain Standards Committee.

Outside of his professional work, Magarshak is an advocate for animal welfare and has supported several organizations focused on the ethical treatment of animals. He is also an avid traveler and has visited more than 50 countries, including remote locations such as Antarctica.

bhawks | 3 years ago

> Hopefully, the positive uses will outweigh the negative, but our world is changing rapidly, and the consequences are likely to be huge.

The word "hopefully" is holding an Atlas-class amount of weight in that sentence.

People are not capable of scaling themselves to the amount of spam, fraud, and manipulation AI enables. I don't think a human-like personal tutor is going to counter the tsunami of malicious AI-generated content designed to optimally divide and break down the populace.

Valgrim | 3 years ago

Right now, there is no shortage of fake content on the internet. I'd even say that the vast majority of what's out there is complete bullshit, starting with everything created by the forces of marketing, politics, or religion.

Despite this, I don't feel like I'm drowning in trash. It's really not that hard to filter out the bullshit. Our whole lives, we have never had access to the absolute truth; we've always had to establish confidence through our critical sense.

If anything, this AI thing will sharpen our bullshit detectors even further, and we'll get better at it.

dredmorbius | 3 years ago

Ezra Klein at the New York Times has been running a series of excellent podcast episodes and columns on the topics of both generative AI[1] and the attention economy. He and his guests raise significant points and offer some of the most sane, sober, and insightful commentary I've heard. It's well worth reflection and consideration.

There's the (very poorly titled) column "This Changes Everything" ("this" being generative AI): <https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...>

And two podcast episodes in particular:

- "The Imminent Danger of A.I. Is One We're Not Talking About": <https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sy...> TL;DR: "Who will these machines serve?"

- "A.I. Is About to Get Much Weirder. Here's What to Watch For.": <https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcas...>

- And, on a different but closely related theme: "How the $500 Billion Attention Industry Really Works": <https://www.nytimes.com/2023/02/14/opinion/ezra-klein-podcas...>

The links include both the audio and, for the podcasts, transcripts (following a few days after air date).

I expect Klein to cover both aspects throughout the next year.

What I especially like about Klein is that he's not only reacting to developments and rehashing demonstrated capabilities, but asking questions and anticipating what's to come, without the hagiographic / techno-optimistic lenses of some (e.g., Bill Gates's recently published note "The Age of AI Has Begun"[2], with its depressingly uninsightful "I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities").

What we've learnt about technology is that what it does depends greatly on who it serves. And amongst Klein's more interesting observations is that we cannot know with certainty what aims AI is serving, not even those who create it. Klein repeatedly notes that many of those directly engaged in creating the technology have little idea where it is headed or what it will be able to do:

> Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

<https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...>

I've spent much of the past decade looking at the history of technology, and of information technology in particular. A pair of books stands out to me; they share a title, though they are separated by a decade and have different authors: The Control Revolution, by James Beniger (1986)[3] and by Andrew L. Shapiro (1999)[4] respectively.

Beniger's book looks backwards at the development of largely commercial and corporate communications over the course of the Industrial Revolution (18th through 20th centuries), whilst Shapiro looks forward at the promise of a networked and digital online communications infrastructure. Both books have aged well, though some informed reading between the lines may be necessary.

In particular, Beniger looks at business as an information-processing system, not in the all-too-familiar (and rather facile) Hayekian market sense, but in terms of information flows within and between companies. As information technologies developed, how communications occurred was transformed immensely. Many of those changes seem to me to revolve around issues of trust.

The ornate and florid language of 18th- and early-19th-century correspondence spends much time and space asserting bonds of trust and faithfulness between correspondents. (How accurate or useful those assertions were is its own question, but the point remains: they are a major component of the writing.) Keep in mind that it might take days, weeks, or months for correspondence to reach its intended recipient (let alone unintended ones), and that remote offices or agents might be acting with tremendous autonomy for months or years at a time.

With the development of the telegraph, two things occurred:

- Communications became instantaneous, with multiple round-trip messages possible within the course of a day or an hour.

- Words got expensive.

Language became telegraphic.

The American author Mark Twain exemplified much of this, and his style of writing was as distinctive for its directness as for the topics it covered. The influence of a newspaper pressman and editor working from telegraphed wire stories, with a sense of the physicality of a block of cast type, is clear to me.

The rise of complex corporations also played a huge role: railroads, manufacturing concerns (particularly General Electric), chemical companies where deviation from procedure could have explosive consequences (Du Pont, Dow), communications companies (Western Union, AT&T; remember that the second 'T' is for "Telegraph"), and the like. It's possible to trace RFC 822 (and successor) email headers directly to the memo fields used to standardise business correspondence from the late 19th century.

While the 19th century generally saw a decrease in in-band attestations of trustworthiness as message capacity increased, I strongly suspect that the 21st century may see an increase in such attestations. One possibility is cryptographic mechanisms, the favourite of technologists (myself included), though adoption of such methods has to date been pathetically and disappointingly weak.

Another is that multiple independent verifications of information will be required and increasingly common. This is already done in fields such as journalism and human-rights investigations.[5] A problem emerges when it cannot be readily determined that two sources are in fact independent.

For businesses increasingly dependent on remote interactions, the risk of impersonation and fraud (is that really the CEO calling on a scratchy phone line, or an AI bot?) is a huge and growing problem, along with invoice and billing frauds and the like.

How we're going to address this, and how the notions of "something you are, something you know, and something you have" as multiple forms of remote attestation will evolve ... is going to be an interesting set of questions.

________________________________

Notes:

1. "Generative AI" seems to be the general term collectively describing ChatGPT, Sydney, Bard, and other current-generation large language model (LLM) chatbots.

2. <https://www.gatesnotes.com/The-Age-of-AI-Has-Begun> HN discussion: <https://news.ycombinator.com/item?id=35250564>

3. <https://archive.org/details/controlrevolutio0000unse>

4. <https://archive.org/details/controlrevolutio0000shap>

5. See, for example: <https://www.humanrightscareers.com/skills/beginners-guide-ho...>

joshmarlow | 3 years ago

I'm adding "The Control Revolution" to my reading list thanks to this post.

I recently finished "The Company: A Short History of a Revolutionary Idea"[0], and it painted a picture in which historic developments in economic activity are enabled by increasingly sophisticated contract technology: the more explicitly people were able to specify expectations around business ventures, the more successful such ventures could be (and the easier it was to gather capital). Seems like something you may enjoy!

[0] https://www.amazon.com/Company-History-Revolutionary-Library...

q845712 | 3 years ago

Thanks -- I hadn't previously thought of all the "SBOM signed with cosign" stuff as the digital/automated version of "I trust this letter reaches you in good health and that we're able to meet again on your verdant estate, the name of which I clearly know, along with the names of your staff and immediate family."

e1g | 3 years ago

> What we've learnt about technology is that what it does depends greatly on who it serves.

You made many good points, but if this one resonates with anyone in the context of AI advancements, I recommend the long-form essay "Meditations on Moloch"[1]. It's a thought-provoking exploration of the unrelenting drive for competition and self-interest, which often leads to suboptimal outcomes for individuals and society in politics, education, economics, and the environment.

[1] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/ or the audio version at https://sscpodcast.libsyn.com/meditations-on-moloch

hodgesrm | 3 years ago

Or something like Neal Stephenson's novel "Fall".[0] It's a fun story, but you can feel that there's a lot more to uploading yourself to the cloud than the book can cover.

[0] https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell