> two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.
I wonder who came up with those. The pattern is similar to the UK's https://en.wikipedia.org/wiki/Rainbow_Code, which makes me suspect that the threat actor attribution comes from US intelligence, with whom OpenAI is almost certainly cooperating.
OpenAI said this work was done in conjunction with Microsoft's long-established threat intel center, so these are almost certainly the code names Microsoft's security intel teams have assigned to these actors. Threat actor naming is generally a mess: every company has a different naming scheme for the same cluster of indicators/TTPs.
I am not a hacker nor a security expert, so take this with a grain of salt. As I see it, there are two (general) ways a group gets a name:
1) The group declares it themselves (like Anonymous), which means they need to explicitly leave their name somewhere.
2) The name is given by someone from the outside, such as the US.
I suspect 2 is quite common. I wouldn't expect most state-level hackers to leave calling cards on systems; in fact, probably not most hackers at any level. If state-level actors were leaving calling cards, it would more likely be misdirection than a tag. So I would not be surprised if they ended up with US-style naming schemes, because it would be the US (or other Westerners) identifying these people the same way you'd identify anyone: by the style of their actions and how they write. You can probably look at code from coworkers and know who wrote specific parts. Think of what you see in a movie with serial killers (or even real life): how do you know it is the same killer? Style.
You could also get the name by infiltrating the other country and intimately studying their groups, learning what the group calls itself internally (probably translated). But it's still probably not a great idea to give that name out publicly, because you could be hinting at how you obtained the information: different parts of an organization may refer to the same group by different names, specifically to enable this kind of tracing. Military groups often run disinformation internally through secret channels.
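The "identify them by style" idea is real: it's called stylometry. As a toy illustration only (real attribution relies on TTPs, infrastructure, and far richer features than this), here is a minimal sketch comparing character n-gram profiles with cosine similarity; the sample texts and author labels are invented for the example:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Build a character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical corpora: two "known" authors and an unattributed sample.
author_a = "we deploy the implant via spearphish, then pivot laterally"
author_b = "initial access thru fake job lure; loader drops second stage"
unknown  = "deploy implant via spearphish and pivot to the file server"

profile = char_ngrams(unknown)
score_a = cosine_similarity(profile, char_ngrams(author_a))
score_b = cosine_similarity(profile, char_ngrams(author_b))
print(score_a > score_b)  # the unknown sample reads more like author A
```

Real-world clustering works on much the same principle, just over behavioral fingerprints (tooling, infrastructure, working hours) rather than prose alone.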
Edit: guessmyname left a link showing that Microsoft names these.
Looking over the specifics, the striking thing about this to me is that it seems like these supposedly-sophisticated covert operatives are just going to ChatGPT (or similar) and basically asking "how do I make good malware?"
What specifics did you look over because the (short) article doesn't say that at all. Most used it primarily for research and generating content for phishing campaigns.
> Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
> Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
> Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
> Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
> Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
I mean, this is essentially how current AI poses a threat. And it is a threat.
Imagine a massive flood of low-quality images relevant to current events (e.g. war), which are obviously fabricated to anyone remotely aware of AI generation. But there are a lot of people who aren't aware of AI generation, or just not paying attention, who will take the photos at face value. (You'd probably be one of the latter. How many times did you "inspect" a photo in a news article to make sure it wasn't faked? How many of those photos slipped past inspection into your subconscious? Subliminal messaging has never been easier or more widespread.)
Also imagine a lot of vigilantes who are stupid enough to be vigilantes in this day and age, who ordinarily couldn't do the bare minimum amount of research to cause real harm. But now they can ask ChatGPT "how do I rob a bank?" and get advice which is still pretty bad, but better than what they'd come up with on their own (if they wouldn't just give up entirely), so it causes more damage.
A trained professional who wants a quality deepfake can already use photoshop and video editing tools, and a trained criminal already knows how to do research. But there aren't a lot of those people. Massive low-quality spam and low-level crimes are useful even for a state actor with huge resources, because they cause general instability in ways a few quality hits can't.
There's another issue: a powerful state actor can simply build and deploy its own language model. However, it probably won't be as good as OpenAI's (quality may have some effect), and using OpenAI's avoids spending their own resources.
I am concerned that in repressive regimes, the state will control what AI the people have access to. This will make it easy to create music that supports the regime and almost impossible to create music that is critical of the regime.
It will be like the pro-state people having assistants to create the memes they imagine for them, while the anti-state people are left with paper and crayons, until the AI paper and AI crayons refuse to draw anti-state memes, and then we find anti-state activists trying to learn beekeeping so they can get wax to make crayons to draw anti-state memes.
If you thought the election interference of the past was bad, that's nothing compared to what we will see in the near future.
This whack-a-mole approach, while definitely good, is likely already dead as a way to prevent these actions. Local LLMs with no restrictions will continue to get better.
If anything, the best thing about this post is not the actions they've taken, but simply that it shows us a snapshot of what the future looks like for state-affiliated actors. The research aspect I think is a good thing: giving people full access to more information about how things are made and architected will likely be a net positive. The phishing aspect, though, is terrifying; it's going to be crazy seeing what the next decade looks like for phishing. I do wonder how long it takes before there's some sort of "verified as a human" function in communications to try and combat this.
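A "verified as a human" function would presumably rest on cryptographic attestation rather than content analysis. As an illustrative sketch only (a real system would use public-key signatures and an identity-vetting authority, not a shared secret; the secret and messages here are invented), the core verify step could look like HMAC tagging:

```python
import hmac
import hashlib

# Hypothetical secret issued after some out-of-band human verification.
SECRET = b"issued-by-verification-authority"

def sign_message(message: str) -> str:
    """Append a MAC proving the sender holds the verified-sender secret."""
    tag = hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest()
    return f"{message}|{tag}"

def verify_message(signed: str) -> bool:
    """Recompute the MAC; a forger without the secret can't make a valid tag."""
    message, _, tag = signed.rpartition("|")
    expected = hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(tag, expected)

good = sign_message("Hi, quick question about the invoice")
assert verify_message(good)
assert not verify_message("Hi, quick question about the invoice|deadbeef")
```

The hard part, of course, isn't the crypto but the issuance: deciding who gets a "human" credential and keeping bots from buying or stealing one.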
ChatGPT can only tell you information that it has indexed from the Internet.
This means the same information you get from ChatGPT is available from a Google search. Based on the listed queries ("programming help") ChatGPT does not create much value for national security threat actors here.
What this tells me is that if you think you have any privacy with OpenAI by turning off chat history in ChatGPT or exercising your California or EU privacy rights, you are kidding yourself. In the name of national security, anything you send to OpenAI can be used against you.
> Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
This is the actually concerning one, IMO. Pair it with Russia's '21 ASAT demo and it shows a militaristic stance towards space by a great(ish) power.
That sounds like they caught some minor casual operators. I think it’s a fallacy to conclude that it has limited utility based on that.
The real concern in my mind is use cases aimed at swaying opinions in aggregate via large numbers of real-seeming AI actors, attempts to move the Overton window, etc. For that sort of mass-psychology operation, LLMs are perfect.
Fuck OpenAI. I'm a security researcher, and if you dare ask it about anything Windows-related it tells you to screw off. Linux? Fine. But ask about some undocumented Windows behavior and it says it can't. Ask about PatchGuard internals as a reference? It tells you it can't assist. Absolutely crazy. I can understand refusing to write straight-up malware... oh wait, it does that with no issue! Lord help you if you want to use it as a reference or educational tool.
It’s silly that they do that. I doubt it matters, though. In my experience, querying ChatGPT for factual information like that is a mistake. It isn’t reliably accurate enough.
Agreed. OpenAI definitely needs to tune its models to provide enough information without providing anything that could be malicious. Currently, anything remotely close is flagged as malicious.
> Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.
The names of the groups are pseudonyms. That is why they all take the same form of [adjective] [weather noun]. Forest Blizzard is likely Glavset, called the Internet Research Agency in English.
There seems to be little if any concern from OpenAI about using AI trained on mass surveillance data to generate lists of individuals to assassinate... does that not count as a 'state-affiliated threat'?
I wonder why they closed the accounts rather than setting them against a special version of ChatGPT, primed to make hard-to-notice mistakes and, as they are uncovered, laugh in their faces.
Groups sponsored by governments can easily write content for phishing campaigns, research companies, and write malware without the help of AI. They can also afford the GPUs to run local models if that gives them a boost.
Edit: Forest Blizzard == GRU, apparently. https://research.splunk.com/stories/forest_blizzard/
Microsoft shifts to a new threat actor naming taxonomy → https://www.microsoft.com/en-us/security/blog/2023/04/18/mic...
https://help.openai.com/en/articles/7730893-data-controls-fa...
With what GPUs?
e.g. does datafusion or polars support partial reads of parquet files in s3?
I wonder if they've rolled out some draconian restrictions.
I'm surprised one can name names like that.
https://www.theguardian.com/world/2023/dec/01/the-gospel-how...
Yawn.
C'mon, tell me how you canned state-affiliated USA accounts and what they were up to.