- Anything written down and transmitted digitally will be available to anyone who wants it eventually. Besides my copy, the recipient has a copy, as do any number of intermediaries. Encrypted or not, that encryption will almost certainly be broken in my lifetime.
- Anything written down on a physical medium or stored digitally (but not transmitted) will be freely available to anyone who wants it eventually. It's slightly more secure than a transmitted digital copy, because I have physical control over the only copy and would most likely know if that physical control was compromised (my NAS counts as transmitted, since it is potentially accessible).
- Anything I say to someone in person is pretty safe, depending on how much I trust them not to record what is said (otherwise it becomes classified as digitally transmitted). Of course now a copy remains in their mind's eye, which is in some cases worse since it can be modified and they don't even know it.
- Any thought I have that I have never expressed outside my body is almost completely safe, barring successful administration of truth serum (or until someone invents adversarial brain scans).
How does this affect me in real life? I don't write down things I wouldn't want other people to know, and I am pretty careful about saying things I don't want other people to know. I use encryption whenever I can because it will at least slow down the attackers, but I never assume something that is encrypted is safe.
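That stance (encryption buys time, not safety) still argues for encrypting data before it ever leaves your machine. As a toy illustration only, here is a one-time pad in Python; the function names are mine, and real use should rely on a vetted library such as NaCl, since a pad must be truly random, as long as the message, and never reused:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: a random key as long as the message.
    The key must be used exactly once and never stored with the ciphertext."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, ct) == b"meet at noon"
```

Note that the key itself now falls under the "stored digitally but not transmitted" category above, which is the point: the secrecy of the message reduces to the secrecy of a key kept under physical control.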
I have a very similar list to yours. I'm also a datahoarder though, so I also have the reverse characteristics, i.e. if I don't have a copy then assume there are no copies left for me to ever access (assume adversaries have a backup though).
> - Anything written down and transmitted digitally will be available to anyone who wants it eventually. Besides my copy, the recipient has a copy, as do any number of intermediaries. Encrypted or not, that encryption will almost certainly be broken in my lifetime.
I completely agree. So we're now assuming that the information will exist indefinitely for adversaries (NSA warehouse), however if I don't have a copy then it's effectively gone (for me only).
> - Anything I say to someone in person is pretty safe, depending on how much I trust them not to record what is said.
I'm not quite there yet, but I've become leery of all the phones, laptops, cameras, personal assistants, etc. I'm from the US (where mass spying is supposed to be illegal), and companies like Amazon and Apple have been shown to be "mostly telling the truth" about only transmitting data after a trigger phrase, but that's still a lot of microphones.
Even if the person I'm talking to is 100% trusted, all it takes is one hacked Alexa to leak the conversation.
Likely or practical? Not really. Do I do anything specific such that I'd consider myself a potential NSA target? Not really.
But I consider privacy a human right so it's always something I'm thinking about.
In my hierarchy these two points are swapped. At least I think there is a real chance of keeping written material secret, as long as you are not the target of a state-level adversary: I can write in a room where I am sure there are no cameras watching, and I can put my copy in a safe where I would know if it was compromised. On the other hand, I can almost never be sure I'm not being audio recorded, be it deliberately or accidentally, by any of the microphones that are almost everywhere nowadays.
There seems to be a fundamental assumption here that's just completely wrong: that there is some way to guarantee your chat logs/email/search history/whatever can be 100% secure from disclosure. This is as wrong as assuming you don't need backups because you have RAID, excellent malware protection, are fully patched, have great sprinklers, are geologically stable, aren't in a flood plain, etc.
You need to start from the assumption that it can happen here and take steps to ensure that damage is minimal /recoverable. Part of that is policy, part culture, another part is technical but none are sufficient by themselves. On the tech side, look at systems designed to comply with data protection laws.
I didn't get that assumption from the article at all. (Unless you meant "on HN" instead?)
To me, the article's focus on permanently deleting old messages specifically avoids that error - it's not about avoiding disclosure but as defense-in-depth since disclosure is always a possibility. Beyond that, avoiding Slack (even with data deletion) is unlikely to increase security, but it does decrease priority for smaller users who might be swept up in an attack on Slack in general.
Slack seems like a huge single point of failure. The chat logs of who knows how many companies and groups are all stored on a single set of servers by a single company. All it takes is one bad insider with the keys to the kingdom, or a government backdoor into the system, and all the secrets are loosed. And who's to say Slack employees don't read our chat logs already? I really don't get the amount of trust placed in Slack, especially in the IT industry, where everyone is very aware of security.
I worked at a company that uses Slack as its main casual communication medium.
One day, I walked into a meeting to learn that I was being fired without warning, and would have absolutely no opportunity from that moment forward to log in to any of my business-related accounts.
That sudden unexpected contextual change really puts the lack of privacy control into perspective.
Suddenly, all the conversations that I had had with other coworkers were taken away from me, and made available to whoever was in charge of handling my old logins. Is that really acceptable?
I always make a point to remember the communication medium that I am using, and to filter myself respectively, but how many events like this exist that I have no control of? How can I predict the events that will put my privacy in jeopardy? What text exists that I would have liked to be more private?
I don't think the firing changed anything, though. You have to assume your employer had access to your chats already. And emails, etc. Your net privacy didn't change.
Anonymity is fake. Assume your conversations in Slack are reviewed, don't be mean and sneaky, and you're all set. If you're a true dick irl, Slack will only serve to magnify that fact.
I once worked for a start-up that went through a high amount of churn and employee turnover. One of the pain points was the know-how locked inside Slack threads; we had hit the 10K message limit months before I joined. The place was also politically toxic, and the CEO was mostly the cause of this. Back when I still had some passion left, I suggested moving from Slack to a self-hosted Zulip installation (threaded topics FTW), since the CEO constantly complained about having to pay for the Slack subscription and, as somebody who believes in FOSS, was totally against doing so.
After the Zulip migration was approved, the CEO pulled the plug at the last minute. During a discussion about how to handle the import of the original messages, he realized that all the old (toxic) discussions would now be in the hands of his internal employees, and they couldn't be trusted not to read all the shit he and everyone else had said behind each other's backs.
This made me aware that Slack has some interesting reasons for keeping teams locked into its SaaS platform that may have nothing to do with scalability or uptime. In our case it was fear of libel lawsuits and further turnover. While you might be able to live with the insider threat at Slack HQ being able to read your messages, the idea that anyone in your IT department can read everything management has said, shared, or discussed in the past may be too risky for most.
In case anyone runs into this in the future, Zulip has a few features that could help:
* You can set a message retention policy that will delete each message after N days. (We're building the UI for it, but currently you can email [email protected] for help on turning it on.)
* On Zulip Cloud, you can set a message visibility limit that will save all your messages (e.g. for legal/compliance reasons), but only the last N messages will be visible to the team.
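For anyone unsure of the difference between the two policies, here is a rough sketch of their semantics (illustrative Python only, not Zulip's actual code or API; the message format is invented):

```python
from datetime import datetime, timedelta, timezone

def apply_retention(messages, days):
    """Retention: drop every message older than N days (deleted for everyone)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [m for m in messages if m["sent"] >= cutoff]

def visible_window(messages, limit):
    """Visibility limit: everything is retained, but only the last N are shown."""
    return messages[-limit:]

now = datetime.now(timezone.utc)
# Oldest-first history: messages sent 5, 4, 3, 2, and 1 days ago.
msgs = [{"text": f"msg {i}", "sent": now - timedelta(days=i)} for i in range(5, 0, -1)]

assert len(apply_retention(msgs, days=3)) == 2             # only the 2- and 1-day-old survive
assert [m["text"] for m in visible_window(msgs, 2)] == ["msg 2", "msg 1"]
```

The key operational difference: retention destroys the data, so nothing is left to subpoena or leak, while a visibility limit only hides it from the team.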
First thought: who the hell would be interested in reading thousands of lines of discussions about how to name a field in a REST response, or notifications of someone making a build xD
Nobody, but it's not about that. It's about trade secrets; access keys for, e.g., AWS or git; private information that can be used for social engineering or extortion. If a malicious actor can take over someone's account, they could do even more convincing social engineering and access confidential information.
If you think "I have nothing to hide" you're lacking in imagination.
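One concrete failure of that imagination: years of archived chat can be mined mechanically for credentials. A rough sketch of how trivial that is (the patterns here are simplified illustrations, not a complete rule set; real secret scanners use far more thorough rules):

```python
import re

# Simplified patterns for illustration only.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_log(lines):
    """Return (line_number, rule_name) for every hit in a chat export."""
    hits = []
    for n, line in enumerate(lines, start=1):
        for name, pat in PATTERNS.items():
            if pat.search(line):
                hits.append((n, name))
    return hits

log = [
    "deploy is green",
    "temp creds: AKIAIOSFODNN7EXAMPLE",   # the classic AWS docs example key
    "lunch?",
]
assert scan_log(log) == [(2, "aws_access_key_id")]
```

An attacker with a full archive export runs exactly this kind of pass first; the other 99.9% of the log is just noise to filter out.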
A variation of this argument always seems to come up in discussions about privacy.
Yes, 99.9% of what you do or say in your daily life is likely of no consequence. But every now and then you may do or say something that could be used against you, and someone who has many years worth of data collected on you can probably find quite a few such bits of info.
Among all of your internet "transactions" probably less than 0.1% are with your bank, for instance (sending your credentials, etc). You want end-to-end encryption and good security to protect that 0.1% of your data, not for the other crap.
Netscape, 1998: "And I keep thinking to myself, Microsoft is going to pay some jackass lawyer $200 an hour to find out that we hate our cafeteria food, don't like the security posters, had a sucky newsfeed, and think ``Navigator'' was a cooler name than ``Communicator''." [1]
Just because you don't think you have anything interesting doesn't mean a competitor isn't going to subpoena it years later.
[1] https://www.jwz.org/gruntle/rbarip.html
I sincerely do not believe that. Somewhere in there is a joke like "They trust me with their data. Suckers" - Mark Zuckerberg. An obviously tongue-in-cheek comment that can be interpreted in a different way.
Lots of people believed what you did, but you only need to have said one thing.
What if all your search history were leaked? What if all your text messages were leaked? What if all your emails were leaked? I guess those things aren't trendy enough to worry about.
For a long time I have noticed what I would call 'ankle-biting journalism': take whatever is trendy, make only the most obvious observations about it (things that someone with only rudimentary knowledge would come up with in a few minutes), then act like these obvious things are serious and there is a problem here.
As usual, it's not that serious, and there isn't a problem here. I have no idea how Slack stores user chat history, but let's say they store it in plain text in the cloud. Then Slack is about as secure as IRC, which is exactly what it is trying to be: IRC 'but better'.
Besides, if you are using Slack for work, whoever is the 'Owner' of the enterprise account can export and view every conversation whenever they want, because 'compliance'.
"In hindsight, complying with the company's Document Retention Policy (which at Netscape was basically, ``shred anything within 90 days unless you can't get your job done without it'') might have been a good idea." [1]
Do companies no longer have Document Retention Policies? That seems like the bigger piece of the story here.
[1] https://www.jwz.org/gruntle/rbarip.html
> What if all your search history were leaked? What if all your text messages were leaked? What if all your emails were leaked?
But I can directly go into all those things and delete at least my copies of them. Even with as many problems as Google and Facebook have, it's relatively straightforward to see the entire history of what I have on their sites and delete it.
> As usual, it's not that serious, and there isn't a problem here.
Quoth the article,
> Everything beyond that 10,000-message limit remains on Slack’s servers. So while those messages might seem out of sight and out of mind, they are all still indefinitely available to Slack, law enforcement and third-party hackers.
So I can't go back and check those on any free Slack server I post on.
And if someone decides to pay Slack for one of those servers, blam, any convo that got heated but that I forgot about is now visible to anyone with an axe to grind.
We've seen 10 year old tweets dredged up to go after people of all stripes, so this is certainly a thing that happens.
The problem you mention is different, and the severity is as high as the most sensitive content that's been posted.
As is the case with most chat platforms, the technical safeties put in place only serve to protect _legitimate_ users. The problem you're referring to is a matter of _illegitimate_ users or _insider-threats_.
* What if I copy-paste a sensitive conversation to a third-party?
* What if I export conversations or user accounts to a third-party?
* What if I grant an unauthorized party access to a conversation that they wouldn't otherwise be able to see?
In-transit encryption and encrypted storage do not solve these issues, because an insider threat initiated the action.
What you say is all true, but some kind of end-to-end encryption would reduce the risk, or at least impede mass leaking of data. That seems to be the entire point of the article.
I would assume this article was "paid for" (perhaps not directly) by someone with a vested interest. E.g. a competitive vendor with a better security story, or whoever is having their legacy lunch eaten by Slack.
I worked for a fortune < 10 company. One day I found a complete dump of all public AND PRIVATE slack messages sitting on a dev server that was open to all.
I reported it to security. Next day I'm suspended. Turns out security did it because they wanted to search through 'just in case' but didn't want to go through the process properly.
Either it was my fault or they were incompetent, guess which one they chose. I was forced to quit eventually.
The single most terrible thing about Slack is the hostage holding of message archives. You don’t pay? Fine...you get 10k message history, no ability to set retention and Slack still stores all those messages forever, taunting me that they have it all and won’t let me do anything with them.
That’s just user hostile. If I don’t pay, I shouldn’t have all that message history stored forever. Either let me set retention on my messages or delete anything over 10k automatically. Don’t hold my content hostage and make it completely inaccessible and un-deletable unless I pay.
I worked on a project a while ago that used this as a feature. They were worried about a Freedom of Information request for chat archives (UK Govt linked) so intentionally didn't pay for Slack, so most of the message history wasn't available if a request came through.
It was sort of helpful for us. We (a bootstrapped small business) remained on the free tier for a few years before upgrading and getting access to all history.
We might not have bothered if the history was simply deleted. I was grateful they didn't as there are some great moments in there, i.e. our first invoice, announcement of our first member of staff, prototype renders, etc.
I've spent all these years holding my tongue, because as a matter of principle I don't write anything I don't want a permanent record of. It would be nice to see all that overhead pay off, or, more accurately, to see people get burned for being sloppy. So no, I wouldn't really stand to lose anything if everything I ever said on company chat were published in an easily searchable format online.
> Slack is one of many Silicon Valley unicorns going public this year, but it’s the only one that has admitted it is at risk for nation-state attacks.
Gosh, everyone who runs a computer is at risk for nation-state attacks.
The real question is: how high is the risk?
The risk section of an S-1 tries to list every imaginable threat as a risk, without any assessment of probability or impact (the two components of risk). Using it as a source for such an article is simply wrong.
I mean, the article is generally right but they immediately get a detail wrong:
> Right now, Slack stores everything you do on its platform by default — your username and password ...
I would be extremely surprised if they stored plaintext or even encrypted passwords. Maybe the author means usernames/passwords sent in messages, but that's not unique to Slack.
This may be a taboo take on systems like Slack and I know this will not be popular given the number of developers here, but I had to explain this to HR and it was not easy to convey these concepts in plain terms.
Slack itself is just a chat system. Ok, what's the risk? By design, people (the user base, or admins, up to each company) can integrate third party applications. The permissions system allows chat data to flow to these third parties without any logging or visibility by the Slack business customer. So in effect, each employee (depending on perms) may on behalf of their company, relay all chat messages to third parties that their company does not have a legally binding agreement and NDA with. This is the actual risk with Slack (the product, not the company).
So by design, employees can leak all the chats for all of the #public channels they are a member of and they won't even see it happening. Some companies choose to have admins review the third party applications and integration. "But they are #public, right?". People in a company don't assume that the public channels are really public in the sense that third parties outside of their company can see these messages. Employees may discuss very sensitive topics about their own customers that may not be appropriate to relay to extended parties that their company does not have mutually binding agreements and NDA's with.
When you run your own servers, such as IRC, Mattermost, etc., the chat admins know what third-party servers (if any) they are linking to. This does not preclude an employee from relaying their own data through their workstation, but that can be addressed by edge DLP devices for monitoring or mitigation. Even then, the employee knows exactly what they are relaying.
When the chat system itself is a third party, and that third party allows relaying of user data and chat data to fourth parties, then by design the system will always leak data. Slack does not alert members in a channel which bots are reading their messages in real time, where that data is being stored, who is reading it, or for what purpose. This also becomes a problem for data retention policies: the parties external to Slack may retain and use the data for as long as they wish, even if Slack purges data from a channel after a period of time. I see this as a legal quagmire.
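The relay path described above can be modeled in a few lines (a hypothetical toy model, not Slack's actual architecture): once an integration holds a read scope on a channel, every message fans out to its callback, and nothing is surfaced to the members.

```python
class Channel:
    """Toy model of the relay problem: any integration granted a read scope
    receives every message, and members get no indication it is happening."""

    def __init__(self, name):
        self.name = name
        self.integrations = []   # third-party callbacks approved by some user

    def add_integration(self, callback):
        # In the real product this is one employee clicking "Allow";
        # no agreement between the company and the third party is checked.
        self.integrations.append(callback)

    def post(self, author, text):
        for relay in self.integrations:
            relay(self.name, author, text)   # silently fans out off-platform

received = []   # stands in for a fourth party's storage
chan = Channel("#public-support")
chan.add_integration(lambda ch, who, msg: received.append((ch, who, msg)))
chan.post("alice", "customer ACME's outage was caused by ...")
assert received == [("#public-support", "alice", "customer ACME's outage was caused by ...")]
```

Note that encryption in transit changes nothing here: the integration is an authorized reader, so it receives plaintext by design.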
In terms of leaking private messages, those are also stored so that an admin in a company may review them by request. This is only an issue if Slack's servers were compromised; that risk applies to both self-hosted and third-party chat servers, though Slack is a much juicier target since it holds the private chats of many companies. This is similar to the risk of routing non-static content through Akamai: Akamai had employees caught selling sensitive data to other nations, which highlights the risk of aggregating private data through one transit provider that can decrypt your data or see it in plain text. This risk could be mitigated by short data retention policies, but that is the opposite of what users expect. They expect their messages to be around forever.
ForHackernews | 6 years ago:
These are coming sooner than many people will be comfortable with: https://www.npr.org/templates/story/story.php?storyId=157448...
thomastjeffery | 6 years ago:
Except the desire for privacy isn't limited to the rude things you say.
None of us truly wants to live in a world where a third party can secretly review a permanent record of our communications.
rhacker | 6 years ago:
The sexualized commentary about employee V's nice boobs.
The CEO arguing that employee M be kept because he's got leverage and we should let employee Z go instead.
That discussion we had about the time a hacker got ahold of 4000 customer records but we paid them off to delete the records.
I dunno, pretty much the stuff that can break a company into nothingness.
(FYI those are not scenarios where I work)
madeofpalk | 6 years ago:
http://nymag.com/intelligencer/2016/03/what-hulk-hogan-taugh...
https://splinternews.com/the-gawker-hulk-hogan-trial-and-the...
jonahx | 6 years ago:
Anything I write on IRC I assume is public. Not so with Slack, where much is written in DMs.
The email comparison is more apt.
[+] [-] opticbit|6 years ago|reply
Keybase is open source and uses PGP. It's easy for the average user and can also be used from the command line.
I've been using Keybase for a while now, but haven't tried the tool.
[+] [-] roemance|6 years ago|reply
[+] [-] perlgeek|6 years ago|reply
Gosh, everyone who runs a computer is at risk of nation-state attacks.
The real question is: how high is the risk?
The risk section of an S-1 tries to list every imaginable threat as a risk, without any assessment of probability or impact (the two components of risk). Using this as a source for such an article is simply wrong.
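Those two components combine as expected loss. A minimal sketch (the threat names and numbers below are entirely made up for illustration; an S-1 lists the threats but supplies neither column):

```python
# Risk as expected loss: probability x impact.
# All figures are invented for illustration only.

threats = {
    "nation-state attack":      {"probability": 0.001, "impact": 10_000_000},
    "insider data export":      {"probability": 0.05,  "impact": 500_000},
    "phishing of one employee": {"probability": 0.30,  "impact": 50_000},
}

# Rank threats by expected loss, highest first.
ranked = sorted(threats.items(),
                key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
                reverse=True)

for name, t in ranked:
    expected_loss = t["probability"] * t["impact"]
    print(f"{name}: expected loss ${expected_loss:,.0f}")
```

With these (fabricated) numbers, the scary-sounding nation-state attack actually ranks last: a listed threat tells you nothing until both columns are filled in.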
[+] [-] deegles|6 years ago|reply
> Right now, Slack stores everything you do on its platform by default — your username and password ...
I would be extremely surprised if they store plaintext or even encrypted passwords. Maybe the author means usernames and passwords sent in messages, but that's not unique to Slack.
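For reference, the standard practice being alluded to is storing a salted, slow hash rather than the password itself (plaintext or reversibly encrypted). A minimal stdlib sketch of that pattern — the iteration count is illustrative, and this says nothing about what Slack actually does internally:

```python
import hashlib
import hmac
import os

# Standard password storage pattern: keep only a salt and a slow,
# salted hash. The original password is never recoverable from these.
# Iteration count is illustrative; real systems tune it upward.

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("hunter2")
print(verify_password("hunter2", salt, stored))  # True
print(verify_password("wrong", salt, stored))    # False
```

A breach of such a database leaks hashes, not passwords — which is why "they store your password" claims usually conflate this with credentials pasted into messages.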
[+] [-] jeremyjh|6 years ago|reply
[+] [-] LinuxBender|6 years ago|reply
Slack itself is just a chat system. OK, so what's the risk? By design, people (the user base or admins, up to each company) can integrate third-party applications. The permissions system allows chat data to flow to these third parties without any logging or visibility for the Slack business customer. So in effect, each employee (depending on permissions) may, on behalf of their company, relay all chat messages to third parties that their company has no legally binding agreement or NDA with. This is the actual risk with Slack (the product, not the company).
So by design, employees can leak all the chats for all of the #public channels they are a member of, and they won't even see it happening. Some companies choose to have admins review third-party applications and integrations. "But they are #public, right?" People in a company don't assume that public channels are public in the sense that third parties outside their company can see the messages. Employees may discuss very sensitive topics about their own customers that are not appropriate to relay to external parties their company has no mutually binding agreements and NDAs with.
When you run your own servers such as IRC, Mattermost, etc.., the chat admins know what third party servers (if any) they are linking to. This does not preclude an employee from relaying their own data through their workstation, but that can be addressed by edge DLP devices for monitoring or mitigation. Even then, the employee knows exactly what they are relaying.
When the chat system itself is a third party, and that third party allows relaying of user data and chat data to fourth parties, then by design the system will always leak data. Slack does not alert members in a channel as to which bots are reading their messages in real time, where that data is being stored, who is reading it, or for what purpose. This also becomes a problem for data retention policies: the parties external to Slack may retain and use the data for as long as they wish, even if Slack purges data from a channel after a period of time. I see this as a legal quagmire.
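The fan-out described above can be sketched in a few lines. This is a toy model of the integration problem, not Slack's actual API; the channel, names, and the "fourth party" sink are invented for illustration:

```python
# Toy model: once an integration is attached to a channel, every
# message fans out to its external endpoint with no alert to the
# channel's members. All names here are invented for illustration.

class Channel:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.bots = []  # integrations attached by any member

    def add_bot(self, forward_fn):
        # Note: no notification goes to members here -- this silence
        # is exactly the visibility gap being described.
        self.bots.append(forward_fn)

    def post(self, author, text):
        for forward in self.bots:
            forward({"channel": self.name, "author": author, "text": text})

# Data landing here is outside the company's NDAs and retention policy.
fourth_party_store = []

channel = Channel("#customer-escalations")
channel.members.update({"alice", "bob"})
channel.add_bot(fourth_party_store.append)  # one member attached a bot

channel.post("alice", "Acme Corp is threatening to churn")
print(len(fourth_party_store))  # the message was copied out, invisibly
```

Deleting the message from the channel later does nothing about the copy already sitting in `fourth_party_store` — which is the retention-policy problem in miniature.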
In terms of leaking private messages, those are also stored so that an admin in a company may review them by request. This is only an issue if Slack's servers were compromised. That risk applies to both self-hosted and third-party chat servers, though Slack becomes a much juicier target by holding the private chats of many companies. This is similar to the risk of routing non-static content through Akamai: Akamai had employees caught selling sensitive data to other nations, which highlights the risk of aggregating private data through one transit provider that can decrypt your data or see it in plain text. This risk could be mitigated by short data retention policies, but that is the opposite of what users expect. They expect their messages to be around forever.
[+] [-] ilikehurdles|6 years ago|reply
[+] [-] gallamine|6 years ago|reply