tabletopneedle's comments

tabletopneedle | 4 months ago | on: iPhone Pocket

>It is questionable whether it solves its primary use case particularly well.

It solves the problem of "how do I flaunt the fact that I carry an iPhone to everyone around me".

It's a conversation piece and a way to flaunt your wealth and status by uncovering an iPhone 17 Pro Max S+ Duo XTX from it when asked.

tabletopneedle | 2 years ago | on: Crown Sterling: Five years since TIME AI, five years of grifts and lies

Five years ago today, the infosec community found itself wondering about Robert Edward Grant's Quasi-primes and an ever-expanding portfolio of grifts by his company, Crown Sterling. These included

* TIME AI, a completely bonkers, five-dimensional vaporware cipher with time-traveling keys,

* A crank presentation at Black Hat 2019,

* Bogus RSA-break claims, and later,

* A cryptographic protocol broken in just about every aspect,

* Cryptocurrency grifts, and, as the newest addition,

* A browser-based messaging app.

This wiki-article documents and debunks pretty much all of it, in ridiculous detail and with more than 200 references.

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

Even if you were using a perfect implementation of RSA-OAEP, it would still be less secure than Diffie-Hellman over Curve25519 (called X25519) or Curve448 (called X448).

This is because RSA lacks forward secrecy: If the private RSA key is stolen, it can be used to retrospectively decrypt all past communication.

Also, X448 provides security equivalent to ~15,000-bit RSA at a fraction of the key size, and key generation takes milliseconds instead of minutes.

tl;dr

For key exchange, use X25519 or X448.

For digital signatures, use Ed25519 signatures (based on Curve25519).

For authenticating communication, use authenticated encryption such as ChaCha20-Poly1305, XSalsa20-Poly1305, or AES-256-GCM.

For hashing, use BLAKE2, SHA3-256, or SHA-256.
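All three recommended hash functions ship in Python's standard library hashlib, so a quick sketch costs nothing:

```python
import hashlib

msg = b"attack at dawn"

# BLAKE2b truncated to a 32-byte digest, SHA3-256, and SHA-256 all
# produce 256-bit digests suitable for general-purpose hashing.
for h in (hashlib.blake2b(msg, digest_size=32),
          hashlib.sha3_256(msg),
          hashlib.sha256(msg)):
    print(h.name, h.hexdigest())
```

Prefer BLAKE2 or SHA3-256 for new designs; SHA-256 remains fine where compatibility matters.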

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

There is no place where RSA should be used instead of Diffie-Hellman. DH provides forward secrecy, the ECC variants are much faster and use shorter keys for equivalent security, and they are harder to implement incorrectly.

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

Tl;dr

Curve25519 for 128-bit security, to use with a 128- or 256-bit symmetric cipher.

X448 for 224-bit security, to use with a 256-bit symmetric cipher.

-

For symmetric ciphers, choose any of the three below:

- ChaCha20-Poly1305

- XSalsa20-Poly1305

- AES-256-GCM

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

Yes. However, it never hurts to test your code.

Assuming you're a C programmer, read the libsodium docs first: https://download.libsodium.org/doc/public-key_cryptography/s...

If you're using a higher-level language, use a library that provides bindings for it: https://download.libsodium.org/doc/bindings_for_other_langua...

By using libsodium, you're not rolling your own crypto. Rolling your own crypto would mean

- trying to find new one-way functions for public-key crypto,

- trying to implement RSA from a textbook, or

- trying to implement RSA-OAEP from papers, RFCs, books, etc.

Using a library is nowhere near those. There are other ways to fail at cryptography too, from skipping public-key authentication to storing private keys in insecure places.
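To make the "RSA from a textbook" failure mode concrete, here is a sketch with deliberately tiny toy primes (all numbers chosen for illustration, not from any real system): textbook RSA without OAEP padding is malleable, so an attacker can forge a valid ciphertext without ever touching the key.

```python
# Textbook RSA with toy primes (p=61, q=53) -- deliberately insecure,
# for illustration only.
p, q = 61, 53
n = p * q               # modulus: 3233
e, d = 17, 2753         # e*d = 46801 = 15*3120 + 1, so d inverts e mod phi(n)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 42, 65
assert dec(enc(m1)) == m1   # encryption round-trips

# Malleability: without OAEP, the product of two ciphertexts decrypts
# to the product of the plaintexts -- an attacker can forge this.
forged = (enc(m1) * enc(m2)) % n
assert dec(forged) == (m1 * m2) % n
```

OAEP exists precisely to destroy this multiplicative structure, and libsodium-style libraries never expose the raw primitive at all.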

So it's highly recommended that you take the time to read a book on the topic. The best modern book currently available is https://www.amazon.com/Serious-Cryptography-Practical-Introd...

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

Every time there's a debate over Telegram's encryption, the shill argument "it hasn't been broken in the wild, now has it" pops up. This is fundamentally flawed thinking. The end-to-end encryption is most likely reasonably safe (no glaring holes were pointed out by experts, except the IND-CCA problem). The real problem is that Telegram uses its secret chats as a poor excuse to justify the lack of E2EE for practically everything: "Just use secret chats if you need end-to-end encryption."

1. Telegram's E2EE is not on by default, therefore 99% of users don't use it.

2. Telegram's E2EE does not surface key authentication, therefore ~90% of the people using it don't check for MITM attacks, which makes the majority of E2EE useless against active attackers.

3. Telegram's E2EE does not work across devices, therefore the majority of people who use secret chats also use non-secret chats, because the desktop client doesn't support them.

4. 100% of Telegram's group conversations can be eavesdropped by the server, because Telegram doesn't have E2EE for group chats.

Complaining about possible cribs in how Telegram implemented the protocol from cryptographic primitives is an insignificant problem compared to the fact that the entire protocol is fundamentally FUBAR, in a way so glaringly obvious you can't even fill out a CVE form for it.

If Signal had a vulnerability where 100% of group conversations were not properly end-to-end encrypted, every newspaper in the world would publish something about it. With Telegram, however, it has been spun as a "feature".

Another big problem is that Telegram has been mentioned by hundreds of publications in phrases like "secure apps like Signal, WhatsApp and Telegram".

To experts, it's like hearing the news spout "great writers like Leo Tolstoy, Paulo Coelho, and Stephenie Meyer", or "great bunker materials like reinforced concrete, steel, and MDF".

Repeated often enough, anyone would form mental associations between the three, but when you actually find out what they're about, you can't believe your ears.

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

This reminds me of Niemöller's poem. IIRC it went something like

First they came for the A2017U1s, or they would have, except he never opposed the wrongdoing.

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

It's much safer to just send the public key over whatever medium and then use an authenticated channel to verify the authenticity of said public key.

tabletopneedle | 6 years ago | on: RSA is a fragile cryptosystem

With DH, both public keys affect the randomness of the shared secret. If the app on the client generates a random DH key pair for every session, and uses a public DH value of the server that is pinned into it, the encryption is authenticated and secure to use.
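A minimal sketch of that property, using classic finite-field DH with a toy 32-bit prime (for illustration only; a real deployment would use X25519/X448 or a 2048-bit+ group):

```python
import secrets

# Toy finite-field Diffie-Hellman. The prime is far too small for real
# use; it only illustrates the key-agreement mechanics.
p = 4294967291          # largest prime below 2**32 (toy parameter)
g = 2

a = secrets.randbelow(p - 2) + 1    # Alice's ephemeral private key
b = secrets.randbelow(p - 2) + 1    # Bob's ephemeral private key

A = pow(g, a, p)        # Alice's public value (sent to Bob)
B = pow(g, b, p)        # Bob's public value (sent to Alice)

# Both key pairs contribute to the shared secret; each side computes the
# same value without a private key ever crossing the wire. Fresh ephemeral
# keys per session are what provide forward secrecy.
assert pow(B, a, p) == pow(A, b, p)
```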

If no public keys are pinned to clients (as in secure messaging apps like Signal, where each user generates their own keys), users need to check the public-key fingerprints to make sure no MITM attack is taking place.
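As a sketch of what such a fingerprint check involves (a hypothetical scheme, not Signal's actual safety-number algorithm): hash the public key and render part of the digest as short decimal groups that two users can read to each other over an authenticated channel.

```python
import hashlib

def decimal_fingerprint(pubkey: bytes, groups: int = 6) -> str:
    """Hypothetical scheme: SHA-256 the key, emit 5-digit decimal groups.
    (Illustration only -- not Signal's real safety-number derivation.)"""
    digest = hashlib.sha256(pubkey).digest()
    parts = []
    for i in range(groups):
        chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
        parts.append(f"{chunk % 100000:05d}")
    return " ".join(parts)

# Both users must see the same digits; a MITM-substituted key won't match.
alice_sees = decimal_fingerprint(b"\x01" * 32)   # placeholder key bytes
bob_sees = decimal_fingerprint(b"\x01" * 32)
assert alice_sees == bob_sees
assert alice_sees != decimal_fingerprint(b"\x02" * 32)
```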

tabletopneedle | 7 years ago | on: Call for testing: OpenSSH 8.0

Thank you!

So, to help everyone (read the whole post first): you should probably have the line

KexAlgorithms [email protected],[email protected],diffie-hellman-group-exchange-sha256

in /etc/ssh/sshd_config on the server and in /etc/ssh/ssh_config on the client (under "Host ").

(The rest of the kex recommendations are from https://stribika.github.io/2015/01/04/secure-secure-shell.ht...)

---

However, for some reason after running "/usr/sbin/sshd -T" it said

"/etc/ssh/sshd_config line 2: Bad SSH2 KexAlgorithms '[email protected]'."

so I played around. It's hard for me to retrace everything I tried, but a working solution seemed to be to add the

KexAlgorithms [email protected],[email protected],diffie-hellman-group-exchange-sha256

line to the server's "/usr/local/etc/sshd_config" and to the client's "/usr/local/etc/ssh_config" under "Host ".

You then need to start the server by running "sudo /usr/local/sbin/sshd", and you need to use the ssh client binary at "/usr/local/bin/ssh".

tabletopneedle | 7 years ago | on: Call for testing: OpenSSH 8.0

I was able to install the software, but there is no documentation on how to create NTRU+X25519 keys and enable them. I checked the manpages and the mailing list, and tried Google. How is this done?

tabletopneedle | 7 years ago | on: The science of ultra pure silicon

For example, cleanrooms have classifications that imply some maximum amount of dust particles/impurities in the air. The technicians don't talk about the number of dust particles allowed; they just consider class 5 suitable for some applications, whereas others might need class 4.

"Class 4 cleanroom" makes sense as a term, as does "ultra pure", as long as the industry sees it as a standard.

Another example: UHD makes as little sense as FHD in terms of display resolution. I mean, "Full" must mean it's the maximum possible amount of high definition? Well, we just know it stands for 1920x1080, and we know UHD, or 4K, is larger.

tabletopneedle | 7 years ago | on: I don't trust Signal

Until Tox routes its communication through Tor by default, it doesn't offer any notable differences. Sure, there is no central server, but intelligence agencies can see who you talk to without compromising a server, just by looking at the destination IP addresses of packets. Tox suffers from the same MITM problem if the ToxID is changed, e.g. on your contact's Twitter page, the same way the author of the article claims the "checksum" of Signal's APK can be changed by the NSA, your employer, or an angry spouse.

tabletopneedle | 7 years ago | on: I don't trust Signal

People still need to communicate with their peers over insecure networks. So you need to compare the nitty-gritty details and choose the most secure option for your needs. If you need content protection to keep dick pics out of NSA office circulation, Signal is probably the best. For metadata-free chat, Ricochet and Briar are currently the top duo.

tabletopneedle | 7 years ago | on: I don't trust Signal

Remember that OTR, Cryptocat, and PGP were secure enough when Snowden was arranging to hand data to Greenwald and Poitras. So while Signal isn't secure if you're an NSA target, it might be secure enough to protect you from passive dragnet surveillance.

tabletopneedle | 7 years ago | on: I don't trust Signal

"Google Play Services lets Google do silent background updates on apps on your phone and give them any permission they want. Having Google Play Services on your phone means your phone is not secure."

Yes, Google can install a backdoored version of Signal. This is bad. But if you can't take that risk, you can install e.g. LineageOS without Google Apps, download the source code, reproducibly compile the APK, and install it on your Android device. If you have a better idea, maybe it can be implemented.

"A checksum isn’t a signature, by the way - if your government- or workplace- or abusive-spouse-installed certificate authority gets in the way they can replace the APK and its checksum with whatever they want."

If they can add a certificate to your smartphone/PC, why can't they replace Signal with a malicious version? Why can't they replace F-Droid? There is no 100% method of solving this issue, unless perhaps you can meet the F-Droid developers and obtain the authentic public key from them to verify the F-Droid client's signature. Calling a SHA256 cryptographic hash a checksum shows slight dishonesty on your side; the difference in connotations between the words is significant.

F-Droid doesn't magically solve this problem. The root of trust comes from another SHA256 hash -- 61:DB:51:32:39:47:61:C4:D4:3F:8A:9B:AE:72:B0:2E:B0:8D:F3:B5:ED:F2:92:1C:7B:14:7E:2F:29:30:83:03 -- that authenticates the certificate of f-droid.org.

Or it comes from the hash F3:33:D2:E7:FA:A3:68:7F:B2:99:3E:6D:F6:9D:EE:1D:DA:77:36:11:DD:CA:B3:3A:B6:79:87:AA:40:56:94:22 that authenticates MIT's PGP key server, which hosts the signature-verification key for F-Droid clients: https://pgp.mit.edu/pks/lookup?search=f-droid&op=index All your suggestion does is add a layer or two that we hope the NSA doesn't compromise, in case you'd want to use that chain to install and validate Signal. And even if you personally verify the authenticity of the public key, you haven't solved the issue of private-key exfiltration via hacking. You need expensive hardware like HSMs to even start combating exfiltration. And Google can afford those.
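The colon-separated values above are just SHA-256 digests of the DER-encoded certificate. A sketch of computing one (the input bytes here are a placeholder, not a real certificate):

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint in the AA:BB:CC:... form quoted above."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# A pinning client compares the presented certificate's fingerprint
# against the value baked into the app and aborts on mismatch.
pinned = cert_fingerprint(b"placeholder DER bytes")   # stand-in input
assert cert_fingerprint(b"placeholder DER bytes") == pinned
assert len(pinned) == 32 * 2 + 31                     # 32 hex pairs + 31 colons
```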

"...centralized servers and trademarks."

Of course you can't give a fork the same or a similar name as the original. You don't want malicious entities creating projects with names like "Signal Official Client". Having a distinct name helps both the fork and the original.

Centralized servers fix a crucial issue: shitty designs that linger forever. They also fix the issue of having to support backwards compatibility indefinitely. Moxie can actually see what versions are still deployed, and push updates to most users. The idea being, you don't have to support older protocols (e.g. the group chat had a big issue that was, or currently is, being worked on) or implement backwards compatibility that risks downgrade attacks.

Let me give you an example. Riot decided to go with stupid, stupid base64 public-key fingerprints. The only way to jump to the smart choice of base10 is for all clients to switch at the same time. If one client shows the fingerprint in a different base, it's not compatible. Sure, you can add a feature that lets clients negotiate which fingerprint format to use, but then you need to get that deployed to every client. This happens really slowly, and it must usually follow the waterfall model, with these things first being decided in future revisions of the Matrix protocol. And if you want to know how that will turn out, take a good look at the OpenPGP working group: since the SHAppening, they haven't even been able to agree on a new hash function for fingerprints. Once decided, that hash function will wait years for the next revision of the protocol to be ready. Then you wait for it to be implemented in the upcoming reference libraries and their forks. And then you wait for those to be deployed in clients. Moxie changed all users' fingerprints from base16 to base10 -- my guess -- within a week, by pushing an update. The advantage of agility is obvious.

"But we have to trust that Moxie is running the server software he says he is."

For content encryption, we absolutely don't have to trust him. For metadata, yes, we must trust that the server runs the version that only collects the registration date and some other minor detail I forget. If you want to remove metadata, use Ricochet or Briar. Signal isn't lying about being anonymous by design; the only thing I think we can agree on is that it should be stated clearly on their front page: "End-to-end encrypted, but not anonymous; we know your phone number and IP address, and can see who you talk to, when, and how much".

"We can stop Signal from knowing when we’re talking to each other by using peer-to-peer chats."

Yes, but that doesn't prevent a global passive adversary from seeing who we connect to directly. In some authoritarian country, the government could see Alice and Bob talking to each other. With a centralized design, they only see a connection to a service providing domain fronting, or at most a connection to the Signal server. If you really wanted to solve this, you would run Ricochet or Briar.

Federation is a horrible idea. I trust that they are not personally interested in my metadata. I won't trust the metadata of all my chats to a friend of mine who runs a personal instance of Signal Server. He watches porn on that same computer. He downloads Russian game cracks to that computer. He has friends who are my enemies, and vice versa. He has repressed personal grudges, reasons to fuck me over; he doesn't have $50M in foundation money (and he'd prefer $5k over our weekend hang-outs, which admittedly are getting boring) or a strong cypherpunk ideology to prevent corruption. He's a Chinese refugee with relatives he loves in political prisons, waiting to hand out their organs to rich members of the political party, and he's being extorted for the metadata on his computer. His computer isn't patching itself automatically, so there's an RCE vulnerability that got him compromised by our common adversary. He clicked on the wrong link, once. The number of threats is endless.

A federated system doesn't distribute risk across hundreds of operators; it increases the attack surface tremendously, while merely reducing the number of users whose metadata is compromised in any single breach. But I don't care about others; I care about the fact that my friend doesn't have security as good as Google's and the Signal devs'. Government agencies are really, really, really, really good at hacking, and the trend is towards mass hacking. Shitty servers make that free, because attackers can use exploits that should already be useless due to system updates.

"Federation would also open the possibility for bridging the gap with several other open source secure chat platforms to all talk on the same federated network -"

Yeah, let's talk about that. Currently many Matrix channels lack end-to-end encryption because there is a backdoor: an IRC-bridge bot that leaks all conversations into a non-end-to-end-encrypted environment. Like you said, "Tradeoffs are necessary - but self-serving tradeoffs are not." The possibility of having such bots is extremely dangerous. The fact that Matrix isn't end-to-end encrypted by default is horrible. The E2EE is in beta, and the fingerprint verification in clients sucks. For the past three years I've been complaining about this, and every time a developer assures me it will be fixed. This bug should never have existed in the first place. Now users have become accustomed to the possibility of bridges to insecure systems.

"but those are all really convenient excuses for an argument which allows him to design systems which serve his own interests."

You should not make such generalized defamatory claims if you want to be taken seriously. I took this seriously at the start, but your arguments really lost their traction. It was another badly-thought-out post that didn't show understanding of the design choices, and it hurt more than it helped: people might now switch to the less secure Matrix protocol, or even go with the unaudited Tox, designed by non-experts.
