> a specimen of such writing, which I may safely defy the united ingenuity of the whole human race to decypher, to the end of time
brazzy | 7 years ago:

Turns out to have been rather optimistic. But I'd say that holding up for 200 years before yielding to a combination of modern mathematics and brute computational force that would have been utterly unimaginable to Patterson is still quite impressive.
nothis | 7 years ago:

Anything with probability and randomness tends to break my brain, but I find it fascinating.

If I interpret the 200-years-late solution correctly, it relied in large part on trial and error (one "trick" is to focus on adjacent rows, which seems more of a hunch than a deep mathematical strategy). Things like this make me wonder why so many modern encryption standards rely on "simple" mathematical concepts that are easily testable (even if testing might take a million years). Even if that's your base, why not throw some random algorithms on top, like swapping every 7th symbol, reversing the whole thing, and then adding a random one in the middle? Wouldn't that immediately make it more tedious to decipher at not much additional cost? Is this done anywhere?
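A minimal sketch of the kind of ad-hoc extra layer proposed above (the function name and exact steps are illustrative, not from the thread):

```python
import random

def obfuscate(ciphertext: str) -> str:
    """Apply the proposed extra layer on top of an existing ciphertext:
    swap every 7th symbol with its neighbour, reverse the whole string,
    then insert one random letter in the middle."""
    chars = list(ciphertext)
    for i in range(6, len(chars) - 1, 7):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap every 7th symbol
    chars.reverse()
    chars.insert(len(chars) // 2, random.choice("abcdefghijklmnopqrstuvwxyz"))
    return "".join(chars)

print(obfuscate("attackatdawn"))  # e.g. "nwadatqkcatta" - one letter longer
```

As the replies note, a layer like this adds tedium for a human solver but essentially nothing against an attacker who knows the design.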
djangovm | 7 years ago:

Isn't keeping the algorithm open but the keys secret the premise of modern cryptography? The idea is that an open algorithm can be tested for vulnerabilities, while a secret algorithm may actually be broken without you ever knowing, which makes it dangerous.

It is done by amateurs who design "super secret encryption algorithms" that nearly always turn out to be trivially breakable.
brazzy | 7 years ago:

I'll assume you don't mean this "random algorithm" you "throw in" to be secret (otherwise see the other response), and that you instead hope complicating the algorithm will make cryptanalysis more difficult.

The thing is: you don't want cryptanalysis to be difficult! You want the world's cryptographic experts to look at your algorithm, immediately see the potential ways to crack it, and quickly find that none of them actually work.

A complex encryption algorithm is bad because vulnerabilities can hide in the complexity. That "swapping every 7th symbol" could actually make the cipher easier to crack by creating subtle patterns in the output.
herodotus | 7 years ago:

My understanding of the solution is a little different. The virtue of the cryptographic scheme was that, by adding padding characters carefully, it made frequency analysis of single letters useless. What the solution does instead is look for digraphs - pairs of letters like "th" or "qu" that are common in English - and assign low scores to unlikely digraphs like "dx". Doing two rows at a time is just part of this strategy.
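The digraph strategy can be sketched as follows (this is not the actual solvers' code, and the log-frequency values below are made-up placeholders, not real corpus statistics):

```python
# Toy log-frequency table for a few English digraphs; any digraph not
# listed gets a low floor score, penalising unlikely pairs like "dx".
DIGRAPH_LOGFREQ = {
    "th": -3.2, "he": -3.3, "in": -3.5, "er": -3.6, "an": -3.7,
    "re": -3.8, "qu": -5.0, "dx": -12.0,
}
FLOOR = -10.0

def digraph_score(text: str) -> float:
    """Score a candidate plaintext by summing the log-frequencies of
    its adjacent letter pairs; higher means more English-like."""
    return sum(DIGRAPH_LOGFREQ.get(text[i:i + 2], FLOOR)
               for i in range(len(text) - 1))

# English-like candidates outscore scrambled ones.
print(digraph_score("therein") > digraph_score("dxqzkvw"))  # → True
```

A solver can then try candidate row alignments and keep whichever scores highest, which is why working on two adjacent rows at a time fits naturally into the approach.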
IncRnd | 7 years ago:

Those kinds of strategies and tricks don't work in practice. A foundational principle in security and cryptography (Kerckhoffs's principle) is to assume an attacker possesses the entire design of your system except for the secret keys. So just assume that an attacker knows about your funny business every 7 characters: all they would do is strip it out.
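The "strip it out" step can be made concrete. Assuming the hypothetical layer proposed earlier in the thread (swap every 7th symbol, reverse, insert one letter in the middle), an attacker who knows the recipe just undoes each step in reverse order:

```python
def strip_layer(obfuscated: str) -> str:
    """Undo the known obfuscation layer: remove the inserted middle
    letter, un-reverse the string, and redo the swaps (swapping two
    neighbours is its own inverse)."""
    chars = list(obfuscated)
    del chars[(len(chars) - 1) // 2]  # drop the inserted random letter
    chars.reverse()                   # undo the reversal
    for i in range(6, len(chars) - 1, 7):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# "nwadatqkcatta" is "attackatdawn" after that layer (with 'q' as the
# random filler); stripping recovers the underlying text exactly.
print(strip_layer("nwadatqkcatta"))  # → attackatdawn
```

Whatever cipher sits underneath is then exactly as hard, or as easy, to break as it was before the layer was added.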
olskool | 7 years ago:

All they would need to do is download your trial program, decompile it, and look at the decryption code. That takes seasoned people minutes.