This line made me go back and forth between it and the ssl3_record_st structure definition, inspecting it carefully, since the way it was written looked like a flexible array member:
unsigned char *p = &s->s3->rrec.data[0], *pl;
I think it would be better written as:
unsigned char *p = s->s3->rrec.data, *pl;
to make it clearer that data is a pointer within the structure.
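To illustrate the distinction, here's a minimal sketch; the struct names below are simplified stand-ins, not the real OpenSSL definitions:

```c
#include <stddef.h>

/* A flexible array member: the bytes live inline at the end of the
   struct, and there is no pointer to dereference. */
struct inline_rec {
    size_t length;
    unsigned char data[];
};

/* What ssl3_record_st actually has: a pointer member referring to a
   separately allocated buffer. */
struct ptr_rec {
    size_t length;
    unsigned char *data;
};

/* For a pointer member, `&r->data[0]` and `r->data` yield the exact
   same address; the second spelling makes the indirection obvious. */
static unsigned char *record_start(const struct ptr_rec *r) {
    return r->data;
}
```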
Thanks; fascinating. The "length" field of the payload is useful for error checking, but the implementer didn't take advantage of this property, so it was usable against them. A double-edged sword.
I'm sort of surprised an allocation occurs every time the heartbeat is sent. That is a lot of trips to the heap.
I'm not very familiar with how TLS heartbeats are implemented, but I wonder if the buffer could have just been alloc'd once when the connection was created.
I believe there's another allocation that occurs every time a heartbeat request is received, and I was just as surprised that they allocate a new buffer, copy the payload over (which is where the bug was), and basically construct a new message, when they could've just validated the incoming message, changed the type + padding, and sent it back out using the same buffer. The heartbeat extension is only slightly more complex than an echo protocol.
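Something like this hypothetical in-place echo, in other words. The type values 1/2 match RFC 6520, but the function and its handling of padding are made up for illustration:

```c
#include <stddef.h>

#define HB_REQUEST  1   /* TLS1_HB_REQUEST in RFC 6520 terms */
#define HB_RESPONSE 2   /* TLS1_HB_RESPONSE */

/* Turn a validated heartbeat request into a response without any
   allocation or copy: flip the type byte, leave payload in place.
   (Padding would also need re-randomizing; omitted here.) */
static int hb_make_response_inplace(unsigned char *msg, size_t msglen) {
    if (msglen < 1 || msg[0] != HB_REQUEST)
        return 0;               /* not a request: nothing to echo */
    msg[0] = HB_RESPONSE;       /* payload bytes are untouched */
    return 1;
}
```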
It's probably much harder to remove the allocation on reception due to how the rest of OpenSSL is written, but at least from a glance at the code, it looks a lot easier to get rid of the sending one. In terms of design, the simplest implementation would be malloc-on-receive + modify-and-send; a little better is an expanding buffer that's allocated once and reallocated only if necessary; and to me, the way it's currently being done is the most complex, inefficient, and error-prone of the three.
Having many unnecessary dynamic allocations tends to be a trend I've noticed most often in C/C++ code written by programmers with a Java background. Not saying that this necessarily applies to heartbleed's culprit, but the general trend of excessive complexity is there.
I wouldn't know, but in any case that would require allocating the whole 64KiB for each connection, when the actual heartbeats can be much smaller. Wouldn't the memory waste be worse than the allocations?
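The expanding-buffer idea sidesteps that trade-off: start empty, grow only when a larger heartbeat actually arrives, and never pay the full 64KiB up front. A sketch, with entirely hypothetical names (nothing here exists in OpenSSL):

```c
#include <stdlib.h>

/* Per-connection scratch buffer, reused across heartbeats. */
struct hb_conn {
    unsigned char *buf;   /* NULL until first use */
    size_t cap;           /* current capacity in bytes */
};

/* Ensure the buffer can hold `need` bytes. Returns 1 on success.
   No heap trip at all when the buffer is already big enough. */
static int hb_reserve(struct hb_conn *c, size_t need) {
    if (c->cap >= need)
        return 1;
    unsigned char *p = realloc(c->buf, need);
    if (p == NULL)
        return 0;         /* old buffer still valid on failure */
    c->buf = p;
    c->cap = need;
    return 1;
}
```

So a connection exchanging small heartbeats only ever holds a small buffer, and only a peer that actually sends large payloads costs large memory.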
I like how those bounds checks (the ifs) have no curly braces, as if that Apple security bug didn't wake people up about such trivial opportunities for bugs.
I don't get it either. I'm an old programmer full of habits too, I suppose, but I had no problem adopting mandatory curly braces after reading the rationale for them in Golang years ago. I would think the Apple bug was an even stronger reason to adopt the mandatory-curly-braces habit...
Except in this case, an extra 'return 0' would just mean ignoring the heartbeat message completely. It's a slightly different situation from Apple's bug.
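A contrived sketch of the hazard (not the actual OpenSSL or Apple code): with braces, a duplicated line stays inside the guarded scope; without them, it silently becomes unconditional:

```c
/* Braced: every statement is clearly guarded by the condition. */
static int check_braced(int len, int max) {
    if (len > max) {
        return -1;        /* reject oversized length */
    }
    return len;
}

/* Unbraced: imagine a bad merge duplicating the indented line.
   The second copy attaches to nothing and always executes. */
static int check_unbraced(int len, int max) {
    if (len > max)
        return -1;
        return -1;        /* looks guarded, is unconditional */
    return len;           /* now unreachable */
}
```

Here the failure mode is rejecting everything (ignoring the message), whereas Apple's duplicated `goto fail;` accepted everything, which is what made it exploitable.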
Indeed, seeing that coding style should raise further questions -- if the fix to a huge security hole is somewhat sloppy, what are the chances that everything else is sloppy?
I wonder if any of the existing static code analyzers would have found this?
PVS-Studio checks some open source projects and posts part of the results on their blog. I did a search and found that they did take a look at OpenSSL in 2012: http://www.viva64.com/en/b/0183/
And Coverity: https://scan.coverity.com/projects/294
I wonder how "unvalidated user input passed to memcpy()" didn't trigger all of the static analysers...
I guess that the analyser doesn't know that it's untrusted - perhaps it's worth having separate "trusted int" and "untrusted int" data types, so this would've been a compile-time error?
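Even without analyser support, distinct wrapper types get most of the way there: a length straight off the wire can't reach memcpy() without passing through a validator, because the types don't convert implicitly. A sketch with hypothetical names:

```c
#include <stddef.h>

/* Distinct types: assigning one to the other is a compile error. */
typedef struct { size_t value; } untrusted_len;   /* straight off the wire */
typedef struct { size_t value; } trusted_len;     /* has been validated */

/* The only way to turn an untrusted length into a trusted one.
   Returns 0 if the claimed length exceeds what was actually received. */
static int validate_len(untrusted_len in, size_t available,
                        trusted_len *out) {
    if (in.value > available)
        return 0;
    out->value = in.value;
    return 1;
}
```

A memcpy wrapper taking only `trusted_len` would then make the Heartbleed pattern (attacker claims 64KB, record holds a few bytes) unrepresentable without an explicit validation call.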
It's the biggest security vulnerability in the modern crypto age. Sysadmins everywhere are scrambling to renew keys and certs. It warrants at least 8 stories on the frontpage.