dennisjac's comments

dennisjac | 7 years ago | on: WireGuard: Next Generation Kernel Network Tunnel [pdf]

As you have stated elsewhere, negotiation has been a huge problem in other protocols and makes things much more complicated, and I agree with that. My concern was merely with how absolute this stance is, i.e. whether the sentiment runs along the lines of "WireGuard will only ever support a single version and potential upgrade paths are the problem of the users" or more like "WireGuard will avoid negotiation wherever possible, but when the cipher primitives are deprecated (not broken) by the community we might introduce a replacement while keeping support for the old primitives for a while for upgrade purposes".

Have you considered mentioning more explicitly on the WireGuard page how you intend to deal with cipher breakage/deprecation?

dennisjac | 7 years ago | on: WireGuard: Next Generation Kernel Network Tunnel [pdf]

In terms of negotiation there is a lot of room between the two extremes of "none" and "highly complex", but I agree that one of the big appeals of WireGuard is that you no longer have to fill out weird spec sheets to coordinate cipher suites with the admin on the other side of the connection.

Having said that, I still would have preferred something like a single increasing integer as the "cipher suite version". This would have allowed WireGuard to be updated asynchronously on both ends without any additional configuration or cipher suite coordination with the peer.
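To make the idea concrete, here is a minimal sketch of what such a scheme could look like. This is not WireGuard's actual wire format; the header layout, version numbers, and function names are all hypothetical, purely to illustrate a single increasing integer acting as the cipher suite version:

```python
import struct

# Hypothetical wire format: a one-byte message type followed by a
# one-byte monotonically increasing "cipher suite version".
HEADER = struct.Struct("<BB")  # msg_type, suite_version

# Suite versions this (hypothetical) build still understands.  An
# updated peer could advertise version 2 while we keep accepting 1
# during the transition window.
SUPPORTED_SUITES = {1, 2}

def build_header(msg_type: int, suite_version: int) -> bytes:
    """Prefix an outgoing message with type and suite version."""
    return HEADER.pack(msg_type, suite_version)

def select_suite(packet: bytes) -> int:
    """Return the peer's suite version if we support it, else raise."""
    msg_type, suite_version = HEADER.unpack_from(packet)
    if suite_version not in SUPPORTED_SUITES:
        raise ValueError(f"unsupported cipher suite version {suite_version}")
    return suite_version
```

With something like this, both ends could be upgraded independently: the newer side speaks the highest version both sides support, and the old suite is dropped from `SUPPORTED_SUITES` once every peer has migrated.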

dennisjac | 7 years ago | on: WireGuard: Next Generation Kernel Network Tunnel [pdf]

While this is a possibility, it is still strange not to specify something like a protocol version intentionally. Even if an updated implementation used these fields for something like that, it still couldn't communicate properly with an older implementation that doesn't understand the new semantics of those fields.

Also, looking at the struct, it seems the three bytes are only reserved in order to align the following fields on 4-byte boundaries.
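If I'm reading the published message layout right, the handshake initiation message starts with a one-byte type, three reserved zero bytes, and then a little-endian 32-bit sender index, so the reserved bytes do exactly this alignment job. A small sketch of parsing that prefix (field names are mine, and the layout is my reading of the spec, so treat it as an assumption):

```python
import struct

# Assumed prefix of a WireGuard handshake-initiation message:
# u8 type, 3 reserved zero bytes, u32 little-endian sender_index.
# The three reserved bytes pad sender_index onto a 4-byte boundary.
PREFIX = struct.Struct("<B3sI")  # type, reserved, sender_index

def parse_prefix(packet: bytes):
    """Return (msg_type, sender_index); reject nonzero reserved bytes."""
    msg_type, reserved, sender_index = PREFIX.unpack_from(packet)
    if reserved != b"\x00\x00\x00":
        raise ValueError("reserved bytes must be zero")
    return msg_type, sender_index
```

Note that `PREFIX.size` is 8: without the three reserved bytes the sender index would start at offset 1 and be misaligned.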

If the authors had intended to allow for some kind of asynchronous update path, surely it would have been built explicitly into the protocol right from the start.

dennisjac | 7 years ago | on: WireGuard: Next Generation Kernel Network Tunnel [pdf]

You cannot really predict this because you don't know when a weakness in a cipher will be discovered. Yes, it might never happen, but it might also happen three days from now. Any piece of software that involves cryptography must be able to replace the primitives it uses quickly in case they are compromised.

Where did you get the information about how WireGuard would handle such a transition? Looking at https://www.wireguard.com/protocol/ I can see no protocol version or other means to distinguish between an old and a new version of the protocol. Also, this would introduce additional configuration and a negotiation step, which seems to run counter to the motivation of the project.

Not preparing for this inevitability seems foolish, which leads me to believe that this was not an oversight but a deliberate design decision by the creators of WireGuard.

dennisjac | 7 years ago | on: WireGuard: Next Generation Kernel Network Tunnel [pdf]

I'm a bit unsure about the fact that WireGuard has no negotiation capabilities whatsoever. On the one hand this makes the configuration of tunnels dead simple compared to e.g. IPsec, but on the other hand it also means that all participating systems are extremely tightly coupled. If one system updates to a kernel containing a WireGuard version with crypto changes, then all peers have to update their WireGuard versions at exactly the same time or the tunnels break. This is easy if you have point-to-point tunnels where you control both ends of the connection, but could potentially be a nightmare for tunnels to other companies or road-warrior setups. I fear that in these cases many will be forced to stick with IPsec due to these constraints.

dennisjac | 8 years ago | on: Linux Performance: Why You Should Almost Always Add Swap Space

I always see people caring way too much about the amount of swap space used and not enough about the swap activity going on. It isn't the amount of swap space in use that slows a system down but the actual swapping of memory pages in and out. In vmstat you want to pay attention to the "si" and "so" columns, and in your alerting/graphing you want to keep track of the "pswpin" and "pswpout" values in /proc/vmstat. If these are almost always at or near zero, then the fact that some memory pages are swapped out has virtually no impact on the performance of your system.
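The point about tracking activity rather than usage can be sketched in a few lines. The `pswpin`/`pswpout` counter names come from /proc/vmstat itself; the function names and sampling approach here are just one way to do it:

```python
def swap_counters(vmstat_text: str) -> dict:
    """Extract the pswpin/pswpout counters from /proc/vmstat text.

    These are cumulative counts of pages swapped in/out since boot.
    """
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters

def swap_activity(prev: dict, curr: dict) -> dict:
    """Pages swapped in/out between two samples.

    Deltas at or near zero mean the swapped-out pages are sitting
    idle and are not hurting performance, regardless of how much
    swap space appears "used".
    """
    return {k: curr[k] - prev[k] for k in ("pswpin", "pswpout")}
```

In practice you would read /proc/vmstat on an interval (e.g. every 60 seconds) and graph the deltas, which is essentially what the "si"/"so" columns of vmstat show per second.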

There are two other important issues to take into account though. 1) Even if the swapped-out memory pages are generally not accessed, they might be forced back into memory by some specific action. One example is a database that has all the hot records in memory but other, very rarely accessed records swapped out to disk. In general everything will perform fine in this situation, but the moment you do e.g. a table scan and a lot of these records need to be moved back into memory, you might see a disk I/O spike that can be quite a blow to overall performance if the database is really busy.

2) If you don't have any swap space configured you might still run into problems with swap, which seem to be caused by a bug in the kernel's memory management. I've seen this on some KVM hypervisors running CentOS 7. These systems were equipped with 128G of RAM and ran two virtual machines, each configured with 32G of virtual RAM. They ran fine until one day the kswapd kernel process started running at 100% CPU usage even though no swap was configured whatsoever (precisely to avoid the situation mentioned above). The "fix" was to drop the system's caches with "echo 3 > /proc/sys/vm/drop_caches", which seemed to calm kswapd down again.

As best as I can tell, the system used all the free RAM for the page cache and buffers, and when it needed memory it apparently preferred to swap pages out to disk (even though no swap was configured) rather than reclaiming page cache, of which there was plenty. Unfortunately that means there seems to be no bulletproof way to say "only use physical RAM and never try to swap anything out to disk". Even /proc/sys/vm/swappiness can be dangerous, as a value of "0" doesn't actually tell the system to swap only if absolutely necessary, but can lead to OOM situations even while swap space is still available (see https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux... for details).

TL;DR: 1) Don't just pay attention to the amount of swap space used but to the actual swap activity over time. 2) Be aware of corner cases and bugs relating to swap.
