I love how this starts off with "What is a VXLAN..." — sorry, with that exact question: it was going to be my first one! So often posts like this seem to assume everyone knows what the topics are at the start. I know what WireGuard is, and I've at least heard of VXLAN before, but I couldn't remember what it was.
This can cause massive packet fragmentation. I'd be most interested in the performance degradation due to the L2 encapsulation. Are there any benchmarks available for this kind of project?
I've tunneled VXLAN over WireGuard on Linux. In my setup, my WAN's MTU was 1500 bytes, my WireGuard tunnel's MTU was 1550, and the VXLAN's MTU was 1500. Surprisingly, traffic and iperf3 tests going over the VXLAN had much better throughput than traffic going directly over the WireGuard connection. IIRC, I was pulling ~800 Mbps over the VXLAN/WG setup with iperf3.
Where this would fall apart is if there are firewalls in between that silently drop UDP fragments. In a case like that, it may be necessary to do VXLAN/Wireguard/Wireguard to conceal the fragmented packets with MTUs of 1500/1550/1440 respectively, assuming IPv4 and WAN MTU of 1500. I bet this would come with a significant performance hit though.
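A minimal sketch of the single-wrap setup described above, using iproute2 on Linux; the interface names (wg0, vxlan0), VNI, and peer addresses are placeholders, not anything from the article:

```shell
# Raise the WireGuard MTU above the WAN's 1500 so a full 1550-byte VXLAN
# frame (1500 bytes of payload + 50 bytes of VXLAN/UDP/IP overhead) fits in
# one tunnel packet; the underlay then fragments the encrypted UDP instead.
ip link set dev wg0 mtu 1550

# VXLAN riding over the tunnel, keeping a plain 1500 MTU for bridged frames.
ip link add vxlan0 type vxlan id 42 dstport 4789 \
    local 10.9.0.1 remote 10.9.0.2 dev wg0
ip link set dev vxlan0 mtu 1500 up
```

For the doubled VXLAN/WireGuard/WireGuard variant, the outer tunnel would get `mtu 1440` (1500 minus WireGuard's IPv4 overhead) while the inner one stays at 1550, so the fragments only ever exist inside the outer, encrypted tunnel.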
That's what I was thinking: unless you have jumbo frames, you're going to have a hard time stuffing Ethernet frames into IP payloads. Does VXLAN mitigate this somehow?
I took this to extremes last year: I used it to run MAAS from Australia to Sweden (which requires layer 2). Granted, I used Tailscale to make the WireGuard part even easier, but it was a lot of fun.
The feature I would be interested in is whether this can do link-state toggling on the VXLAN interface if the WireGuard handshake timer goes stale. If that works, then it becomes practical to do things like run OSPF routing over the VXLAN interface.
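Nothing built in does this as far as I know, but a small watchdog can approximate it. A sketch, assuming interface names wg0/vxlan0 and using 135 s as the staleness cutoff (WireGuard's worst-case rekey window):

```shell
#!/bin/sh
# Hypothetical watchdog: take the VXLAN link down when wg0's latest
# handshake goes stale, and bring it back up once handshakes resume.

# Pure decision helper so the policy is testable: prints "down" when the
# handshake timestamp ($2) is more than 135 s behind the current time ($1).
link_state() {
    [ $(( $1 - $2 )) -gt 135 ] && echo down || echo up
}

watch_wg() {
    while sleep 10; do
        # "wg show IFACE latest-handshakes" prints "<peer-pubkey> <epoch>"
        # per peer; take the first peer's timestamp.
        last=$(wg show wg0 latest-handshakes | awk '{print $2; exit}')
        ip link set dev vxlan0 "$(link_state "$(date +%s)" "$last")"
    done
}

# watch_wg   # uncomment to run (needs root and both interfaces present)
```

With the interface administratively down, an OSPF adjacency over it drops immediately instead of waiting out its own dead timer.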
What exactly would you be intending to accomplish? OSPF already has state timers, and furthermore runs just fine on a WireGuard interface without having to introduce a VXLAN tunnel.
This is fun, but applications requiring L2 adjacency do it to limit latency/distance. Creating an L2 domain between here and the moon, what are you gonna use it for? Certainly not anything other than fun.
There are a number of specific scenarios where this could be useful. For example, some SANs can only replicate to L2-adjacent units. Say you wanted a replica off-site and your gear is older/proprietary: you used to have to buy enterprise network gear to encap L2 and ship VLANs to remote sites. I wouldn't be dismissive of using VXLAN over WireGuard to accomplish that.
Can you use this to get Apple bonjour / mDNS working over a remote network (connected via VPN)? Or similarly, could you use it for a cloud seedbox to cast to a chromecast on your local network (via the VPN obviously)?
Wow, I can’t believe the HN audience is so accepting of stretched layer 2 as a solution. It’s almost as though we’ve been invaded by middle management.
IP over Ethernet over VXLAN over UDP over IP over WireGuard over UDP over IP over Ethernet… sigh
OpenBSD does support both routing domains and multiple routing tables, and includes multiple routing daemons in the base system. I would recommend the author stop hacking at the keyboard, grab whatever not-too-structured visualisation tool works for them (e.g. a whiteboard, a pad of paper, a random drawing app, Visio) and (re-)phrase the problem. Are you solving a problem or showing off how many acronyms you can expand without looking them up? This n-layer encapsulation can work, and can even be required to reproduce some (problematic) organisational structure, but it's far from elegant. Given the chance I would vastly prefer to just use multiple routing domains for the WireGuard tunnel interfaces and the underlay. It would result in far less complexity to manage as well as less overhead.
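For reference, a minimal sketch of that rdomain split in an OpenBSD hostname.if file; the keys, addresses, and rdomain numbers are placeholders:

```
# /etc/hostname.wg0 -- overlay in rdomain 1, encrypted underlay in rdomain 0
rdomain 1
wgkey <private-key> wgport 51820 wgrdomain 0
wgpeer <peer-public-key> wgendpoint 192.0.2.1 51820 wgaip 10.9.0.0/24
inet 10.9.0.2 255.255.255.0
up
```

The wg(4) interface itself lives in rdomain 1, while `wgrdomain 0` sends the encrypted packets out via the default routing domain, keeping overlay and underlay routing tables fully separate.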
Why do so many people insist on tunneling Ethernet over IP? What's keeping operators from using IP routing (and just one layer of encapsulation) instead? Is IP routing so scary, or does everyone have indispensable applications that only work over Ethernet?
Just be grateful that nobody has tried to wrap the entire thing in JSON over HTTP yet! I wouldn't be surprised if we get WireGuard over WebSockets for "enterprise" applications soon.
Sometimes you just need an L2 tunnel. Most of the time you don't, but when you do, you do. For example, if you use IPv6 over SLAAC in a private network, you'll need to route NDP.
In the rare cases that you do need an L2 tunnel between two different locations, you probably want some kind of authorisation and authentication of the traffic to prevent injection/spoofing attacks and to make life just a bit harder for the NSA (Google's use of plain HTTP on its internal links was one way the NSA managed to tap connections that were otherwise encrypted by HTTPS). After all, this isn't just any traffic, these are internal subnets.
In terms of authorised traffic, Wireguard is quite lightweight and foolproof. Perhaps IPSec is even more lightweight but it's a pain to set up. The alternative would be to wrap all internal network traffic in an encrypted protocol and set up the necessary whitelists in the upstream ISPs.
The impact of such layering depends on the network connection between the data centers. If you can get jumbo packets across, fragmentation won't be a problem at all. If you run your own fiber between data centers, there's basically no downside until you're reaching very high network saturation.
I end up having to run basically this very setup (on OpenBSD, too) because I have a customer who has a Novell NetWare 5 setup and runs IPX only. Bad times.
vMotion needs L2 adjacency to make live-migrating VMs easy. Some software relies heavily on broadcast discovery messages and is thus designed for LAN usage, not Internet connectivity, but businesses try to stuff a square peg into a round hole.
But that doesn't provide layer 2 between networks. Think of devices that are hardcoded to communicate via broadcast or multicast with a TTL of 1: you either need some active reflector and have to cope with any peculiarities of the device, or you simply extend a single VLAN between two routers (using VXLAN or another solution).
I sometimes need to extend a system like this from one site to another. One is a Calrec system (an audio mixer; I think it's the control traffic that needs to be sent), and I don't have enough access or time to see if I could build some kind of transparent proxy -- it won't work with multicast routing.
I do however have enough time to create a layer 2 network between two NICs. I tend to use MikroTiks for that: create an EoIP tunnel (GRE with proprietary add-ons to cope with fragmentation) between the two endpoints, pop the interface in a bridge with a physical port, and move on.
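On RouterOS that comes down to a few lines on each end; a sketch with placeholder addresses, names, and tunnel ID:

```
/interface eoip add name=eoip1 remote-address=203.0.113.2 tunnel-id=10
/interface bridge add name=br-stretch
/interface bridge port add bridge=br-stretch interface=eoip1
/interface bridge port add bridge=br-stretch interface=ether2
```

Anything plugged into ether2 then shares a broadcast domain with the far side of the tunnel.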
There's a weird font-rendering bug on this site that causes the text in the code blocks to be unreadable unless you highlight it with your mouse. If you enable Javascript, it seems to fix it.
Not sure if the author is reading this thread, but it's something you may find worth investigating.
blakesterz | 3 years ago
generalizations | 3 years ago
Denatonium | 3 years ago
jacob019 | 3 years ago
jmclnx | 3 years ago
https://web.archive.org/web/20230214134248/https://rob-turne...
A little bit over my head, but an interesting read.
anton5mith2 | 3 years ago
https://medium.com/@antongslismith/bare-metal-cloud-provisio...
lillecarl | 3 years ago
floatinglotus | 3 years ago
How does it handle segmenting jumbo frames? Etc.
yokem55 | 3 years ago
gorkish | 3 years ago
noobface | 3 years ago
skullone | 3 years ago
TechBro8615 | 3 years ago
anton5mith2 | 3 years ago
frupert52 | 3 years ago
Stretched layer 2 is almost always a mistake.
miller_joe | 3 years ago
crest | 3 years ago
jeroenhd | 3 years ago
lstodd | 3 years ago
(also I think you lost one 'over UDP')
systems_glitch | 3 years ago
wyager | 3 years ago
aroulin | 3 years ago
supertrope | 3 years ago
candiddevmike | 3 years ago
iso1631 | 3 years ago
johnklos | 3 years ago
"You don't need to do that thing."
How do you know?
vxxzy | 3 years ago
icedchai | 3 years ago
tristor | 3 years ago
Arnavion | 3 years ago