I've used wormhole once to move a 70 GB file. I couldn't have done that easily before. And yes, I know I used the bandwidth of the relay server; I donated to Debian immediately afterwards (they run the relay for the version in the apt package).
I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyway, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyway.
Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
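That extension could start out as simple as the server-side sketch below. This is entirely hypothetical: the budget figure, variable, and function names are invented for illustration and are not part of the real magic-wormhole protocol.

```python
# Hypothetical sketch of the proposed extension: clients declare the
# transfer size up front, and the relay may refuse. Nothing here exists
# in the real magic-wormhole protocol; names and the cap are made up.
MONTHLY_BUDGET_BYTES = 15 * 10**12  # assumed ~15 TB/month bandwidth cap
used_this_month = 0

def admit_transfer(declared_bytes: int) -> bool:
    """Return True if the relay accepts this transfer, False to say no."""
    global used_this_month
    if used_this_month + declared_bytes > MONTHLY_BUDGET_BYTES:
        return False  # server says no; the client could try another helper
    used_this_month += declared_bytes
    return True
```

A half-terabyte transfer would be admitted while the budget holds, and refused once the declared total would exceed it.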
Do you do NAT hole punching, and/or port mapping like UPnP or NAT-PMP? I think for all but the most hostile networks, use of the relay server can almost always be avoided.
Yes, it relies on two servers, both of which I run. All connections use the "mailbox server", to exchange short messages, which are used to do the cryptographic negotiation, and then trade instructions like "I want to send you a file, please tell me what IP addresses to try".
Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.
The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.
Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:
* my public IP addresses
* your public IP addresses
* helperA (after a short delay)
* helperB (after a short delay)
and the first one to negotiate successfully will get used.
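A rough sketch of that race-the-candidates logic is below. It is simplified (the function names are invented for illustration): the real transit protocol also authenticates each connection before committing to it, and would close the losing connections.

```python
# Sketch: try every candidate address concurrently, first successful
# connection wins. Relay candidates get a head-start delay so a direct
# connection is preferred when both would succeed.
import socket
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def attempt(host: str, port: int, delay: float = 0.0) -> socket.socket:
    """Try one candidate, optionally after a short delay."""
    time.sleep(delay)
    return socket.create_connection((host, port), timeout=5)

def first_successful(candidates: list) -> socket.socket:
    """Race (host, port, delay) candidates; return the first socket that
    connects. A real implementation would also close the losers."""
    with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
        futures = [pool.submit(attempt, h, p, d) for h, p, d in candidates]
        for fut in as_completed(futures):
            try:
                return fut.result()
            except OSError:
                continue  # this candidate failed; wait for the others
    raise OSError("no candidate connected")
```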
> since otherwise you just scp or rsync or sftp if you don't have the dual barrier
True, but wormhole also means you don't have to set up pubkey ahead of time.
All of them require an account on the other machine, and they aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.
If I have a direct network connection I tend to go with `python3 -m http.server` or `tar ... | nc`. Neither is great, but at least you'll find them preinstalled on many machines.
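For illustration, here is the same share-and-fetch flow driven from Python's stdlib rather than the shell. The scratch directory and file name are arbitrary; on a real receiving machine you'd use the sender's LAN address instead of 127.0.0.1.

```python
# Minimal sketch: serve a directory over HTTP and fetch a file from it,
# the programmatic equivalent of `python3 -m http.server`.
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Set up a scratch directory containing one file to share.
share_dir = tempfile.mkdtemp()
with open(os.path.join(share_dir, "demo.txt"), "w") as f:
    f.write("hello")

# "Sender" side: serve that directory on an OS-assigned port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=share_dir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Receiver" side: a plain HTTP GET. No auth, no encryption.
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/demo.txt").read()
print(data)  # b'hello'
server.shutdown()
```

Like the shell one-liners, this offers no authentication or encryption, which is exactly the gap wormhole's PAKE-derived codes fill.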
lotharrr|1 year ago
Thanks for making a donation!
Thanks for using magic wormhole!
password4321|1 year ago
As I'm sure you're aware: https://www.scaleway.com/en/stardust-instances/ "up to 100Mbps" for $4/month
jancsika|1 year ago
Does wormhole try something like that before acting as a relay?
pyrolistical|1 year ago
I know this requires one of the ends to be able to open ports or whatever but that should be baked into the wormhole setup.
AtlasBarfed|1 year ago
It relies on some singular or small set of donated servers?
NAT <-> NAT traversal is obviously the biggest motivator, since otherwise you just scp or rsync or sftp if you don't have the dual barrier.
Is the relay server configurable? Seemed to be implied it is somewhat hardcoded.