> HAProxy is staying true to its principle of not accessing the disks during runtime and so all objects are cached in memory. The maximum object size is as big as the value of the global parameter “tune.bufsize”, which defaults to 16KB.
> This is a general-purpose caching mechanism that makes HAProxy usable as a small object accelerator in front of web applications or other layers like Varnish. The cool thing here is that you can control how large the objects to cache and accelerate delivery should be.
Also, I understand we can configure the maximum object size (with tune.bufsize).
But can we configure the total cache size (i.e. the number of tune.bufsize-sized objects kept in memory)?
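For what it's worth, the 1.8 cache section does expose a total-size knob. A minimal sketch (names and sizes are illustrative, not from the article): tune.bufsize caps a single object, total-max-size bounds the whole cache:

```
global
    tune.bufsize 16384        # upper bound on a single cacheable object (bytes)

cache small_objects
    total-max-size 256        # total memory reserved for the cache, in megabytes
    max-age 60                # seconds before a cached object is considered stale

frontend fe_web
    bind :80
    default_backend be_app

backend be_app
    http-request cache-use small_objects
    http-response cache-store small_objects
    server app1 127.0.0.1:8080
```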
While I'm excited to be able to use haproxy for this, one alternative on nginx is to store the hostname you want looked up dynamically in a variable, and use that variable in a proxy_pass directive or equivalent. E.g. "proxy_pass http://$backend:80;" will cause Nginx to resolve the value of $backend at request time through any resolver you have defined, without needing Nginx Plus.
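A sketch of that nginx pattern (resolver address and hostname are hypothetical):

```
resolver 10.0.0.2 valid=10s;            # re-resolve cached names every 10 seconds

server {
    listen 80;
    location / {
        set $backend app.internal.example.com;
        proxy_pass http://$backend:80;  # variable forces resolution at request time
    }
}
```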
I'm a bit confused about the runtime capabilities. Does "set server" mean I can register new frontends at runtime? I mean, if we have a static DNS that basically just has a wildcard entry routing to HAProxy - can I add new routes there dynamically at runtime?
There is no way to register new frontends, new binds, or new backends.
We can only add/remove servers in backends for now. The "registration" you mentioned may happen later.
Please note that you can already do dynamic routing using ACLs or MAPS and updating your ACLs or MAPs content at runtime using the Runtime API (stats socket).
There are "add map", "set map" and "del map" commands for this purpose.
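As a sketch of that approach (file path and backend names are hypothetical): route on a map keyed by the Host header, then edit the map over the stats socket without a reload:

```
# /etc/haproxy/hosts.map contains lines like:
#   example.com   be_example

frontend fe_main
    bind :80
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,be_default)]

# At runtime, e.g. with socat against the stats socket:
#   echo "set map /etc/haproxy/hosts.map example.com be_new" | socat stdio /var/run/haproxy.sock
```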
> How can the arguably best piece of open source tech keep getting better?
Letsencrypt support? I realize that's a big undertaking though. There is a Lua plugin, but it depends on certbot being installed and running, which is a bit difficult if you use the Docker image (there are some Docker containers on GitHub that achieve this by running supervisord in the container for cron/certbot and haproxy).
This is probably outside the scope of HAProxy itself though, and could perhaps be implemented entirely as a Lua plugin that handles ACME.
Kudos to the HAProxy devs and contributors. I've always been a fan and great to see them pushing forward with these big improvements.
> The only thing I could ask for is a REST admin api to assist with my deploys.
I run haproxy in docker - create a shell script that allows your jenkins (or whatever you use) to send a SIGHUP to the haproxy container, and it will reload the config.
Are there any plans to add UDP support to HAProxy? I wanted to use HAProxy as a SIP/RTP proxy at my last job, but was unable to do so as we were pushing SIP and RTP as UDP.
That is superb! I love haproxy but although it's rugged and rock-solid it's always seemed like a relic from the old way of doing things. Adding service discovery brings its reliability into a microservice age. Awesome.
HTTP/2 was designed for high-latency networks. It supports stream multiplexing and header compression to significantly shrink page load time over the internet. It has very limited use on the local network. Regarding server-side push, it brings about as many issues as benefits and is often counter-productive. Hopefully it will disappear entirely once Early Hints are standardized (delivering the same benefits without the issues).
That said, we still want to address server-side H2 in 1.9 to address the CDN type of workloads where the origin servers might be far away. But that's less important.
Haproxy would usually be on the same network as your www servers, so even without http2 the response time for the backend requests will be tiny compared to the client <-> haproxy roundtrip time.
The promise of http2 is to lower latency on the client side and having H2 on the frontend will help with that.
Fewer concurrent connections for the client if you have a resource-heavy page. HTTP/1.1 still gives you stair-stepped responses if you have a large chain of dependencies. Chrome caps concurrent connections at around 10 or so.
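For reference, client-side H2 in 1.8 is enabled via ALPN on the bind line; a minimal sketch (certificate path and names hypothetical):

```
frontend fe_https
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app
```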
gregmac | 8 years ago
If this is on a server with a few GB RAM, is there any reason not to use it as the only cache? Is there any reason not to set the bufsize to a few MB, for example (assuming memory is not a constraint)?
I have a somewhat complex setup with several micro services each served by a dynamic set of backends. Caching for some of it would be nice, but I couldn't figure out a way to use varnish (or other external cache) without ending up with 1 varnish instance per backend (a pain to maintain), introducing a single point of failure (one per service - and still a pain to maintain), or losing some of the benefit and power of haproxy as a frontend (with varnish in front of it).
snvzz | 8 years ago
Disk access can never be deterministic, which is terrible if you care about latency.
KeybInterrupt | 8 years ago
Awesome! Thank you!
tamalsaha001 | 8 years ago
Disclaimer: We develop https://github.com/appscode/voyager, an HAProxy-based ingress controller for Kubernetes.
tomfitz | 8 years ago
Is this true for A records too?
If so, neither haproxy nor nginx expire cached A records.
Nginx Plus does, and a few nginx plugins do, however.
https://github.com/airbnb/synapse is a process that polls DNS, updates the haproxy config accordingly, and SIGHUPs haproxy. I've used synapse to solve this issue, but it's a moving piece I'd rather not have involved.
bedis9 | 8 years ago
HAProxy won't simply follow the TTL returned by the server. It's up to the administrator to decide how HAProxy should behave with DNS responses.
From my point of view, you don't need synapse any more if your usage of synapse is limited to this single feature.
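A sketch of that setup (addresses and names hypothetical): the "hold valid" period chosen by the administrator, not the record's TTL, decides how long a resolution is trusted:

```
resolvers internal_dns
    nameserver dns1 10.0.0.2:53
    hold valid 10s            # how long a successful resolution remains valid

backend be_app
    server app1 app.example.internal:80 check resolvers internal_dns resolve-prefer ipv4
```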
m_mueller | 8 years ago
About persistence: Does HAProxy have IP-based session persistence, i.e. always routing the same client IP to the same backend server, as long as that server is available?
terlisimo | 8 years ago
Look up "stick" functions in HAProxy docs. You can even persist TCP connections.
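For source-IP persistence specifically, a minimal sketch (names and addresses hypothetical):

```
backend be_app
    balance roundrobin
    stick-table type ip size 200k expire 30m   # remember up to 200k client IPs for 30 minutes
    stick on src                               # pin each source IP to the server it first hit
    server app1 10.0.1.10:80 check
    server app2 10.0.1.11:80 check
```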
bedis9 | 8 years ago
For now, there is one in HAProxy Enterprise, and it manages HAProxy's configuration file and triggers reload.
We're working on opening it, as soon as we have improved it: make it use both HAProxy Runtime API (stats socket) and configuration file to trigger reloads only when required.
Stay tuned as we say :)
kuschku | 8 years ago
Chrome detects that email field and assumes it might be part of a login form (because it is <input id="email" name="email" […] type="text">). That's what triggers it.
will_hughes | 8 years ago
All around good stuff, but I can't wait for multithreading.
vacri | 8 years ago
The server at www.haproxy.com is taking too long to respond.
Not the best advert for a high availability proxy...