I can see why you might not want to, though. App ELBs charge by usage and can get somewhat expensive (like running another EC2 instance or two). They can also have cold-start performance issues in specific circumstances (traffic spikes).
That doesn't solve the problem of the hostname on the EC2 instance itself being the same across all instances, thereby making it harder to see which logs came from which hosts.
Can't this be solved by using IP addresses for hostnames? This can be part of the bootstrap script, which ASG/Launch Configurations already support via UserData[1].
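For instance, a user_data step along these lines could name the host after its private IP. This is a minimal sketch, not the post's actual script; the function names are made up, and only the metadata path and the `ip-a-b-c-d` convention follow EC2's defaults:

```shell
#!/bin/bash
# Sketch of a user_data bootstrap step that names a host after its private IP.
# 169.254.169.254 is the standard EC2 instance metadata service endpoint.

ip_to_hostname() {
  # Turn e.g. 10.0.1.23 into ip-10-0-1-23 (mirrors EC2's default naming).
  local ip=$1
  echo "ip-${ip//./-}"
}

set_hostname_from_metadata() {
  local ip
  ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
  hostnamectl set-hostname "$(ip_to_hostname "$ip")"
}
```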
I already do this from my configuration management system on my instances, but one thing I'd love Route53 to support is handling in-addr.arpa zones for my IP addresses, so I can get reverse IP lookups for my VPC networks without having to run my own resolver.
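For reference, the reverse-DNS name for an IPv4 address is just its octets reversed under the in-addr.arpa zone. A tiny illustrative sketch (the helper name is hypothetical):

```shell
# Hypothetical helper: derive the PTR record name for an IPv4 address,
# i.e. the octets reversed under in-addr.arpa
# (e.g. 10.0.1.23 -> 23.1.0.10.in-addr.arpa).
ptr_name() {
  local IFS=.
  set -- $1          # split the address into its four octets
  echo "$4.$3.$2.$1.in-addr.arpa"
}
```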
I wrote and regularly use this: a Docker app that adds and removes instances from Route53, similar to their Terraform solution. Similar idea, different implementation.
Why should an instance created by an ASG have a hostname? These are cattle, not pets. I use Serilog for logging with an EC2 enricher that automatically adds the instance ID and the IP address.
Or a way to get more familiar with tagging, or the various queries and filters on different API results. It's annoying at first, but it leads to less reliance on the console and more effective scripting. (Instead of naming my instances, I just made a script that looks up the instance I want and outputs the IP and username, and put that in an SSH config.)
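A lookup script of that kind might look roughly like this. It is a sketch, not the commenter's actual script: it assumes a configured AWS CLI, and the choice of the Name tag and the query expression are illustrative:

```shell
# Hypothetical lookup helper: print the private IP of a running instance
# matching the given Name tag, for use in an SSH config or shell alias.
find_instance() {
  aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=$1" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].PrivateIpAddress | [0]' \
    --output text
}
```

Usage would then be something like `ssh ec2-user@"$(find_instance my-app)"`, or wiring the helper into a `ProxyCommand` in the SSH config.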
cxmcc | 6 years ago

Cpoll | 6 years ago

X-Istence | 6 years ago
It doesn't solve the problem of allowing you to look at logs and then quickly SSH'ing to a single machine in the ASG.
jimsheldon | 6 years ago
We absolutely put an ELB/ALB in front of these ASGs as well. The post mentions a few use cases where unique hostnames with internal Route53 records are helpful for us.
merlincorey | 6 years ago
I was curious what they are using for the Lambda function, and it turns out it is a framework-less Python script[0].
One thing I'm not clear on yet is whether using this implies one such Lambda for every Auto Scaling Group or not.
[0] https://github.com/meltwater/terraform-aws-asg-dns-handler/b...
spier | 6 years ago
He confirmed "You only need one instance per account, we have always been fine with just one".
I will let him fill in further details here but I figured you would be interested in a short update on this.
spier | 6 years ago

fatninja | 6 years ago
What I can't understand is -
If your logs are in ELK and metrics in Prometheus/Grafana - why do you need SSH access? Sounds like that's a good problem to solve.
[1] - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-dat...
jimsheldon | 6 years ago
SSH access is a last resort, but it can be necessary in certain cases, for example if our log forwarding breaks. SSH is also just one example; it can also be helpful to curl endpoints on the host directly without hitting the ELB/ALB.
The post actually provides the user_data script we use.
VectorLock | 6 years ago

mrud | 6 years ago

spier | 6 years ago
discobean | 6 years ago
https://github.com/discobean/route53-sidecar
OJFord | 6 years ago
Our nodes are in ASGs.
jimsheldon | 6 years ago
I brought up that point since I think most developers prefer the user experience of Lambda/Kubernetes where they don't have to manage individual instances in Auto Scaling Groups. They certainly are not 'outdated' for our use cases, and especially not for those responsible for running the underlying infrastructure (when running Kubernetes nodes).
aequitas | 6 years ago

scarface74 | 6 years ago
Since Serilog does structured logging, I can use either an ElasticSearch or Mongo sink and do complex queries.
If I routinely need to log into an instance to troubleshoot, I need to be capturing data and sending it to a central logging system.
peterwwillis | 6 years ago

jedberg | 6 years ago