This works, but it leaves all traffic between the CloudFront edge node and S3 unencrypted. In theory, that shouldn't be an issue, but why risk it?
A better way is to completely leave the "website" bits of S3 off, and leave that all up to CloudFront. You can create an Origin Access Identity, then grant that OAI access to read your S3 bucket (all automated in the wizard when you create a CF dist and specify an S3 origin). You then specify a default object in your CF dist, and bam, CF is using the S3 REST API over SSL to secure that CF-S3 hop.
Another important aspect of using an OAI is that you don't need to make the S3 bucket public. This matters even if the website is fully public, because of a simple governance rule: no public S3 buckets should be allowed. That rule, if monitored and enforced, would stop many data breaches. With some public buckets permitted, enforcement becomes difficult.
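For the curious, the bucket policy that wizard attaches looks roughly like this. A minimal sketch in Python (the bucket name and OAI ID below are made-up placeholders, not real identifiers):

```python
import json

# Sketch of the S3 bucket policy CloudFront's wizard generates when you
# grant an Origin Access Identity read access to the bucket.
BUCKET = "my-static-site"   # hypothetical bucket name
OAI_ID = "E2EXAMPLEOAI"     # hypothetical OAI ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(policy, indent=2))
```

With this in place the bucket stays private: only CloudFront, acting as that identity, can read objects.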
Go to https://www.netlify.com/features/#dev-tools and check out the dependencies in the image there. I bet an exec said "hey we need a cool looking screenshot of code" and the dev whipped up the most useless package.json they could think of and screen-shotted it. Well, I hope that's the case.
Looks like the majority of your bill -- $4.00 of the $4.39 -- is in hosted zones. It's $0.50/hosted zone, and you only need one for a single static site. So it looks like, with reasonable traffic, this Jekyll setup is about $0.89/mo for hosting. Not bad!
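Spelling out that arithmetic (assuming the $4.00 really is eight zones at $0.50 each):

```python
# Back-of-envelope for the bill above: $4.00 of the $4.39 total is
# Route 53 hosted zones at $0.50 each, i.e. 8 zones. A single static
# site only needs one, so drop the extra 7 to get the real hosting cost.
total = 4.39
zone_price = 0.50
zones = round(4.00 / zone_price)            # 8 hosted zones
hosting = total - (zones - 1) * zone_price  # keep one zone
print(f"${hosting:.2f}/mo")                 # $0.89/mo
```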
Highly recommend using Cloudflare instead of CloudFront.
a) it's totally free, which means once it's cached at Cloudflare there are no charges from AWS for bandwidth, and no Route 53 charges either, since Cloudflare handles the DNS too.
b) it can be used to terminate SSL in front of the S3 bucket (with or without the S3 bucket properly using SSL, depending on if you're using path-based or host-based bucket access)
c) cache invalidations are stupid fast
d) any CDN changes take effect nearly instantly, vs. "however long" CloudFront takes
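To unpack (b): S3's shared certificate is a wildcard for `*.s3.amazonaws.com`, and a wildcard matches only a single DNS label. So a bucket named after a custom domain (dots in the name) fails TLS validation in virtual-hosted style, while path-style requests keep the hostname cert-friendly. A rough sketch of the distinction:

```python
def s3_urls(bucket, key):
    """The two ways of addressing the same S3 object."""
    return {
        "path_style": f"https://s3.amazonaws.com/{bucket}/{key}",
        "virtual_hosted": f"https://{bucket}.s3.amazonaws.com/{key}",
    }

def wildcard_cert_ok(bucket):
    """*.s3.amazonaws.com matches exactly one label, so any dot in the
    bucket name breaks validation in virtual-hosted style."""
    return "." not in bucket

urls = s3_urls("www.example.com", "index.html")
print(urls["virtual_hosted"], "cert ok:", wildcard_cert_ok("www.example.com"))
```

That's why terminating SSL at Cloudflare in front of the bucket works either way: the browser-facing hop uses Cloudflare's certificate, and only the Cloudflare-to-S3 hop depends on which addressing style you picked.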
GitLab Pages offers no IPv6 support. GitHub doesn't support IPv6 for custom domains officially, but you can easily work around that by adding 2a04:4e42::403 as the AAAA record.
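If you go that route, the workaround is just an AAAA record. A quick sanity check in Python (the domain below is a placeholder, and since the address isn't officially supported it may change, so verify it still resolves to GitHub Pages before relying on it):

```python
import ipaddress

# Validate the address from the comment above and format a
# zone-file style AAAA record for a hypothetical custom domain.
addr = ipaddress.ip_address("2a04:4e42::403")
record = f"www.example.com. 300 IN AAAA {addr.compressed}"
print(record)
```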
My question with this kind of setup is: what if a malicious person (or just an unexpected success on HN) sends me a gazillion requests? Do I end up with a $10k liability?
I'd rather have the site go down than go broke, so is it really a good idea?
This is why you can create budget limits in AWS. A DDoS on your site is not legitimate traffic, and AWS will provide protection against it. CloudFront is rate-limited by default too; I can't remember the actual req/s, but there is a limit. You can also restrict access to the countries where your legitimate users are.
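For reference, an AWS budget is essentially a cost threshold plus notification subscribers. A hedged sketch of the request body you'd hand to boto3's budgets client (all values are placeholders; note that budgets alert you about spend rather than hard-stopping it):

```python
# Sketch of the structures for AWS Budgets' CreateBudget call.
# Amount, threshold, and email are placeholders for illustration.
budget = {
    "BudgetName": "monthly-cap",
    "BudgetLimit": {"Amount": "10.0", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}
notification = {
    "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,            # alert at 80% of the cap
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}],
}
# In practice you'd then call something like:
# boto3.client("budgets").create_budget(
#     AccountId=account_id, Budget=budget,
#     NotificationsWithSubscribers=[notification])
```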
My favorite combination for a static website is AWS S3 for content and Cloudflare for caching and SSL termination. I think Cloudflare offers more capabilities as a CDN.
I’ve got the same setup at pfortuny.net/reflexiones plus Amazon WorkMail, and it costs me around $6/month. Very low traffic, though. Anyway, $5 of that is for the mail, so the blog is negligible.
https://aws.amazon.com/blogs/compute/implementing-default-di...
https://i.imgur.com/ji1z6oz.png
I switched from gh-pages/cloudflare to netlify, and it looks as though page crawl performance has worsened significantly...
https://imgur.com/a/kDmdE
Amazon’s pricing is easy for this simple setup.