A few things have caught my attention in your post.
Your biggest problem was that your services were not sized and tuned for the hardware resources you have. As a result, your servers became unresponsive, and instead of fixing the problem you had to wait 30+ minutes until they recovered.
In your case you should have limited Solr's JVM memory size to the amount of RAM that your server can actually allocate to it (check your heap settings and possibly the PermGen space allocation).
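A minimal sketch of what that sizing might look like for Solr's bundled Jetty launcher. The numbers are illustrative assumptions for a small host, not recommendations; tune them for your index size and traffic:

```shell
# Cap Solr's JVM so heap + PermGen still fit in physical RAM alongside
# the web server and database. On a ~4 GB box, a fixed ~1 GB heap is a
# reasonable ceiling; -Xms == -Xmx avoids heap resizing under load.
java -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -jar start.jar
```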
If all services are sized properly, under no circumstance should your server become completely unresponsive, only the overloaded services would be affected. This would allow you or your System Administrator to login and fix the root-cause, instead of having to wait 30+ minutes for the server to recover or be rebooted. In the end it will allow you to react and interact with the systems.
The basic principle is that your production servers should never swap (that's why setting the vm.swappiness=0 sysctl is very important). The moment your services start swapping, performance will suffer so much that your server will not be able to handle any requests, and they will keep piling up until a total meltdown.
In your case OOM killing the java process actually saved you by allowing you to login to the server. I wouldn't consider setting the OOM reaction to "panic" a good approach - if there is a similar problem and you reboot the server, you will have no idea what caused the memory usage to grow in the first place.
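For context, the "OOM reaction = panic" behaviour being argued against here is typically enabled with two sysctls; shown only to illustrate what the setting does:

```shell
# Panic on OOM instead of letting the OOM killer pick a process...
sysctl -w vm.panic_on_oom=1
# ...and automatically reboot 10 seconds after any kernel panic.
sysctl -w kernel.panic=10
```

The parent's point is that after such a reboot the memory pressure is gone and so is your evidence of what caused it.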
You're a development shop, not scalable system builders. Deciding to build your own systems has already potentially cost you the success of this product - I doubt you'll get a second chance on HN now. If you were on appengine, you'd be popping champagne corks instead of blood vessels, and capitalising on the momentum instead of writing a sad post-mortem.
I'd recommend you put away all the Solr, Apache, Nginx and Varnish manuals you were planning to study for the next month, and check out appengine. Get Google's finest to run your platform for you, and concentrate on what you do best.
I'd say that the biggest problem is that they tried to launch their product on what appears[1] to be a 4G host, representing maybe $300-400 of hardware cost (maybe more if you buy premium hardware; I doubt Linode does).
I mean, careful configuration and capacity planning is important. But what happened to straightforward conservative hardware purchasing where you get a much bigger system than you think you need? It's not like bigger hosts are that expensive: splurge for an EC2 2XL ($30/day I think) or three for the week you launch and have a simple plan in place for bringing up more to handle bursty load.
[1] The OOM killer picked a 2.7G Java process to kill. It usually picks the biggest thing available, so I'm guessing at 4G total.
1. Reduce keepalive, even with nginx 60 is too much (unless it's an "expensive" ssl connection).
2. set vm.swappiness = 0 to make sure crippling hard drive swap doesn't start until it absolutely has to
3. Use IPTABLES xt_connlimit to make sure people aren't abusing connections, even by accident - no client should have more than 20 connections to port 80, maybe even as low as 5 if your server is under a "friendly" ddos. If you are reverse proxying to apache, connlimit is a MUST.
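A sketch of the three suggestions above. The thresholds are the parent's numbers, not universal defaults:

```shell
# 1. Shorter keepalive in nginx (directive goes in nginx.conf's http block):
#      keepalive_timeout 5;

# 2. Keep the kernel off swap until it has no other choice:
sysctl -w vm.swappiness=0
echo 'vm.swappiness = 0' >> /etc/sysctl.conf   # persist across reboots

# 3. Cap concurrent connections per client IP on port 80:
iptables -A INPUT -p tcp --syn --dport 80 \
  -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```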
> 3. Use IPTABLES xt_connlimit to make sure people aren't abusing connections, even by accident - no client should have more than 20 connections to port 80, maybe even as low as 5 if your server is under a "friendly" ddos. If you are reverse proxying to apache, connlimit is a MUST.
One must be careful when setting connection limits like this. A lot of people still sit behind shared proxy servers, and with modern browsers it's quite easy to hit 20 concurrent connections per IP address.
If anyone owns a blog or site that they suspect may appear on HackerNews (especially if you're posting it), then please take the small amount of time to put an instance of Varnish in front of the site.
Then, ensure that Varnish is actually caching every element of the page, and that you are seeing the cache being hit consistently.
You should expect over 10,000 unique visitors within 24 hours, with most coming in the 30 minutes to 2 hours after you've hit the front page on HN.
You need not do your whole site... but definitely ensure that the key landing page can take the strain.
Unless you've put something like Varnish in front of your web servers, there's a good chance your web server is going down, especially if your pages are quite dynamic and require any processing.
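One quick way to confirm the cache is actually being hit. The URL is a placeholder, and the exact headers and counter names depend on your Varnish version and VCL:

```shell
# Hit the landing page twice; with Varnish in front, a growing Age header
# on the second response is a good sign the object was served from cache.
curl -sI http://example.com/ | egrep -i 'age|x-varnish'
curl -sI http://example.com/ | egrep -i 'age|x-varnish'

# Or watch the hit/miss counters directly on the Varnish box:
varnishstat -1 | egrep 'cache_hit|cache_miss'
```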
A few weeks ago I got on the front page and within a 24 hour period was hit with 29,000 unique visitors with 38,000 page views. The page itself is image heavy with 1.3 MB on first load. I'm running Wordpress with the Quick Cache plugin by PriMoThemes. I'm hosted on a shared 1and1 server.
I've been hit before and went down, that's when I installed the Quick Cache plugin. Also 1and1 moved me to another server at some point but I'm not sure if it was before or after. Either the cache plugin is really good or I'm on a rockin server all by my lonesome. Or both.
If you're self-hosting a WordPress site, grab the Hyper Cache plugin or the very, very simple Quick Cache plugin by PriMoThemes.
http://www.tutorial9.net/tutorials/web-tutorials/wordpress-c...
I'll be very happy to help you and HN'ers set Varnish up on their server (not looking for compensation for this) and get you through HN traffic on your launch day.
Plug: we've built several products around Varnish, so we have a good handle on how and where Varnish can be leveraged. Here's a list of Varnish things we've built at unixy:
Varnish load balancer: http://www.unixy.net/advanced-hosting/varnish-load-balancer
Varnish for cPanel and DirectAdmin: http://www.unixy.net/varnish
Varnish w/ Nginx for cPanel: http://www.unixy.net/advanced-hosting/varnish-nginx-cpanel
Email in profile. I'll be more than happy to help out.
Joe
Cucumbertown co-founder here. Nginx was serving the cache, and our sense was that it was caching. But the day before launch we added CSRF validation to the login form, and that ended up bypassing the cache.
So in theory we were positioned to serve from Nginx cache.
The blog mentions that they did have caching on with nginx (which is what Varnish does, isn't it?). The problems were because of nginx not caching the frontpage (configuration issue) and because there was an unexpected hit on solr.
I'd argue the opposite of your headline, that this was a very successful launch. Since HN isn't your target audience having your site fail from the traffic was far better than having it fail from a launch in your market. You shook out some important bugs before you lost real users. Plus you got to do this followup which will bring even more traffic.
First off, best of luck with your project. Secondly, kudos on writing the post-mortem, as I know it takes some guts to own a "failure".
I think, however, that the need to write something like this speaks to an incorrect assumption: that you need a "launch". Of course, TC and HN can give you a nice bump in traffic and even signups. In the long run, though, this really doesn't accomplish much for you. It gives you the kind of traffic that will likely leave and move on to the next article, skewing your metrics. There are certainly qualified prospects in there, but they're hard to pick out from all the noise.
Again, the concept of a "launch" speaks to poor business models. It really benefits businesses where the word "traction" is more important than "revenue". Build a business that provides a service that others will pay for and grow as fast as the business can bear, bringing in those visitors that are truly valuable to you.
When we did the beta, I posted the launch details on HN, and you'd be surprised by the amount of constructive feedback and users that we got. Cucumbertown now has users from devs to CEOs who came in through HN and are now engaged users.
Cucumbertown has some notions like 'forking recipes' – called "Write a variation" – which lets you take a recipe, fork it, and make changes. Additionally, Cucumbertown has a shorthand notation for writing recipes (think stenography for recipes) for advanced users. Things like these appeal to the HN crowd a lot.
Also, don't you think quite a few hackers like me are also cooks?
I know as well as anyone the relative futility of relying on HN, Reddit, or TC coverage for building a successful tech product. Feedback and traffic from social news is merely a blip that says next to nothing one way or another about your long-term prospects.
However, if your site goes down for any reason a postmortem of this sort is definitely warranted. The word "launch" is not signifying much more than a point in time in this case, and I think you're jumping to a lot of conclusions about what hopes they were pinning on this event.
Right on. I second these sentiments. First, keep up the good work and best of luck moving forward. Very good that you're also reflecting on your successes and failures - always be learning.
The most challenging piece of a new business is, well, new business. It's about growing your value proposition organically, one customer at a time, and refining the business. Analyzing a bump in media attention won't really help you with that part of the search.
Once you've nailed down the search, and you're simply focused on getting more publicity as you scale, then perhaps that sort of analysis will be of more use. But, I doubt it.
Thanks for this post, there were some nice tips in there. I do have a nitpick about your writing style, though. Maybe it's just me, but I found that your use of "+ve" instead of just saying "positive", and of "&" instead of "and", did not have the intended effect of speeding up reading -- quite the reverse, actually.
Seconded. Initially, my brain told me that +ve was the name of the site, so I was confused when I looked for a product page link but only saw "Cucumbertown".
Granted, that's mostly laziness -- apparently I've got a rule that matches "strange words near the top of the post" to "probably the name of the product".
Call this a hacker's laziness plus Yahoo chat room era slang.
Running with swap enabled is a terrible idea. The authors mention how it was only once solr crashed that they were able to actually log in and start fixing problems; having swap means that rather than the OOM killer terminating processes, instead your whole system just grinds to a halt.
(it's strange that they recommend enabling swap when they also recommend enabling reboot-on-oom, which is pretty much the complete opposite philosophy)
I think the OP's post treats swap along conventional lines, i.e. swap is good. That's true for applications running on a client machine, where you don't want an app, say Eclipse, to crash for lack of memory. For some years that conventional wisdom carried over to the server side as well.
But the modern wisdom is that, in general(+), it may be better to have no swap at all on your server, and instead to address things by running parallel instances and load balancing.
Swap space may also run out eventually if some service is leaking memory, and until it does it will make the system slow for everybody. It's better to let the culprit processes simply die, and make things easier for everybody else.
On my server my Jetty instances keep dying when they run out of memory. Thankfully there are a lot of other instances there to process requests.
It's a trade-off you make: dropping some requests that are currently hitting the errant service (a Jetty instance in my case) versus slowing things down for everybody, to the point that even the developers can't help until something finally exhausts the swap space too and dies (like the Solr case you and the OP mention).
+ I say in general because there definitely could be reasons when you need swap.
In my experience, the linux kernel handles no swap at all very badly, so you need a small amount.
Increasing the swap, which is the suggested solution, is however, a terrible idea. As soon as you hit high memory usage, your IO load will go through the roof, and everything will grind to a halt.
The solution here is separation of services - i.e. put Solr on a different box, so that if it spirals it doesn't take out other services.
The OOM killer is your friend for recovering from horrible conditions, but as soon as you hit it or swap, something's gone wrong.
2. Isn't swapping bad? I don't think I've ever had a situation in which more than, say, 100MB of swap was helpful. Once the machine starts swapping, a bigger swap just prolongs the agony.
3. If you couldn't ssh, why didn't you just reboot the machine?
Edit:
1. What did you use for the graphs?
2. What is the stack?
The stack is Python, Django, PostgreSQL, Redis, Memcache, etc.
Running a load test doesn't take more than an hour, and you quickly learn what your upper limits are and where the bottlenecks are. I use Gatling in favor of JMeter: https://github.com/excilys/gatling
I find it very difficult to believe that this person worked on any sort of performance team, given that what they discovered is pretty much "Handling Load 101".
Running everything on one box? Using swap? No caching? It's like a laundry list of junior admin mistakes.
This post mortem has me thinking about the best way to handle the situation in which you can't SSH into your server. The OP decided to trigger a kernel panic/restart on OOM errors, but I have a couple of concerns about this approach:
* If memory serves correctly, if your system runs out of memory, shouldn't the scheduler kill processes that are using too much memory? If this is the case, the system should recover from the OOM error and no restart should be needed.
* OOM errors aren't the only way to get a system into a state where you cannot SSH into a system. It would be great to have a more general solution.
* Even if you do restart, unless you had some kind of performance monitoring enabled, the system is no longer in the high-memory state so it will take a bit of digging to determine the root cause. If OOM errors are logged to syslog or something, I guess this isn't a big deal.
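On the logging point: the kernel does record every OOM kill, so the evidence survives a reboot. Something like the following usually finds it (the log path varies by distro):

```shell
# Debian/Ubuntu log to /var/log/syslog; RHEL/CentOS to /var/log/messages.
grep -i 'out of memory' /var/log/syslog

# The kernel ring buffer also keeps the kill record until it rotates out.
dmesg | grep -i 'killed process'
```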
I suppose the best fail-safe solution is to ensure you always have one of the following:
* physical access to the system
* a way to access the console indirectly (something like VSphere comes to mind)
* Services like linode allow you to restart your system remotely, which would have been useful in this scenario
* In linux-land, there's an OOM killer (http://linux-mm.org/OOM_Killer) that would have started taking processes out. You have to exhaust swap for it to really take effect, and once you hit swap, your entire machine suddenly becomes hugely IO bound - in shared or virtual hosting environments, this usually makes the machine totally unresponsive.
* I've never seen any sort of virtual hosting service without either a remote console or a remote reboot. Usually both.
This is a great way to make lemonade out of the lemon of getting hosed by a lot of traffic. Write an informative post-mortem and resubmit! I know I missed the original submission and clicked through to the site, and there you have it. I'd say being humble and trying again is never a bad idea.
Whatever the real cause of your issues, Linode's default small swap space is a plague. A system starts to misbehave much more gently if there is enough swap.
For a production server I think the opposite is better _general_ advice: reduce the available swap, because if your server gets to the point of needing it, performance will suffer so much that the server becomes completely unresponsive. Having less swap lets the OOM killer kill the runaway process, so you can log in and fix the problem instead of rebooting the server or waiting in vain for it to recover by itself.
There was a cookie set for CSRF protection and the headers specify that content should not be cached if there is a cookie (or more precisely - the cached content includes the cookie as a cache key, so each request with a different cookie gets a cache-miss).
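A quick way to observe this behaviour from the outside is to repeat a request with two different cookie values. The URL, cookie name, and X-Cache header here are placeholders for whatever your setup actually emits:

```shell
# If the cookie is part of the cache key (or triggers a bypass), each
# distinct cookie value produces a cache miss even for identical pages.
curl -sI http://example.com/ -H 'Cookie: csrftoken=aaa' | grep -i 'x-cache'
curl -sI http://example.com/ -H 'Cookie: csrftoken=bbb' | grep -i 'x-cache'
```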
[ not that you asked for it here, but I've got some frontpage UI feedback: ]
I think you should put a description up front that explains what Cucumbertown is. I think the main image should be a slider with multiple feature images, and the Latest Recipes should be the first section after that. Just my 2c!
Screen: http://cl.ly/image/3R2Y131Z433L
I really like your website, particularly the simplicity of the presentation to the user.
I have been having some of the same issues on a site I run ( http://www.opentestsearch.com/ ). Under heavy load solr will grind to a halt if you don't have enough ram available.
Putting a dedicated Varnish server in front of the search servers helped a lot. Using a cdn may also be a viable option, but haven't tried it myself.
Well, this is why I recommend running Solr on a standalone instance. It (like Java/Jetty/Tomcat/etc.) is very memory-hungry in general, so it's worth your while and money to spend a bit more and spin up a separate instance, or whatever type of service you are using, to run Solr. It'll also run faster.
One last thing you can do, if none of that is possible, is to use a better VM like JRockit (http://www.oracle.com/technetwork/middleware/jrockit/overvie...). JRockit with the right GC is, in my experience, much better at running in low-memory situations.
That's why I like to use Heroku/EC2 for launching a new web service. If shit hits the fan, you can jack up the processing power/database/RAM/whatever to scale to demand. Once you have a good idea of the traffic it generates, you can move it to a cheaper service.
Obviously, it's easy to say that when you're on the bench. Congratulations on the launch by the way.
Cucumbertown co-founder here. Actually I dislike this idea though we should have been better prepared.
At my previous firm we had a culture of spinning up new instances whenever traffic peaked. Tools like RightScale & Chef make it ridiculously simple, so our style was to do that rather than optimize strained code paths. Because it is just so convenient.
And before you know it, this notion that hardware is cheap becomes a culture. Soon enough, if you grow, you'll be serving 100K users with 250 machines.
I think throwing more resources at the problem is a quick and dirty solution for when things go downhill quickly (like what happened here), and having that option is incredibly nice. Still, it should take second priority to proper configuration tuning in the long term.
Also, in some instances of runaway memory, there will always be a point where all the memory in the world isn't enough.
Once memory goes to swap you've already lost. Personally I rarely configure swap on servers, save for the DB. I would reconfigure your services not to grow past physical free memory. After that, you'll have to scale servers horizontally.