top | item 16155534

pixie_ | 8 years ago

For Go and/or C# lambda support, would it be worth even running the garbage collector, or just allocating a block of memory and cleaning it up when the function ends?

Side note: I think that should be an option for web servers as well for languages with managed memory. Light, isolated, non-threaded API endpoints shouldn't be interrupted by garbage collection.


jitl | 8 years ago

We do this during request execution in our Ruby services. We tell the GC to pre-allocate a few GBs of memory, and then explicitly suspend GC until the end of the request. Then we do a single GC run.

wickawic | 8 years ago

Very interesting! Have you seen measurable improvements from this approach?

sudhirj | 8 years ago

Generally an interesting idea, but Lambda containers are re-used across requests, so this might not be a good thing to do in that case. If there's no reuse, and they figure out sub-millisecond cold starts, they could do this and essentially create a disposable server for each request.

Ironically, I think that's what CGI did :D

elwesties | 8 years ago

This is an interesting question. I think it depends on the situation. You pay for memory, so if you are getting near the cap during the execution of your function, then managing the memory as you go may be beneficial; otherwise I don't see why you wouldn't just ignore it. I do have an open question about whether the memory leak would affect subsequent runs of the function after it has been "warmed up" - https://serverless.com/blog/keep-your-lambdas-warm/.

brianwawok | 8 years ago

Most lambdas would be reused many times; use-once-and-kill-the-VM is not the only plan...

eklavya | 8 years ago

I think that's arena allocation, and that's what compact regions do in Haskell.