top | item 47088297

nasretdinov | 9 days ago

Let's say you're opening files upon each loop iteration. If you're not careful you'll run out of open file descriptors before the loop finishes.

mort96 | 9 days ago

It doesn't just have to be files, FWIW. I once worked on a Go project which used SDL through cgo for drawing. "Widgets" were basically functions which would allocate an SDL surface, draw to it using Cairo, and return it to Go code. That SDL surface would be wrapped in a Go wrapper with a Destroy method which would call SDL_DestroySurface.

And to draw a surface to the screen, you need to create an SDL texture from it. If that's all you want to do, you can then destroy the SDL surface.

So you could imagine code like this:

    strings := []string{"Lorem", "ipsum", "dolor", "sit", "amet"}
    
    stringTextures := []SDLTexture{}
    for _, s := range strings {
        surface := RenderTextToSurface(s)
        // These defers accumulate: they only run when the enclosing
        // function returns, not at the end of each iteration.
        defer surface.Destroy()
        stringTextures = append(stringTextures, surface.CreateTexture())
    }
Oops: every surface now stays alive until the whole function returns, so you're using way more memory than you need!

win311fwg | 9 days ago

Why would you allocate and destroy memory on each iteration when you can reuse it to much greater effect? That's bad API design, and a language isn't there to paper over bad design decisions. A good language makes bad design decisions painful.

9rx | 9 days ago

Files are IO, which means a lot of waiting. For what reason wouldn't you want to open them concurrently?

mort96 | 9 days ago

Opening a file is fairly fast (at least if you're on Linux; Windows not so much). Synchronous code is simpler than concurrent code. If processing files sequentially is fast enough, for what reason would you want to open them concurrently?

nasretdinov | 9 days ago

For concurrent processing you'd probably split the file names into several batches and process each batch sequentially in its own goroutine, so it's very possible you'd have the exact same loop in the concurrent scenario.

P.S. If you have enough files, you don't want to try to open them all at once: Go will keep spawning threads to service the blocked syscalls (open(2) in this case), and you can hit the runtime's default limit of 10,000 threads too.