bluetomcat|4 months ago

Good C code will try to avoid allocations as much as possible in the first place. You absolutely don't need to copy strings around when handling a request. You can read data from the socket into a fixed-size buffer, do all the processing in place, and then process the next chunk in place too. You get predictable performance and the thing will work like precise clockwork. Reading the entire thing just to copy the body of the request to another location makes no sense. Most of the "nice" javaesque XXXParser, XXXBuilder, XXXManager abstractions seen in "easier" languages make little sense in C. They obfuscate what really needs to happen in memory to solve a problem efficiently.
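
Roughly, that pattern looks like this (a minimal sketch; the blocking socket, the line-oriented protocol, and handle_line() are all illustrative assumptions, not anyone's actual server):

    #include <string.h>
    #include <unistd.h>

    /* Illustrative handler: parses and responds to one request line, in place. */
    static void handle_line(char *line, size_t len) {
        (void)line; (void)len;
        /* ... */
    }

    static void serve(int fd) {
        char buf[4096];               /* the only buffer; no heap allocation */
        size_t used = 0;

        for (;;) {
            ssize_t n = read(fd, buf + used, sizeof(buf) - used);
            if (n <= 0)
                break;                /* EOF or error */
            used += (size_t)n;

            char *start = buf;
            char *nl;
            /* Handle every complete line in place, no copies. */
            while ((nl = memchr(start, '\n', used - (size_t)(start - buf)))) {
                *nl = '\0';
                handle_line(start, (size_t)(nl - start));
                start = nl + 1;
            }
            /* Shift the partial trailing chunk to the front and keep reading. */
            used -= (size_t)(start - buf);
            memmove(buf, start, used);
            if (used == sizeof(buf))
                break;                /* a single line exceeded the buffer */
        }
    }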

lelanthran|4 months ago

> Good C code will try to avoid allocations as much as possible in the first place.

I've upvoted you, but I'm not so sure I agree.

Sure, each allocation imposes a new obligation to track that allocation, but passing around already-allocated blocks imposes its own burden: every call must ensure that the callees have the correct permissions (to modify it, reallocate it, free it, etc.).

If you're doing any sort of concurrency this can be hard to track. Sometimes it's easier to simply allocate a new block and give it to the callee; the caller can then forget all about it (the callee takes on the obligation to free it).
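
Something like this, as a sketch (the queue API is made up; the ownership convention is the point):

    #include <stdlib.h>
    #include <string.h>

    struct job {
        char  *msg;               /* owned by the job */
        size_t len;
    };

    /* Hypothetical concurrent queue, assumed to exist elsewhere. */
    void enqueue(struct job *j);

    /* Convention: on success, ownership of the copy moves to the consumer,
     * and the caller can forget all about it. */
    int submit(const char *msg, size_t len) {
        struct job *j = malloc(sizeof(*j));
        if (!j) return -1;
        j->msg = malloc(len);
        if (!j->msg) { free(j); return -1; }
        memcpy(j->msg, msg, len);
        j->len = len;
        enqueue(j);
        return 0;
    }

    /* The callee's side of the bargain: it frees what it was given. */
    void consume(struct job *j) {
        /* ... process j->msg ... */
        free(j->msg);
        free(j);
    }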

obviouslynotme|4 months ago

The most important pattern to learn in C is to allocate a giant arena upfront and reuse it over and over in a loop. Ideally, there is only one allocation and deallocation in the entire program. As with all things multi-threaded, this becomes trickier. Luckily, web servers are embarrassingly parallel, so you can just have an arena for each worker thread. Unluckily, web servers do a large amount of string processing, so you have to be careful in how you build them to prevent the memory requirements from exploding. As always, tradeoffs can and will be made depending on what you are actually doing.

Short-run programs are even easier. You just never deallocate and then exit(0).
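
A minimal bump arena, just to sketch the shape of that (alignment handling here is deliberately simplistic):

    #include <stddef.h>
    #include <stdlib.h>

    struct arena {
        char  *base;
        size_t cap, used;
    };

    int arena_init(struct arena *a, size_t cap) {
        a->base = malloc(cap);        /* the one big upfront allocation */
        a->cap  = cap;
        a->used = 0;
        return a->base ? 0 : -1;
    }

    void *arena_alloc(struct arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;   /* keep everything 16-byte aligned */
        if (a->cap - a->used < n)
            return NULL;              /* arena exhausted */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* End of request/iteration: "free" everything at once. */
    void arena_reset(struct arena *a) { a->used = 0; }

    void arena_destroy(struct arena *a) { free(a->base); }  /* the one free */

With one such arena per worker thread and an arena_reset() at the end of every request, per-request memory stays bounded and there's nothing left to track.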

1718627440|4 months ago

To reduce the amount of allocation, instead of:

    struct parsed_data *parsed = parse(...);
    struct process_data *processed = process(..., parsed);
    struct foo_data *foo = do_foo(..., processed);

you can do:

    parse (...) {
        ...
        process (...);
        ...
    }

    process (...) {
        ...
        do_foo (...);
        ...
    }
It sounds like a violation of separation of concerns at first, but it has the benefit that you can easily do parsing and processing in parallel, and all the data can become read-only. I was also impressed when I looked at a call graph of this, since it essentially becomes the documentation of the whole program.
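
A concrete (made-up) instance of this shape, where each stage calls the next instead of returning an allocated result:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void do_foo(const char *key, long value) {
        printf("%s -> %ld\n", key, value);   /* final consumer */
    }

    static void process(const char *key, const char *val) {
        do_foo(key, strtol(val, NULL, 10));  /* transform, hand straight down */
    }

    /* Parses "key=value" and pushes the pieces down the chain;
     * nothing is returned, so nothing needs to be freed. */
    static void parse(char *line) {
        char *eq = strchr(line, '=');
        if (!eq)
            return;
        *eq = '\0';
        process(line, eq + 1);
    }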

throwawaymaths|4 months ago

Is there any system where the basics of HTTP (everything up to framework handoff of structured data) are done outside of a single concurrency unit?

lock1|4 months ago

Why does "good" C have to be zero alloc? Why should "nice" javaesque abstractions make little sense in C? Why do you implicitly assume performance is "efficient problem solving"?

Not sure why many people seem fixated on the idea that using a programming language must follow a particular approach. You can do minimal-alloc Java, you can simulate OOP in C, etc.

It may be unconventional, but why do we need to restrict certain optimizations (space/time performance, "readability", conciseness, etc.) to only a particular language?

bluetomcat|4 months ago

Because in C, every allocation incurs a responsibility to track its lifetime and to know who will eventually free it. Copying and moving buffers is also prone to overflows, off-by-one errors, etc. The generic memory allocator is a smart but unpredictable beast that lives in your address space: it can mess with your CPU cache, introduce unwanted memory fragmentation, and so on.

In Java, you don't care, because the GC cleans up after you and you don't usually care about millisecond-grade performance.

cogman10|4 months ago

> Why should "nice" javaesque abstractions make little sense in C?

Very importantly, because Java is tracking the memory.

In Java, you could create an item, send it into a queue to be processed concurrently, but then also deal with that item where you created it. That creates a huge problem in C, because the question becomes "who frees that item?"

In Java, you don't care. The freeing is done automatically when nobody references the item.

In C, it's a big headache. The concurrent consumer can't free the memory because the producer might not be done with it, and the producer can't free the memory because the consumer might not have run yet. In idiomatic Java, you just have to make sure your queue is safe to use concurrently. The right thing to do in C would be to restructure things so the item isn't used after it's handed off to the queue, or to send a copy of the item into the queue, so that the question of "who frees this?" is straightforward. You can do both approaches in Java, but why would you? If the item is immutable, there's no harm in simply sharing the reference with 100 things and moving on.

In C++ and Rust, you'd likely wrap that item in some sort of atomic reference counted structure.
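
For reference, here's roughly what that wrapper looks like when hand-rolled with C11 atomics (a sketch of the idea, not production code; the producer retains before enqueueing, and both sides release when done):

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    struct item {
        atomic_int refs;
        char payload[64];
    };

    struct item *item_new(const char *data) {
        struct item *it = calloc(1, sizeof(*it));
        if (!it) return NULL;
        atomic_init(&it->refs, 1);
        strncpy(it->payload, data, sizeof(it->payload) - 1);
        return it;
    }

    struct item *item_retain(struct item *it) {
        atomic_fetch_add_explicit(&it->refs, 1, memory_order_relaxed);
        return it;
    }

    void item_release(struct item *it) {
        if (atomic_fetch_sub_explicit(&it->refs, 1, memory_order_acq_rel) == 1)
            free(it);   /* last owner frees: that answers "who frees this?" */
    }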

lelanthran|4 months ago

> Why does "good" C have to be zero alloc?

GP didn't say "zero-alloc", but "minimal alloc".

> Why should "nice" javaesque abstractions make little sense in C?

There's little to no indirection in idiomatic C compared with idiomatic Java.

Of course, in both languages you can write unidiomatically, but that is a great way to ensure that bugs get in and never get out.

estimator7292|4 months ago

Good C has minimal allocations because you, the human, are the memory allocator. It's up to your own meat brain to correctly track memory allocation and deallocation. Over the last half-century, C programmers have converged on some best practices to manage this more effectively. We statically allocate and kick allocations up the call chain as far as possible. Anything to get that bit of tracked state out of your head.
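
"Kicking allocations up the call chain" usually means the callee writes into a caller-provided buffer, so the allocation decision is made once, at the top (illustrative names):

    #include <stdio.h>

    /* The callee never allocates; whoever calls it decides whether the
     * buffer lives on the stack, in an arena, or in static storage. */
    int format_greeting(char *out, size_t outsz, const char *name) {
        int n = snprintf(out, outsz, "Hello, %s!", name);
        return (n < 0 || (size_t)n >= outsz) ? -1 : n;
    }

    void example(void) {
        char buf[64];   /* the allocation, decided at the top of the chain */
        if (format_greeting(buf, sizeof(buf), "world") >= 0)
            puts(buf);
    }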

But we use different approaches for different languages because those languages are designed for that approach. You can do OOP in C and you can do manual memory management in C#. Most people don't because it's unnecessarily difficult to use languages in a way they aren't designed for. Plus when you re-invent a wheel like "classes" you will inevitably introduce a bug you wouldn't have if you'd used a language with proper support for that construct. You can use a hammer to pull out a screw, but you'd do a much better job if you used a screwdriver instead.

Programming languages are not all created equal and are absolutely not interchangeable. A language is much, much more than the text and grammar. The entire reason we have different languages is because we needed different ways to express certain classes of problems and constructs that go way beyond textual representation.

For example, in a strictly typed OOP language like C#, classes are hideously complex under the hood: miles and miles of code to handle vtables, inheritance, polymorphism, and virtual and abstract functions and fields. To implement all of this in C would require effort far beyond what any single programmer can produce in a reasonable time. Similarly, I'm sure one could force JavaScript to use a strict typing and generics system like C#'s, but again the effort would be enormous and guaranteed to introduce many bugs.

We use different languages in different ways because they're different and work differently. You're asking why everyone twists their screwdrivers into screws instead of using the back end to pound a nail. Different tools, different uses.

riedel|4 months ago

A long time ago I was involved in building compilers. It was common to solve this problem with obstacks, which are basically stacked heaps. I wonder whether one couldn't build more things like this, where freeing is a bit more best-effort but you have some checkpoints (I guess one would rather need tree-like stacks). You just have to disallow pointers going the wrong way. Allocation remains ugly in C, and I think explicit data structures are definitely a better way of handling it.
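
For anyone who hasn't seen them: the GNU obstack API (glibc-specific) works roughly like this; freeing back to an earlier object also frees everything allocated after it, which is what gives you the checkpoints:

    #include <obstack.h>
    #include <stdlib.h>

    /* obstack requires you to supply its chunk allocator. */
    #define obstack_chunk_alloc malloc
    #define obstack_chunk_free  free

    void example(void) {
        struct obstack ob;
        obstack_init(&ob);

        char *checkpoint = obstack_alloc(&ob, 16);  /* remember this point */
        char *a = obstack_alloc(&ob, 100);
        char *b = obstack_alloc(&ob, 200);
        (void)a; (void)b;

        obstack_free(&ob, checkpoint);  /* frees checkpoint, a and b at once */
        obstack_free(&ob, NULL);        /* tears the whole obstack down */
    }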

fulafel|4 months ago

This shared memory and pointer shuffling is of course fraught with the need for correct logic everywhere to avoid memory safety bugs. Good C code doesn't get you pwned, I'd argue.

jenadine|4 months ago

> Good C code doesn't get you pwned, I'd argue.

This is not a serious argument, because you don't really define what good C code is or how easy or practical it is to write. The sentence works for every language: "Good <whatever language> code doesn't get you pwned."

But the question is whether "average" or "normal" C code gets you pwned, and the answer is yes, as the article shows.

wfn|4 months ago

Agree re: no need for heap allocation. For others: I recommend reading through the whole masscan source (https://github.com/robertdavidgraham/masscan); it's a pleasure, by the way. IIRC there are rather few malloc()s in the regular I/O processing flow (there are malloc()s that, depending on config etc., set up additional data structures, but only as part of setup).

fsckboy|4 months ago

>Good C code will try to avoid allocations as much as possible in the first place.

There's a genius to this: if you're going to optimize prematurely, do it right out of the gate!

01HNNWZ0MV43FF|4 months ago

Can you do parsing of JSON and XML without allocating?

veqq|4 months ago

Of course. You can do it in a single pass, just parsing the token stream. There are various implementations, like jsmn: https://zserge.com/jsmn/

bluetomcat|4 months ago

Yes, you can do it with minimal allocations, provided the source buffer is read-only, or is mutable but not used directly by the caller afterwards. If the buffer is mutable, any un-escaping can be done in place, because the un-escaped string is always shorter. All the substrings you want are already in the source buffer; you just need a growable array of pointer/length pairs to record where tokens start.
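
Sketched out, the bookkeeping is just this (greatly simplified; real tokenizers like jsmn also record token types and parent links, and let the caller grow the array and retry):

    #include <ctype.h>
    #include <stddef.h>

    struct token {
        const char *start;   /* a view into the source buffer; nothing copied */
        size_t      len;
    };

    /* Single pass: records string tokens (quotes included) and bare
     * primitives, skips structural characters. Returns the token count,
     * or -1 if `max` is exceeded. */
    static int tokenize(const char *s, size_t n, struct token *t, size_t max) {
        size_t count = 0, i = 0;
        while (i < n) {
            char c = s[i];
            if (c == '{' || c == '}' || c == '[' || c == ']' ||
                c == ':' || c == ',' || isspace((unsigned char)c)) {
                i++;
                continue;
            }
            if (count == max)
                return -1;
            t[count].start = s + i;
            if (c == '"') {                   /* string: scan past escapes */
                int esc = 0;
                i++;
                while (i < n && (esc || s[i] != '"')) {
                    esc = (!esc && s[i] == '\\');
                    i++;
                }
                i++;                          /* step past the closing quote */
            } else {                          /* number, true, false, null */
                while (i < n && s[i] != ',' && s[i] != '}' && s[i] != ']' &&
                       !isspace((unsigned char)s[i]))
                    i++;
            }
            t[count].len = (size_t)(s + i - t[count].start);
            count++;
        }
        return (int)count;
    }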

acidx|4 months ago

Yes! The JSON library I wrote for the Zephyr RTOS does this. Say, for instance, you have the following struct:

    struct SomeStruct {
        char *some_string;
        int some_number;
    };
You would need to declare a descriptor linking each field to how it's spelled in the JSON (e.g. the some_string member could be "some-string" in the JSON), the byte offset of the field from the beginning of the struct (using the offsetof() macro), and the type.

The parser is then able to go through the JSON, and initialize the struct directly, as if you had reflection in the language. It'll validate the types as well. All this without having to allocate a node type, perform copies, or things like that.
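
The general shape of such a descriptor, sketched outside of any library (this is not the actual Zephyr API, just the offsetof() technique it is built on; names are illustrative):

    #include <stddef.h>

    struct SomeStruct {
        char *some_string;
        int   some_number;
    };

    enum field_type { FIELD_STRING, FIELD_INT };

    struct field_descr {
        const char     *json_name;  /* spelling in the JSON document */
        size_t          offset;     /* where the field lives in the struct */
        enum field_type type;
    };

    static const struct field_descr some_struct_descr[] = {
        { "some-string", offsetof(struct SomeStruct, some_string), FIELD_STRING },
        { "some-number", offsetof(struct SomeStruct, some_number), FIELD_INT },
    };

    /* A parser scans the JSON once; on a key match it writes the decoded
     * value straight to (char *)out + descr[i].offset -- no tree, no copies. */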

This approach has its limitations, but it's pretty efficient -- and safe!

Someone wrote a nice blog post (and even made a video) about it a while back: https://blog.golioth.io/how-to-parse-json-data-in-zephyr/

The opposite is true, too -- you can use the same descriptor to serialize a struct back to JSON.

I've been maintaining it outside Zephyr for a while, although with different constraints (I'm not using it for an embedded system where memory is golden): https://github.com/lpereira/lwan/blob/master/src/samples/tec...

zzo38computer|4 months ago

It depends on what you intend to do with the parsed data, and where the input comes from and where the output is going. There are situations where allocations can be reduced or avoided, but that is not all of them. (In some cases, you do not need full parsing; e.g. to split an array, you can track whether you are inside a string and what the nesting level is, and then find the commas outside of any arrays other than the first one, to split on.) (If the input is in memory, you can also consider whether you can modify that memory during parsing, which is sometimes suitable but sometimes not.)

However, for many applications, it will be better to use a binary format (or in some cases, a different text format) rather than JSON or XML.

(For the PostScript binary format, there is no escaping, and the structure does not need to be parsed and converted ahead of time; items in an array are consecutive and fixed-size, and the data they reference (strings and other arrays) is given by an offset, so you can avoid most of the parsing. Note, however, that key/value lists in the PostScript binary format are nonstandard (even though PostScript has that type, it has no standard representation in the binary object format), and that PostScript has a better string type than JavaScript but a worse numeric type.)

megous|4 months ago

Yes, you can first validate the buffer, to know it contains valid JSON, and then work with pointers to the beginnings of individual syntactic parts of the JSON, with functions that decide what type the current element is, move to the next element, etc. Even string work (comparisons with other escaped or unescaped strings, etc.) can be done on escaped strings directly, without unescaping them into a buffer first.

Ergonomically, it's pretty much the same as parsing the JSON into some AST first, and then working on the AST. And it can be much faster than dumb parsers that use malloc for individual AST elements.

You can even do JSON path queries on top of this, without allocations.

Eg. https://xff.cz/git/megatools/tree/lib/sjson.c

gritzko|4 months ago

Yep, no problem. In-place parsing only requires a stack; the stack length is the maximum JSON nesting allowed. I have a C dialect exactly like that.
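
The stack in question can be as small as this (a sketch; the depth limit is picked arbitrarily):

    #define MAX_DEPTH 32   /* maximum JSON nesting we accept */

    /* Remember only what kind of container each open bracket started. */
    struct nesting {
        char stack[MAX_DEPTH];   /* '{' or '[' */
        int  depth;
    };

    static int push(struct nesting *ns, char open) {
        if (ns->depth == MAX_DEPTH)
            return -1;           /* input nested too deeply */
        ns->stack[ns->depth++] = open;
        return 0;
    }

    static int pop(struct nesting *ns, char close) {
        if (ns->depth == 0)
            return -1;           /* unbalanced input */
        char open = ns->stack[--ns->depth];
        /* ']' must close '[', and '}' must close '{' */
        return (close == ']' ? open == '[' : open == '{') ? 0 : -1;
    }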

Ygg2|4 months ago

Theoretically, yes. Practically, there is character escaping.

That kills any non-allocation dreams. The moment you have "Hi \uxxxx isn't the UTF nice?" you will probably have to allocate. If the source is read-only, you have to allocate. If the source is mutable, you have to spend CPU rewriting the string.
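
For the mutable case, the in-place rewrite is cheap precisely because the result only ever shrinks. A sketch handling the single-character escapes (\uXXXX decoding is omitted here, but its UTF-8 output is also always shorter than the six-byte escape):

    #include <stddef.h>

    /* Unescape a JSON string fragment in place. The output is never longer
     * than the input, so no allocation is needed. Handles single-character
     * escapes only; \uXXXX is passed through untouched for brevity.
     * Returns the new length. */
    static size_t unescape_in_place(char *s, size_t n) {
        size_t r = 0, w = 0;
        while (r < n) {
            if (s[r] == '\\' && r + 1 < n) {
                switch (s[r + 1]) {
                case 'n':  s[w++] = '\n'; r += 2; continue;
                case 't':  s[w++] = '\t'; r += 2; continue;
                case '"':  s[w++] = '"';  r += 2; continue;
                case '\\': s[w++] = '\\'; r += 2; continue;
                }
            }
            s[w++] = s[r++];   /* ordinary byte: copy down (or in place) */
        }
        return w;              /* w <= n always */
    }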

lelanthran|4 months ago

> Can you do parsing of JSON and XML without allocating?

If the source JSON/XML is in a writeable buffer, you can do it with some helper functions. I've done it for a few small-memory systems.

self_awareness|4 months ago

That mythical "Good C Code", which is known only to some people whom I've never met.

pjmlp|4 months ago

These abstractions were already common in enterprise C code decades before Java came to be, thanks to stuff like Yourdon Structured Method.

Using fixed-size buffers doesn't fix out-of-bounds errors, or the stack corruption caused by such bugs.

Naturally we all know good C programmers never make them. /s