
deckard1 | 1 year ago

I can understand why people wanted that, and the benefit of doing that.

With that said, I also see benefit in having limitations. There is a certain comfort in knowing what a tool can do and cannot do. A hammer cannot become a screwdriver. And that's fine because you can then decide to use a screwdriver. You're capable of selection.

Take PostgreSQL. How many devs today know when it's the right solution? When should they use Redis instead? Or a queue solution? Cloud services add even more confusion. What are the limitations and weaknesses of AWS RDS? Or any AWS service? Ask your typical dev this today and they will give you a blank stare. It's really hard to even know what the right tool is today, when everything is abstracted away and put into fee tiers, ingress/egress charges, etc. etc.

tl;dr: limitations and knowledge of those limitations are an important part of being able to select the right tool for the job


kstrauser | 1 year ago

I see zero benefit in having artificial functionality limitations. In my hypothetical example, imagine that `sed 's/foo/bar/'` works but `sed 's/foo/bark/'` does not because it's 1 character too long. There's not a plausible scenario where that helps me. You wouldn't want to expand sed to add a fullscreen text editor because that's outside its scope. Within its scope, limitations only prevent you from using it where you need it. It would be more like a hammer that cannot be made to hammer 3 inch nails because it has a hard limit of 2.5 inches.

Those are the kinds of limits GNU wanted to remove. Why use a fixed-length buffer when you can malloc() at runtime? It doesn't mean that `ls` should send email.

kryptiskt | 1 year ago

There's a major benefit: you can test that a program with an artificial limit works up to the limit, and fails in a well-defined manner above the limit. A program without any hardcoded limit will also fail at some point, but you don't know where and how.

mywittyname | 1 year ago

Imagine using a program that can only allocate 4 GB of RAM because it has a 32-bit address space. There's no benefit to that limitation; it's an arbitrary limit imposed by trade-offs made in the 80s. It just means that someone will need to build another layer into their program to chunk the input data and then recombine the output. It's a needless waste of resources.

The benefit of not having a limitation is that the real limits scale with compute power. If you need more than 4 GB of memory to process something, add more memory to the computer.

swatcoder | 1 year ago

> Imagine using a program that can only allocate 4GB of ram because it has 32-bit address space. There's no benefit to that limitation

You're looking at isolated parts of a system. In a system, an artificial "limit" in one component becomes a known constraint that other components can leverage as part of their own engineering.

In the example of memory addresses, it might be "artificial" to say that a normal application can only use 32-bit or 48-bit addresses when the hardware running the application operates in 64-bits, but this explicit constraint might enable (say) a runtime or operating system to do clever things with those extra bits -- security, validation, auditing, optimization, etc.

And in many cases, the benefits of being able to engineer a system out of constrained components are far more common and far more constructive than the odd occasion when a use case is entirely blocked by a constraint.

That's not to say that we should blindly accept and perpetuate every constraint ever introduced, or introduce new ones without thoughtful consideration, but it's wrong to believe they have "no benefit" just because they seem "artificial" or "arbitrary".

Joker_vD | 1 year ago

> A hammer cannot become a screwdriver.

Don't tell me you've never hammered a screw into a wooden plank. And vice versa: a screwdriver can also be used as a hammer, albeit a rather pathetic one.