top | item 28345644

sharikone | 4 years ago

I disagree. Such tooling would probably be annoying for pro users and beginners alike.

The reasoning involved is deeper than what an automatic tool can accomplish. It requires knowledge of the machine (disk access speed) and of the input (mostly not unicode). We can imagine inputs for which this optimization does not work, and also architectures where it will be less effective. This is the kind of reasoning experienced humans are (still) better at than machines.

Zababa | 4 years ago

> The reasoning involved is deeper than what an automatic tool can accomplish.

Not for the part where you have to use something other than lines(). That accounts for ~10% of the time spent in the code. As I said, it's not the biggest improvement, but for something that can easily be automated, 10% is great.
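For context, the lines() point is the kind of rewrite a tool could plausibly suggest: Rust's BufRead::lines() allocates a fresh String per line, while read_line() can reuse one buffer. A minimal sketch of the reused-buffer pattern (the data here is made up for illustration):

```rust
use std::io::{BufRead, BufReader, Cursor};

// Count lines while reusing one String buffer, instead of allocating a
// new String per line the way BufRead::lines() does.
fn count_lines_reused(data: &str) -> usize {
    let mut reader = BufReader::new(Cursor::new(data));
    let mut buf = String::new();
    let mut count = 0;
    while reader.read_line(&mut buf).unwrap() > 0 {
        count += 1;
        buf.clear(); // keeps the allocation, drops the contents
    }
    count
}

fn main() {
    let data = "one\ntwo\nthree\n";
    // Same result as the allocating iterator, fewer allocations.
    assert_eq!(count_lines_reused(data), data.lines().count());
    println!("{}", count_lines_reused(data));
}
```

The transformation is mechanical (swap the iterator for a loop over a reused buffer), which is why it seems like a good fit for automated tooling.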

For the part about unicode, I agree that it's harder to find a solution. Maybe the JSON library could first check whether a \u or \U appears in the object, and if so, decode it as unicode? I may be missing something here, but that doesn't seem too hard. Of course it would be a bit slower than just decoding as unicode when everything is unicode. But if the majority of JSON looks like the one in this article, this would be a sensible default.
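The proposed check could be a cheap pre-scan of the raw bytes before choosing a decode path. A hypothetical sketch (this is not any real library's API, just the idea from the comment):

```rust
// Hypothetical fast-path check: scan the raw JSON bytes for a \u (or \U)
// escape. If none is found, the strings can skip escape decoding entirely.
fn has_unicode_escape(raw: &[u8]) -> bool {
    raw.windows(2).any(|w| w == b"\\u" || w == b"\\U")
}

fn main() {
    let plain = br#"{"name":"cafe"}"#;
    let escaped = br#"{"name":"caf\u00e9"}"#;
    // Plain input could take a byte-oriented fast path;
    // escaped input would fall back to full unicode decoding.
    println!("{} {}", has_unicode_escape(plain), has_unicode_escape(escaped));
}
```

The scan is a single linear pass over the input, so for mostly-ASCII documents its cost should be small compared to the decoding it lets you skip.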

For sure, we can't replace humans. That's not the goal. The goal is to offload easy and repetitive tasks to tools, and also to encode tacit knowledge in them.