
Rainymood | 4 years ago

Yesterday, I wrote a short blog post titled "Don't mistake the internet's intelligence for your own" [0] which is very closely related to this blog post here. I wrote it to shed some light on the fact that knowledge on the internet (Google, SO) is not *your* knowledge. This fact often gets painfully exposed during interviews. Funny timing.

[0] https://www.janmeppe.com/blog/dont-confuse-intelligence/


dekhn | 4 years ago

I consider the internet an external brain I reference (mainly through Google search). I've done this for decades. At some point, systems like Linux got so complicated and changed so rapidly that it didn't make any sense to store information in my head if it was already stored and easily fetched externally.

Much of my ability to use internet information comes from being able to rapidly prune answers I can tell are incorrect ("do this to systemd to fix that problem" when systemd isn't related to the problem, because on OS whatever, they disabled that feature) while discovering the one random answer that has the deep, non-obvious fix.

Now, in terms of interviews. I'm not looking for people to spit out information from memory in an interview, beyond some fairly basic reference information. I'm looking to see if they can apply the skills they have to solve useful problems better than other candidates.

I failed my first interviews at Google because I didn't know quicksort (the details of the implementation) off the top of my head; anybody who had memorized the code could have answered the question instantly. After later joining and spending 12 years as a software engineer, I can assure you: knowing how quicksort is implemented isn't helpful for the vast majority of SWE roles at Google. Knowing that quicksort can be faster than heapsort or bubblesort, but sometimes slower, is useful; knowing how to select the right library implementation and benchmark it with a good set of data is far more important.
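The "benchmark with a good set of data" point can be sketched in a few lines. This is an illustrative example, not anything from the comment itself: it compares Python's built-in sort against a heap-based sort on two data shapes, since which one wins depends on the data (e.g. Timsort is very fast on already-ordered input).

```python
# Hedged sketch: compare Python's built-in sort (Timsort) with a
# heap-based sort on different data shapes. All names and dataset
# choices here are illustrative.
import heapq
import random
import timeit

def heap_sort(xs):
    """Sort by heapifying and popping (O(n log n)); for comparison only."""
    heap = list(xs)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

random.seed(0)
datasets = {
    "random": [random.random() for _ in range(10_000)],
    "presorted": sorted(random.random() for _ in range(10_000)),
}

for name, data in datasets.items():
    t_builtin = timeit.timeit(lambda: sorted(data), number=20)
    t_heap = timeit.timeit(lambda: heap_sort(data), number=20)
    print(f"{name}: builtin={t_builtin:.3f}s heap={t_heap:.3f}s")
```

The takeaway matches the comment: run the candidates on data that looks like yours before picking one, rather than reciting an implementation from memory.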

wruza | 4 years ago

It makes me think of how much complexity we agree to live and work with while being actually unable to manage it without searching the internet. It's really funny that interviewers try to expose that, because regular developers (think fullstack configuration vs. software algorithms) do just that: convert the internet into source code of mediocre value but high complexity, and then implement the crucial business logic on top of it. Which cannot be learned beforehand, unless you're reimplementing something for legal reasons.

Is that really a bad thing?

marginalia_nu | 4 years ago

I think most of this complexity is a failure of scope limitation. No exaggeration: I wouldn't be surprised if the average piece of software today is 1000 times bigger and slower than it needs to be.

You need to spell check a text? In the 1980s, you could do that relatively quickly on a computer with no network access. The dictionary data was pretty big back then but today it will fit in an L3 cache.

Today, each checked word likely gets converted to JSON and is then sent around the world through load balancers and proxies to some cloud instance where it is deserialized and checked, and a new JSON message is encoded saying "{\"message\": \"yup, it's a word\"}", which gets passed back through a bunch of servers until it ends up in a mobile app that is really a souped-up HTML widget. Spell checking isn't by any means trivial programming, but it doesn't require planet-scale computing either. There is no benefit to that. It's just a waste of resources.
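The core of the local approach the comment describes is just a set-membership test against a dictionary loaded once into memory. A minimal sketch, with a tiny stand-in word list (a real checker would load a full dictionary file and handle stemming, case, and suggestions):

```python
# Hedged sketch of an in-process spell check: no JSON, no network,
# just a hash-set lookup. WORDS is a tiny illustrative stand-in for
# a real dictionary file; a full English word list is only a few MB.
WORDS = {"yup", "it", "is", "a", "word", "spell", "check"}

def is_word(token: str) -> bool:
    """Check a single token against the in-memory dictionary."""
    return token.lower() in WORDS

print(is_word("Yup"))   # True  - membership test, no round trip
print(is_word("wrod"))  # False - flagged as a misspelling
```

Each lookup is an O(1) hash probe on data that, as the comment notes, comfortably fits in cache on modern hardware.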

So why do we build things that way? Because of Parkinson's law. If we had put a constraint on resources, we would have found a faster and better way of doing it. This is in part why I advocate targeting low power hardware when designing a new system. If it's fast on a Raspberry PI, then it will be fast on any hardware. You will identify scaling problems early while the code is still malleable and easy to redesign.