cwb's comments

cwb | 9 years ago | on: A founder's perspective on 4 years with Haskell

When we ran it, the workload was mostly during weekdays in different timezones (so distributed over more than 8 hours). This was on a single c4.2xlarge AWS server, which was bigger than we needed. To be clear, I wasn't trying to make a claim about high load with that statement, just that the system has been running smoothly with steady activity for a long time.

Each week the number of individual users was in the thousands; that was usually a rolling window since people tend to not do more than a few e-learning courses per year. I'm certainly not claiming that there are no bugs -- in all likelihood there are -- only that no bugs have yet crashed the system or were obvious and serious enough for users to tell us about.

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

You might not find them convincing, but the reasons I believe that is the case are:

- Human hardware is fairly fixed (unless we go the cyborg route) whereas robot hardware (at least the computation part) evolves roughly exponentially, and I don't see reasons for that to stop.

- As robot behaviour evolves (whether through deliberate design, genetic algorithms, or other types of learning) improvements can be replicated quickly and approximately for free. Improvements to human behaviours are notoriously hard, expensive, and time-consuming to replicate.

- We can rewrite many of our wealth creation recipes to make use of more specialised robots instead of flexible humans, which means robots won't need to get close to general AI before this has significant effects on jobs.

- We are starting to see robots perform the most sophisticated human skills: visual recognition, acting on and producing language, and decision making under uncertainty. Granted, robots don't do most of these things very well yet compared with humans, but I don't see fundamental reasons for why the development will stop short of human abilities.

- Robots can work 24/7, won't go on vacation, won't quit on you, don't play political games with the other robots, won't sue you, don't require food and bathrooms, and they'll make fewer mistakes.

- If you're mostly questioning the timing, I don't have a particularly good answer, but given how I understand the state of things I believe we're talking low single-digit decades rather than centuries for a significant proportion of people to look around and not find a job they could do better than a robot for a liveable wage (without government subsidies). If you disagree on the timescale I think we'd need to have a detailed discussion about how we understand technological developments and the jobs people do. You may well be able to convince me that I'm off on the timing.

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

Perhaps; I might be missing something. The way I understand it is that "the Luddite fallacy is not a fallacy" is an assertion (OED: "a confident and forceful statement of fact or belief"). The reason, I claim, is that humans will not be able to compete with robots for much longer (in large numbers), which means unemployment is likely to go up (I understand that that's not a strict implication, since governments could ban robots). The reason humans won't be able to compete with robots is that technology is gaining more and more of the abilities that humans use in their jobs (like reasoning and visual recognition). Those reasons consist of (a set of) assertions that could be wrong, but they are reasons, and an argument is (OED again) "a reason or set of reasons given in support of an idea, action or theory". Thus, I thought that what I did qualified as an argument, or am I mistaken?

In any case, if you agree with the assertion, what would be your argument for it?

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

Sorry I couldn't make it more clear. Was there anything particular that you didn't understand?

Agree on the basic income (in general, I'm not sure about the details). Jobs so far have been a convenient and pragmatic (not necessarily fair) way to both create and distribute wealth. At the same time, we should note that popular alternatives like communism or socialism have failed rather badly.

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

No, not for it having significant economic and social implications; that's more a corollary of the employment rate decreasing -- I'll change that sentence. The argument for the Luddite fallacy not being a fallacy was that humans have so far been able to compete with technology, but that we're fast losing that edge, and when that happens things really are different this time. Does that make sense?

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

You're right, the micro-level version isn't a fallacy (either..), though I think that's less contentious, no? Many countries already have some experience with this. For example, US agriculture has gone from employing 70-80% of workers in 1870 to less than 2% in 2008 (https://en.wikipedia.org/wiki/Agriculture_in_the_United_Stat...). Some countries now have welfare systems that introduce some security on an individual level, though in their current form they tend to rely on the average employment rate remaining fairly high.

cwb | 10 years ago | on: Not a Luddite fallacy (2011)

Indeed, that is the question. The post tries to show that "cut your hair and get a job" will disappear as an answer to "how do I get to partake in this wealth creation" for a large number of people. We'll thus probably want to come up with an answer that scales better than, for example, today's social security, and that works for all those who live in countries without social security.

cwb | 10 years ago | on: Developers who can build things from scratch

I asked myself the same question. What the best choice is depends on stated goals, assumptions, and predictions, which means others might differ in their assessment of the right tool for the job. In this case, why was Haskell not the best choice for the project, and do you think those that tried to introduce it would have agreed?

cwb | 14 years ago | on: It’s Not China; It’s Efficiency That Is Killing Our Jobs

The author makes the implicit and false assumption that humans will always be able to compete with machines in something that other people are willing to pay for. That has been, and probably still is, the case. I see no reason to think that will continue indefinitely. Humans have three key properties that have kept us competitive with machines:

- "Intelligent" observation-based decision making

- Flexible manipulation of objects

- Teachability

Technology still has some way to go to match our capabilities here, but it's getting there. The dynamics are roughly that humans improve or change linearly through education, but technology can improve roughly exponentially.

cwb | 15 years ago | on: How to do deliberate practice

He does say "passage" to refer to a small part of a composition, not the whole thing. Your thinking is spot on though -- the length of the segment needs to be considered.

cwb | 15 years ago | on: How to do deliberate practice

The teacher would need to try and make the students understand that it is all right -- scrap that, necessary -- to fail when you learn. Learning to perform surgery is not the same thing as performing it.

Out of curiosity, how do you think it would affect your motivation to practice in private if you knew you would be tested in public?

cwb | 15 years ago | on: Yes, The Khan Academy is the Future of Education

Indeed. Video lectures can be helpful -- in particular for mechanics and manipulation -- but no reason to be overly excited (there was great hope for educational use of TV in its early days, unless I'm mistaken). A good book is usually more effective/efficient if you have patience to actually study it.

What few people seem to get is that we don't need to fix education, we need to fix learning. And for that, we need exercises (as any mathematician would tell you; also see deliberate practice). It turns out digital exercises afford a range of interesting opportunities (both the video and article highlight several) for making learning more effective.

Interfaces can change relatively easily so there'll be a bunch of experiments. The exercise model is harder. And more interesting. (I've been trying to figure that out for a while now and discovered a bunch of local minima.) I'm very curious to see how this exercise model works out -- it seems promising from what I can tell.

Regardless of how this works out (not to say I think it won't), this development will raise the bar and the expectations -- both of which have been too low for too long. That is incredibly valuable.

cwb | 15 years ago | on: Let's stop pretending that hard work conquers all - Psychology - Salon.com

Orszag is referring to deliberate practice (http://projects.ict.usc.edu/itw/gel/EricssonDeliberatePracti...) and it's not hard work, but "hard" practice that makes people better. What that practice should entail is not always obvious though and people will benefit from good tutors and starting early in life. Thus, the author is right that not everyone could become Mozart (or whatever), but that's because they weren't in the right place at the right time, not because of genetics (which I read him as implying, but I may be wrong). Genetics do of course matter (in particular in sports), but the research on deliberate practice seems to indicate they matter less than one might think.

Importantly, for things that are less competitive than world-class sports and music (most jobs say), even moderate amounts of deliberate practice are likely to have significant benefits. So, hard work may not conquer all, but deliberate practice will give things a run for their money at least.

cwb | 15 years ago | on: Bill Gates, Hero

Simply because I think they had the opportunity to create a great OS (whatever its relation to Unix) and chose not to. My point was that I think DOS/Windows hindered the innovation in applications -- not necessarily their complexity or size -- but their utility. I have the option to use big, feature-rich Windows applications, but mostly prefer smaller command-line based tools. I'm clearly in a minority here though, so maybe I am crazy.. :)

cwb | 15 years ago | on: Bill Gates, Hero

How come no one talks about the loss of innovation Microsoft has inflicted on the world? What if Windows were NeXT or Unix? What if Bill Gates had tried really hard to create the best tools possible, rather than the most profitable software? What if the world had run on Unix in the two decades since Tim Berners-Lee used a NeXT machine to create the web? Might we have had a 1% increase in annual productivity (mainly from better innovation, not only direct efficiency improvements)? We'll never know, of course; but you don't have to be crazy to think so.