jgg's comments

jgg | 11 years ago | on: Is College Worth It? It Depends on Whether You Graduate

100 years ago, learning Greek and/or Latin was a large part of all there was to study. Now we have multitudes of fields and subfields regularly generating actual demonstrable progress in the capabilities of humanity, few of which require knowing anything about Themistocles.

Learning Greek and/or Latin was done as a) a mental exercise and b) a way to access an ancient body of knowledge that was basically considered something any educated person should know. Your statement that there was nothing else to learn is highly ignorant.

At some point, your program decided to compromise on the things that weren't necessary to reach the advanced levels, so that they had room to get people there at all during their undergrad.

Yeah, to reach advanced levels where the bulk of their graduates don't have to understand the English language well, and the CS grads don't know what a pointer is or how memory works.

I'm going to assume for my own mental health that you're a troll.

jgg | 11 years ago | on: Is College Worth It? It Depends on Whether You Graduate

Spot on.

I also feel like even if you were being charged the tuition of 30+ years ago, you wouldn't be getting nearly as much for your money. I looked through the introductory Russian textbook for my state university and was kind of blown away by how verbose and obnoxious it was. This same school cut operating systems and anything related to low-level programming (people complain that they just want to learn Java or .NET so they can get a job, so I guess they got their way), and rearranged basic English to make it easier to pass. 100 years ago, learning Greek and/or Latin was standard - it seems like there's a noticeable trend towards "dumbing down", or maybe I'm viewing a time period I didn't live in with rose-colored glasses...

jgg | 11 years ago | on: Responding to the Explosion of Student Interest in Computer Science [pdf]

Programming and graphic design perhaps (personally, I've yet to meet someone who programmed as a child who was better than a good programmer who learned later), but I really doubt there is much of an advantage to teaching your child Depth-First Search.

I don't really think CS is like learning a foreign language at all. The advantage for languages supposedly comes from the fact that we are hard-wired to acquire language at a specific age range ("Critical period hypothesis"). The biggest advantage is for infants, which steadily tapers off until puberty. I doubt we're hard-wired to acquire CS the same way - programming or theory.

jgg | 11 years ago | on: OCaml 4.02: Everything else

Whoa, that's cool. I had no idea people were still working on it at that level. The last thing I saw was a version of OPAM that didn't seem to work that well...I'll have to go look around.

jgg | 11 years ago | on: OCaml 4.02: Everything else

It's really fast, has minimal boilerplate and supports functional programming without any of the "orthodoxy" of Haskell. That is, you can write a recursive function but also write a for loop.

Imagine something that can compete with C and C++, but doesn't require all of the low-level reinvention and memory management. It's like a high-level language for smart people that doesn't feel entirely impractical. It has some things that people bitch about (like having to use +. to add two floats and + to add two ints), which don't really bother me that much.

If it were more popular and had better libraries/platform support (unless that's changed drastically in the past year or so), it would be a serious contender for general development. Being completely honest, I think Jane Street is probably the biggest organization pushing OCaml forward, and from a navel-gazing standpoint you can either view that as good or bad.

Use it because you learned it and thought it didn't suck, I guess?

EDIT: changed some phrasing

jgg | 11 years ago | on: Why Python is Slow: Looking Under the Hood

The pretentious view that Haskell's core is a mathematical structure familiar to mathematicians is wrong.

Haskell borrows concepts from category theory (a field so far abstracted from the rest of math that most mathematicians need only a handful of its concepts) to name its typeclasses, and those typeclasses don't always obey the laws of their namesakes.

Further, Haskell could exist and keep its property of referential transparency without the 'mathematical' structures or their names. Here's a version of Haskell without monads: http://donsbot.wordpress.com/2009/01/31/reviving-the-gofer-s...

Monads, functors, arrows, etc. are more aesthetic than fundamental to the core nature of Haskell. They're just a bookish (and sometimes bureaucratic, IMO) design choice. What I mean is that the language designers took a concept (referential transparency) and built an understandable structure around it, but alongside that, they made a bunch of nested, derived typeclasses of dubious value or purpose. Sometimes I look at a Haskell library and have a more academic version of the, "Why the fuck is this a class?" moment.
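To make that concrete, here's a toy sketch of my own: the same failure-chaining logic for Maybe written once against the Monad class and once as a plain function with no typeclass at all. Referential transparency survives either way.

```haskell
-- Chaining computations that may fail, written two ways.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- 1. Via the Monad instance for Maybe (the "bookish" route):
viaMonad :: Int -> Maybe Int
viaMonad n = safeDiv 100 n >>= \q -> safeDiv q 2

-- 2. The same chaining as an ordinary function: no typeclass,
--    no Monad, identical behavior.
andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
andThen Nothing  _ = Nothing
andThen (Just x) f = f x

viaPlainFn :: Int -> Maybe Int
viaPlainFn n = safeDiv 100 n `andThen` \q -> safeDiv q 2
```

The second version is exactly what the Monad instance desugars to here; the class just hangs a categorical name on it.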

I'd like to comment tangentially that Haskell is almost the OOP/Java of the 2010s - the programming community claims that it makes your code "safer", and in some sense it very much does, but its features are being perverted and overhyped while its caveats are being forgotten.

jgg | 12 years ago | on: Corecursion

I once read a textbook

I think I've read that book too.

For the benefit of anyone following along at home: the unproven-but-accepted Church-Turing thesis states that anything we would informally call an algorithm can be computed by a Turing machine (i.e., in any Turing-complete language); and since Turing machines, general recursive functions, and the lambda calculus are provably equivalent in power, any such algorithm can be expressed in each of those forms.
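A toy illustration of that equivalence (my own example, not part of the thesis itself): the same function written as an ordinary recursive definition and as a lambda-calculus-style fixed point of an anonymous step function.

```haskell
import Data.Function (fix)

-- Ordinary recursive definition:
factRec :: Integer -> Integer
factRec 0 = 1
factRec n = n * factRec (n - 1)

-- Lambda-calculus style: no named recursion, just a fixed-point
-- combinator applied to an anonymous step function.
factFix :: Integer -> Integer
factFix = fix (\self n -> if n == 0 then 1 else n * self (n - 1))
```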

And then I can apply deforestation techniques and tail-call optimization to derive the iterative algorithm from the stream algorithm.

How do you use deforestation and tail-call optimization to derive an iterative function from a stream in the general case?

You're also drawing a false comparison: you've jumped from a 'general' algorithm to "what a corecursive function is evaluated as under Haskell's semantics".

Corecursion builds a data structure. A TCO function, for example, won't produce that kind of output. The corecursive function could only be directly equivalent to the linear-time, constant-memory TCO function in a lazily-evaluated runtime (if that's true - tail-recursive functions in Haskell can actually blow up the stack due to lazy evaluation).
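Roughly the contrast I mean, as a toy example of my own: a corecursive definition produces a (here infinite) data structure that a consumer picks apart, while the tail-recursive accumulator version produces a single value and no intermediate structure.

```haskell
-- Corecursive: builds an infinite list; only lazy evaluation
-- makes consuming a finite prefix of it cheap.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Tail-recursive with accumulators: yields one value, builds
-- no structure, constant stack under strict evaluation.
fibTCO :: Int -> Integer
fibTCO n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```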

jgg | 12 years ago | on: Corecursion

so I think they went a little over your head.

I understand lazy evaluation and tail recursion fine. I interpreted your comment as presenting corecursion as the only logical alternative to naive recursive algorithms with or without memoization.

You've tacked on the part where you say the latter algorithm is equivalent (due to Haskell's evaluation) - I get that. I'm still not understanding what you mean by only knowing inefficient, naive recursion in contrast to corecursion. In practice, I have rarely seen corecursion or naive recursion used, but maybe we read different code.

In that toy example, it's easy to get between the three different forms, but in many cases the connection is far less obvious. We need principles that unite the different forms, and allow us to move between them. Co-recursion is one of those principles.

Uh, okay.

jgg | 12 years ago | on: Corecursion

Corecursion is useful for taking a recursive algorithm and transforming it to a stream-like output pattern.

But this is ridiculously inefficient.

Well...you might be surprised in the general case. Because of lazy evaluation, Haskell won't necessarily implode on non-TCO recursive functions (example: http://stackoverflow.com/questions/13042353/does-haskell-hav...), and will actually sometimes cause a stack overflow on "optimized" functions.
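A sketch of both halves of that claim (my own toy example): non-tail recursion that's fine because the recursive call sits under a lazy constructor ("guarded recursion"), next to a tail-recursive loop that is the dangerous one, because its accumulator silently grows a chain of thunks.

```haskell
-- Non-tail-recursive, but safe: the recursive call is guarded
-- by (:), so each cell is forced only on demand. Works even on
-- infinite input.
squares :: [Integer] -> [Integer]
squares []       = []
squares (x : xs) = x * x : squares xs   -- not a tail call

-- Tail-recursive, yet risky in Haskell: the accumulator is never
-- forced, so this builds a thunk (((0+1)+2)+...) that can blow
-- the stack when it's finally evaluated on a large input.
sumLazy :: [Integer] -> Integer
sumLazy = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs   -- thunk grows here
```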

Until now, the only "principled" way I knew of to transform this into something sensible was through memoization, but that's still very wasteful.

I think "real" Haskell code usually favors operations on lists over hand-written recursive functions. That said, the standard way to transform your recursive structure into something "sensible" is to use a tail-recursive function. In basically any other functional language, you'd go with that approach.

To get the same "benefit" in Haskell, you'd have to force strict evaluation inside of a tail-recursive function. This prevents a growing chain of thunks in the accumulator from blowing the stack. That said, Haskell doesn't always build up a normal stack on a regular recursive call.
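What forcing the accumulator looks like, as a sketch of my own: seq the accumulator on every step so it's evaluated before the recursive call. Data.List.foldl' packages up the same pattern.

```haskell
import Data.List (foldl')

-- Tail recursion with a forced accumulator: acc' is evaluated
-- before the recursive call, so no thunk chain builds up.
sumStrict :: [Integer] -> Integer
sumStrict = go 0
  where
    go acc []       = acc
    go acc (x : xs) = let acc' = acc + x
                      in acc' `seq` go acc' xs

-- The library shortcut for the same pattern:
sumStrict' :: [Integer] -> Integer
sumStrict' = foldl' (+) 0
```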

Otherwise, you'd just use a list structure.

(Someone correct me if I've said something stupid.)

ref:

http://www.haskell.org/haskellwiki/Tail_recursion

jgg | 12 years ago | on: Washington state sues Kickstarted game creator who failed to deliver

There's no requirement that the Kickstarter complete the core project - just that they provide any promised reward.

That makes no sense. Much of the time, the reward is directly related to the core project.

Also, in what way is it not like an experimental house design using, eg, 3d printed concrete (in some new manner)?

I'll repeat myself - I have no idea what model Kickstarter is now framing themselves under or how it applies to the legal system, but in the original model for crowdfunding in general, you were giving a voluntary donation to an idea with the explicit knowledge that you might never receive the product or any associated rewards if the project failed. You weren't paying for a t-shirt - you were giving money to someone's business/creative idea and receiving a "free" t-shirt in return.

What Kickstarter's Terms of Use, then and now, actually imply and signify in a legal sense, and whether or not the defendant can easily make the claim that risk was fundamental to the nature of Kickstarter project backing - and thus that the backers voluntarily chose to engage in a speculative transaction with no legal obligation to be fulfilled - are questions for an attorney to answer.

Do I think the guy cut and ran? Yeah. Do I think that's wrong? Yeah. But I also think that if it was made completely clear to each and every person donating that they were contributing to a project, not making a purchase (Kickstarter even states themselves that they are not a store here: https://www.kickstarter.com/blog/kickstarter-is-not-a-store), then invoking consumer-protection law borders on protecting people from their own stupidity.

jgg | 12 years ago | on: Washington state sues Kickstarted game creator who failed to deliver

Because you're funding a project that may or may not work out. The whole point of crowdfunding is supposed to be, "You give us money to develop an idea, and you might get a reward in return", not, "I'm paying you $35 for an order that includes a bumper sticker and a t-shirt with your logo on it." It's less like your carpenter example, and more like a VC firm dumping money into a startup.

It's supposed to be speculative investment. Whether or not Kickstarter and others have decided to backpedal and pretend they have some kind of legal precedent for holding project owners to their word, in order to make their own business seem more legitimate than it is, is another story.

jgg | 12 years ago | on: The Government is Silencing Twitter and Yahoo, and It Won't Tell Us Why

The problem is a cultural one, not a technical one.

What makes you think this community of people who work for the very companies that are being gagged, backdoored, surveilled and bribed are the ones who are going to fix it with a magic voting program? This community can't even come to a consensus to admit that Dropbox quite obviously has the hands of the Powers That Be rammed firmly into its asshole now (I apologize - maybe Condoleezza Rice had a revelation after advocating for the invasion of Iraq on false premises, and now deeply cares about the security of the world's porn backups).

Besides, I'd argue the system we have now is better, because it's hard to forge votes when you have hundreds of people across many municipalities counting votes and thinking for themselves. If you implement a national voting system in software, it would be much easier to corrupt by virtue of being centralized.

jgg | 12 years ago | on: WPA2 wireless security cracked

However, it is the de-authentication step in the wireless setup that represents a much more accessible entry point for an intruder with the appropriate hacking tools. As part of their purported security protocols routers using WPA2 must reconnect and re-authenticate devices periodically and share a new key each time. The team points out that the de-authentication step essentially leaves a backdoor unlocked albeit temporarily. Temporarily is long enough for a fast-wireless scanner and a determined intruder. They also point out that while restricting network access to specific devices with a given identifier, their media access control address (MAC address), these can be spoofed.

That doesn't sound like something that would take a year of CPU time, but since no one seems to have actually read and analyzed the paper yet, who knows.
