osth's comments

osth | 12 years ago | on: Pace of modern life: UK v Denmark

"The result of this is that we have a large over educated part of the population who will have a very hard time finding a job."

Consider the alternatives: a large under educated part of the population who will/will not have a very hard time finding a job.

An educated population makes for a better society, irrespective of what "the economy" may or may not have to offer[1]. Economies can change on relatively short timeframes, while the character of a society[2] changes much more slowly, if at all[3].

1. Author's opinion.

2. Substitute "character of society" with "culture" if you wish.

3. Another of the author's opinions.

"Over educated" is interesting terminology. What exactly does that mean? Do we define education only in economic terms? Why did wealthy classes historically seek to become educated, even when "economically speaking" they already had society's best resource allocation?

osth | 12 years ago | on: Google's ‘Gopher Team’

"If code doesn't receive constant love, it turns to shit."

But it sounds like this code was receiving "love", only the "love" was coming from run-of-the-mill "just get it to work" C++ programmers.

I guess we need context to understand Fitzpatrick's statement. Perhaps he just means code at Google.

Are there any examples of code that has survived for many years without "constant love"? Netcat has not received "constant love" over the years. It hasn't turned to shit. Neither has the original awk. I can think of many other examples. These programs have proven to need very little maintenance.

I posit that simple programs that are well written do not need "constant love". They only need love when there's a bug. And there are plenty of programs that are in constant use where no bug has been discovered for many years. The bugs were vetted and fixed early on, decades ago.

Hence I disagree with Fitzpatrick.

osth | 12 years ago | on: Tox: secure messaging for everyone

Questions:

0. How important is simplicity (modularity) to the project?

1. Will Tox work for user "idontrungentoo"? Will it compile on Solaris, BSD, etc.?

2. Will the GUI be optional? If not, why is it mandatory?

3. Can Tox work without DHT? What if two users just want to call each other without connecting to tens, hundreds or thousands of strangers? If there are problems with the DHT, are they SOL?

It would be good to have competing teams all working on some similar system (a Skype alternative) and then hold an open bake-off, instead of just idle criticism in forums like this one. That way we could see which system actually works best, instead of theorizing about design choices and taking random anecdotes from alleged users on faith.

osth | 12 years ago | on: Why YouTube buffers: The secret deals that make and break online video

Sounds like mirroring and ftp servers (or even bittorrent) would work just fine for distributing copyright-cleared video. Indeed that's how I remember it being done before YouTube and Netflix existed.

Today, with the explosion of online video, the copyright-clearance step could be administered by companies (as it already is, e.g., YouTube), but the servers providing distribution to the users at the network edge do not have to be run by companies.

1. Recall that storage is quite inexpensive, and users today are quite capable of providing their own storage, at home or on the go, for gigabytes or terabytes of video.

2. Recall the "content-centric" networking idea Van Jacobson has presented to Googlers. Does it really matter where the user gets the content? No. What is important is that it is authentic (and copyright-cleared).
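The point in [2] can be sketched in a few lines: if the publisher announces a digest for the content, any server (or stranger) can supply the bytes and the client verifies authenticity locally. A minimal illustration (the payload is made up; real systems would layer signatures on top):

```python
import hashlib

def verify_content(data: bytes, expected_sha256: str) -> bool:
    """Return True if data matches the publisher's digest,
    regardless of which mirror supplied the bytes."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The publisher announces the digest once; mirrors just serve bytes.
video = b"example video payload"
digest = hashlib.sha256(video).hexdigest()

assert verify_content(video, digest)            # authentic copy, any source
assert not verify_content(b"tampered", digest)  # corrupted or forged copy
```

With this in place, where the bytes come from is purely a performance question, not a trust question.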

osth | 12 years ago | on: BufferBloat: What's Wrong with the Internet? (2011)

VJ: "... Yet the economics of the internet tends to ensure..."

This may be the problem. Change the economics, solve the problem. Specifically, do away with the idea of "backbones" for ordinary users. Leave the backbones to research and military networks. That's what they were originally designed for.

Make the (people's) internet more like Baran's original idea. His diagrams did not have backbones. They looked more like "mesh".

A true mesh internet might mean slower speeds for its users, but that design will also reduce latency compared to our current "backboned" internet because there will be fewer "fast to slow" transitions (assuming users all have more or less the same capacity for moving packets).

osth | 12 years ago | on: Ask HN: Take down my reverse-engineered Snapchat lib because they asked?

Schaffer: "... we consider Snaphax to be unlawful circumvention device under ..."

Lackner: Mr. Schaffer, are you a lawyer? Please elaborate on why you consider Snaphax to be unlawful circumvention. I will assess the merits of your argument and then make a decision.

While people in this thread all give the customary knee-jerk "get a lawyer" response, consider that:

1. The request did not come from Snapchat's lawyers, if they have any retained for the purpose of DMCA claims. Surely they must, right?

2. It does not state what happens if Lackner does not comply. There's no threat of legal action. It just asks Lackner to remove the code from Github.

As such, there's no reason not to ask Schaffer to clarify why he thinks there is a problem.

If lawyers are not involved yet, then asking questions is free.

If this was a clear DMCA violation, then why didn't Schaffer send this to Snapchat's lawyers to handle?

Maybe because he might not get the answer he wanted: that it's a clear DMCA violation and an easy win for Snapchat.

Any lawyer can be asked to send a threatening DMCA violation letter. They will almost always say, "Yes, we can do that for you."

But sending a threatening letter does not mean it's a slam dunk win if the recipient does not comply with the demands in the letter. Sometimes threats are hollow. The sender may have no intention of pursuing litigation any further than sending demand letters. It simply might not be worth the money to pursue litigation over something like Snaphax. If this bit of PHP was that big of a deal to Snapchat, why didn't the request to remove it from Github come from Snapchat's lawyers? Where's the line about pursuing all legal remedies?

Not to mention that by sending a threatening letter with no details on why the sender thinks the code at issue is a DMCA violation, there's a risk that the recipient might post a link to the code on HN and set off a "Github fork bomb". Ouch.

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

I don't measure latency as including rendering time. Maybe I'm not "rendering" anything except pure html.

I measure HTTP latency as the time it takes to retrieve the resources.

Whatever happens after that is up to the user. Maybe she wants to just read plain text (think text-only Google cache). Maybe she wants to view images. Maybe she wants to view video. Maybe she only wants resources from one host. Maybe she does not want resources from ad servers. We just do not know. Today's webpages are so often collections of resources from a variety of hosts. We can't presume that the user will be interested in each and every resource.

Of course those doing web development like to make lots of presumptions about how users will view a webpage. Still, these developers must tolerate that users' connection speeds vary, the computers they use vary, and the browsers they use vary (some routinely violating "standards"). Heck, some users might even clear their browser cache now and again.

But HTTP is not web development. It's just a way to request and submit resources. Nothing more, and nothing less.
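To make that definition concrete, here is a sketch of measuring retrieval latency against a throwaway local server; the handler and timing below are illustrative only, not a benchmark:

```python
import http.client
import http.server
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Latency as defined above: time to retrieve the resource, nothing more.
# What the user's software does with the bytes afterward is not counted.
start = time.monotonic()
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
body = conn.getresponse().read()
latency = time.monotonic() - start
conn.close()
server.shutdown()

print(f"retrieved {len(body)} bytes in {latency * 1000:.1f} ms")
```

Rendering, if any, happens after `latency` is recorded, which is the whole point of the distinction.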

osth | 12 years ago | on: The Laws you can't see

"I actually don't think that's the case."

Neither do I. I think people make honest mistakes.

Alas, under pressure, people also try to cover them up.

I forget where I read it, but some insider in the intelligence community suggested that Obama's behavior could be explained by the fact that he never had access to such secrets before (unlike the Bush family, who are closely connected with the CIA) and he quickly became obsessed with his newfound power of secrecy.

The question is whether he is man enough to admit he (and those before him) made a mistake and whether "yes we can" fix it.

osth | 12 years ago | on: The 7-bit Internet

Amen, OP.

herge: You could also use the "data" program from Hobbit's netcat.
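For those unfamiliar: "data" generates binary output from a textual description. The general trick of carrying 8-bit data over a 7-bit channel can be sketched with plain hex encoding (an illustration of the idea, not data's actual input format):

```python
import binascii

payload = bytes(range(256))  # full 8-bit range, unsafe on a 7-bit channel

# Encode to hex: every byte becomes two ASCII characters, all 7-bit safe.
wire = binascii.hexlify(payload)
assert all(b < 128 for b in wire)

# Decode on the far side; the round trip is lossless.
assert binascii.unhexlify(wire) == payload
```

The cost is a 2x size inflation, which is why denser 7-bit-safe encodings like base64 (4 bytes per 3) won out for mail.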

osth | 12 years ago | on: The Laws you can't see

Still, the best article I've seen on this whole affair is the one at foreignpolicy.com back on June 11:

http://www.foreignpolicy.com/2012/06/11/to_protect_and_defen...

The US actually tried to impeach a president for lying about a sexual encounter with a White House intern, yet does not seem to care much about a president who, despite having taught Constitutional Law (G.W. Bush was not even a lawyer), does not appear to know his primary responsibility as president - to protect and defend the Constitution - and proceeds to violate the oath of office he swore (twice) after being elected and reelected.

Do we really care more about presidents who lie about sex scandals than presidents who violate their oath? Maybe we should change the oath to state "I do solemnly swear that I will not have sexual encounters with any White House staff." Under that oath, JFK would have been in clear violation.

The Constitution is far more important than any single president, administration or election.

I guess there are still some folks who might still believe that there's really been no obvious violations of the Constitution in the course of these surveillance programs. But then why did so many senior lawyers at DOJ, even the Attorney General himself, oppose and threaten to resign (or resign) over these programs when they learned about them years ago? How many more lawyers need to look at the facts and say, "Something is not right here," before we all agree to get to work and fix it?

Bush Jr. too was at fault for what has taken place, but it's Obama who has been fully caught out (thanks to Snowden). Why should Obama be excused for this? The issue is neither personal nor political (as it may have been with Clinton... as if he were the first president ever to cheat and to lie); it is a matter of protecting the Constitution. What higher calling is there for any public servant? And while it's unfortunate the issue has come to public light during his term, this is much more important than Mr. Obama, his presidency, his administration, or his legacy.

"Yes we scan." Time to stand down, my brother.

EDIT: added "www." to link

osth | 12 years ago | on: HTTP 2.0

"What happened to simple protocols?"

Answer: The Internet is still running on them, 30 years later.

Whenever I read something like "simplicity is hard", it makes me cringe. I hear that a lot, and I see evidence of gratuitous complexity everywhere I look these days. I'd hazard a guess the engineers behind SPDY would find simplicity (and reliability) boring.

Debugging binary protocols is either great job security for overeager engineers like the SPDY team or a great waste of our collective time. I'll let you all decide which.

osth | 12 years ago | on: Why HTTP/2.0 does not seem interesting (2012)

I wonder if header compression is primarily to allow for ubiquitous, large cookies.

Cookies were originally, and with few exceptions remain, a hack to add state to transactions that were never intended to be stateful.

If header compression is indeed driven by the growing prevalence and size of cookies, then HTTP/2 is an effort to accommodate a hack. Not very interesting.

Some hacks that find their way into RFCs are difficult to remove because the transition process would be unreasonably expensive, like replacing the "sophomoric" compression scheme in DNS with something more sensible like LZ77 (credit: djb). I guess we might see some passionate arguments by web developers about the great expense of removing cookies from the HTTP standard and replacing them with a session facility, but I think the (long term) benefits easily outweigh the (short term) costs.
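The cookie suspicion is easy to illustrate. SPDY compressed headers with zlib (DEFLATE, i.e. LZ77 plus Huffman coding), and a large, repetitive cookie collapses to almost nothing, which is exactly the kind of win header compression buys. A sketch (the host and cookie below are made up):

```python
import zlib

# A typical request header block padded with a large cookie.
headers = (
    b"GET /page HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Cookie: session=" + b"A" * 2000 + b"\r\n\r\n"
)

compressed = zlib.compress(headers)
print(len(headers), "->", len(compressed), "bytes")

assert len(compressed) < len(headers) // 10  # repetitive cookies squash well
assert zlib.decompress(compressed) == headers  # and the round trip is exact
```

(HTTP/2 later replaced zlib header compression with HPACK after the CRIME attack showed that compressing secrets alongside attacker-controlled data leaks information; the incentive, though, is the same.)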

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

Yes, a major appeal of pipelining to me is efficiency with respect to open connections. It's easier to monitor the progress of one connection sending multiple HTTP verbs than multiple connections each sending one verb.

Whether multiple verbs over one connection are processed by a given httpd more efficiently than single verbs over single connections is another issue. IME (a purely client-side perspective), pipelining does speed things up. But then I'm not using Firefox to do the pipelining.

I'm sure the team responsible for Googlebot would have some insight on this question. (And I wonder how much SPDY makes the bot's job easier?)

In any event, multiplexing would appear to solve the open connections issue. And I don't doubt it will consistently beat HTTP/1.1 pipelining alone. I'm a big fan of multiplexing (for peer-to-peer "connections"), but I am perplexed by why it's being applied at the high level of HTTP (and hence restricted to TCP, and all of its own inefficiencies and limitations).

I'm curious about something you said earlier. You said something about the "overhead" of using netcat. It's a relatively small, simple program with modest resource requirements. What did you mean by overhead?

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

Thanks for the reading material.

You omitted the sentence before your excerpt where Mr. McManus suggests we move to a multiplexed pipelined protocol for HTTP.

I'll go further. I say we need a lower level, large-framed, multiplexed protocol, carried over UDP, that can accommodate HTTP, SMTP, etc. Why restrict multiplexing to HTTP and "web browsers"? Why are we funnelling everything through a web browser ("HTTP is the new waist") and looking to the web browser as the key to all evolution?

It seems obvious to me that what we all want is end-to-end, peer-to-peer connectivity. Though users cannot articulate it, it's clear they expect to have "stable connections". This end-to-end connectivity was the original state of the internet, before "firewalls". Client-server is only so useful. It seems to me we want a "local" copy of the data sources we need to access; we want data "synced" across locations. A poor substitute for such "local copies" has been moving data to network facilities located at the edge, shortening the distance to the user.
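The framing part of such a protocol is simple to sketch: tag each frame with a channel id and a length so unrelated streams can share one transport. Everything else (UDP carriage, retransmission, flow control) is omitted; this shows only the demultiplexing idea:

```python
import struct

def frame(channel: int, payload: bytes) -> bytes:
    """Prefix payload with a 2-byte channel id and a 4-byte length."""
    return struct.pack("!HI", channel, len(payload)) + payload

def deframe(buf: bytes):
    """Yield (channel, payload) pairs from a byte stream of frames."""
    off = 0
    while off < len(buf):
        channel, length = struct.unpack_from("!HI", buf, off)
        off += 6  # header size: 2 + 4 bytes
        yield channel, buf[off:off + length]
        off += length

# Interleave an HTTP-ish and an SMTP-ish exchange on channels 1 and 2.
wire = frame(1, b"GET / HTTP/1.1\r\n\r\n") + frame(2, b"EHLO client\r\n")
assert list(deframe(wire)) == [
    (1, b"GET / HTTP/1.1\r\n\r\n"),
    (2, b"EHLO client\r\n"),
]
```

Nothing in the framing cares whether the payload is HTTP, SMTP, or anything else, which is the point: multiplexing does not have to live inside one application protocol.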

But, back to reality: in the case of http servers, common sense tells me that opening myriad connections to (often busy) web servers to retrieve myriad resources is more prone to delays and other problems (for any number of reasons) than opening a single connection to retrieve the same resources. Moreover, are his observations in the context of one browser?

I guess when you work on a browser development team, you might get a sort of tunnel vision, where the browser becomes the center of the universe.

If you dream of multiplexing over stable connections, then you should dream bigger than the web browser. IMO.

I'm aware of a bug in some PHP databases with keep-alive after POST. I mainly use pipelining for document retrieval (versus document submission), so I am not a good judge of this. What I'm curious about is where keep-alive after POST would be desirable. You alluded to that usage scenario (a series of GETs after a large POST).

osth | 12 years ago | on: Lincoln’s Surveillance State

OK, while we are reviewing the beliefs and actions of past Secretaries of War, how about Henry L. Stimson? He once said, "Gentlemen do not read each other's mail."

Now, this was while he was Sec. of State, before he became Sec. of War. His views later changed, after he took the position as Sec. of War. Ask yourself, "Why?" [1]

It's an interesting piece of history: http://wikipedia.org/wiki/Black_Chamber

1. Here's a possible way to think about it: Programmers are familiar with the idea that software is not inherently good or evil; it's how it's used that matters. "A Victorinox can be used to fix your car (good) but it can also be used to disassemble it (evil)." Similarly the data being gathered by mass surveillance programs can be used to further "national security", or it could be used for other (evil) things.

If you accept this way of thinking about surveillance by a government of its own citizens, then it stands to reason that there should be some rules about how the data can be used. Checks and balances. Alas, as we see, secrecy governs all aspects of the surveillance process. There is no judicial review of the collectors, except by a secret court... and one that itself lacks details about the process (i.e. how the data is collected). How can the public, even by proxy of its representatives, ever hope to review the application of these programs if it is not even permitted to know about them?

Under this sort of scheme, if a young man with good intentions informs the public, he's already broken the law. No one needs to prove he's harmed national security. It's assumed. Not that she is a good comparison, but I guess Rosa Parks broke the law too. She was damned if she did (arrested) and damned if she didn't (living in a segregated country). The thing is, after she was arrested, she had the support of many people, some of whom had considerable influence.

osth | 12 years ago | on: KORE – A fast SPDY-capable webserver for web development in C

Yes, I understand there are buggy servers and proxies... and I use a browser that has settings to accommodate them. However... I do not know of HTTP bugs that affect *pipelining*. And... in addition, for pipelining, I do not use a browser to do the initial retrieval. I use something like netcat to fetch, then view the results with a browser.

Can you give me a list of buggy servers where my HTTP/1.1 pipelining will not work as desired? I've been doing pipelining for 10 years (that's quite a few servers I've tried) with no problems.

The arguments made by SPDY fans (e.g. Google employees) all seem plausible. But I wonder why they are never supported by evidence. IOW, please show me, don't just tell me. SPDY seems to solve "problems" I'm not having. Where can I see these HTTP/1.1 pipelining problems (not just problems with browsers like Firefox or Chrome) in action? I'd love to try some of the buggy servers you allude to and see if they slow down pipelining with netcat.
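For what it's worth, the experiment is easy to reproduce without special tooling. A sketch against a throwaway local HTTP/1.1 server, writing two GETs in one shot exactly as one would with printf and netcat:

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so pipelining is possible
    def do_GET(self):
        body = self.path.encode()  # echo the requested path
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests written back to back on one connection -- the same thing
# `printf 'GET /a ...GET /b ...' | nc host 80` does.
req = (b"GET /a HTTP/1.1\r\nHost: x\r\n\r\n"
       b"GET /b HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")
with socket.create_connection(("127.0.0.1", server.server_port)) as s:
    s.sendall(req)
    resp = b""
    while chunk := s.recv(4096):
        resp += chunk
server.shutdown()

assert resp.count(b"200 OK") == 2  # both responses came back, in order
assert b"/a" in resp and b"/b" in resp
```

Swap the local server for a real one and you have the test I run: if a server mishandles pipelining, the second response is missing, garbled, or out of order.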
