l1ambda's comments

l1ambda | 7 years ago | on: The Streets Were Never Free. Congestion Pricing Finally Makes That Plain

I post something like this every time the congestion pricing subject comes up. We have congestion pricing in Minneapolis / St Paul and it just works. If I'm in a hurry, I have the option to use the priced lane. We're rolling it out to all of the major highways (it's on 3 of them so far). I wish we had it on more lanes on all highways already, because what is the point of a 60mph road if most of the time you are barely going 20mph, or trip lengths are otherwise completely unpredictable? It's also particularly important to people who have to travel between multiple jobs, or who have children in school or day care, for whom consistency in travel time is extremely valuable.

There is a concept in economics called spontaneous order. Once the cost of congestion becomes apparent through the price mechanism, society can reconfigure itself to adapt to it. You just have to have the price mechanism in place to signal it. People will figure it out and adapt once you have implemented congestion pricing. Practically every medium-size or larger city in America has terrible traffic congestion problems.

Importantly, you have to resist the temptation to set a price ceiling (like Houston did at $8). A ceiling prevents the price mechanism from working and causes a shortage of road capacity, so you end up with no material change to congestion and what is basically just a bad tax. It's also important that it stays purely congestion pricing and doesn't become a vehicle for cronyism. Changes in laws, zoning, and public transit will follow (e.g., once the true cost of congestion is visible, it spurs demand for apartments and transit options near the city center, which in turn reduces traffic congestion).
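To make the ceiling-causes-shortage point concrete, here is a toy sketch. All the numbers and the linear demand curve are invented for illustration; this is not a real traffic model.

```python
# Toy model: a priced lane with fixed capacity and a hypothetical linear
# demand curve. Illustrates why a capped toll leaves the lane congested.

CAPACITY = 1800  # cars/hour the lane carries at free-flow speed (assumed)

def demand(price, base=3000, slope=100):
    """Cars per hour wanting the lane at a given toll (made-up linear curve)."""
    return max(0, base - slope * price)

def clearing_price(base=3000, slope=100, capacity=CAPACITY):
    """The toll at which demand exactly matches capacity."""
    return (base - capacity) / slope

p_star = clearing_price()           # 12.0 in this toy example
assert demand(p_star) == CAPACITY   # at the clearing price the lane flows freely

# A Houston-style $8 cap below the clearing price: demand exceeds capacity,
# so drivers pay a toll AND still sit in congestion.
excess = demand(8) - CAPACITY       # 400 cars/hour of unserved demand
print(p_star, demand(8), excess)
```

The exact shape of the demand curve doesn't matter for the argument: any cap below the market-clearing price leaves demand above capacity, which is the shortage.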

I am excited for NYC to pilot this and hope it becomes a success for that city.

l1ambda | 7 years ago | on: Taxing Uber and Lyft rides is L.A's latest plan to free up congested roads

This. We have this in Minneapolis / St Paul. It just _works_. We're rolling it out on most of the highways. I wish we had it on _all_ lanes on all highways already, because what is the point of having a 60mph road if most of the time you are barely going 20mph, or trip lengths are otherwise completely unpredictable?

There is a concept in economics called spontaneous order. Once the cost of congestion becomes apparent through the price mechanism, society can reconfigure itself to adapt to it. But you have to have the price mechanism first. People will say, what about this, what about that. It doesn't matter: people will figure it out and adapt, but you need to implement congestion pricing right now, because practically every medium-size or larger city in America has terrible traffic congestion problems. Importantly, you cannot set a price ceiling (like Houston did at $8). A ceiling does not allow the price mechanism to work, causes a shortage of road capacity, and leaves you with no material change.

Changes in laws, zoning, and public transit will occur. Given a little bit of time, various kinds of social order will emerge from the interactions of self-interested individuals.

l1ambda | 7 years ago | on: The Student Debt Problem Is Worse Than We Imagined

You used to be able to apprentice for just about any profession, often earning money while doing so. Perhaps social norms need to adjust and bring back apprenticeship. Perhaps we are on that course now, having reached the tipping point of student loan debt and with the death of the 4-year degree.

Education is free (MIT publishes free courses, and there are lots of opportunities nowadays for self-study); it's the degree that's expensive.

l1ambda | 7 years ago | on: Goodbye Microservices: From 100s of problem children to 1 superstar

Microservices typically have two goals: performance and modularity. However, porting a typical webapp to a fast, modular compiled language will typically achieve at least one (often two) orders of magnitude of performance improvement over a typical interpreted language. We see this prove true more often than not, and a large performance gain off the bat like that may obviate much of the desire to move to microservices in the first place.

Furthermore, if one uses modules (as one should), one can arbitrarily and somewhat trivially run those modules either in-process (compiled in) or out-of-process (via REST, gRPC, Cap'n Proto, or another RPC system), i.e., in a separate service/microservice/whatever you want to call it. This gives you a best-of-both-worlds approach where code can run in a monolith or in a separate service as needed. It changes the decision from a rigid "monolith vs. microservices" choice to a fluid process where things can be changed rather easily, even on a whim. When modularity is the goal, services become something of a secondary concern.
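A minimal Python sketch of that idea. The `Pricer` interface, its methods, and the `fake_rpc` transport are all invented for illustration; the fake transport stands in for a real REST/gRPC hop so the example is self-contained.

```python
# One module, two deployments: the caller codes against an interface and
# doesn't care whether the implementation is in-process or behind RPC.
import json
from typing import Protocol

class Pricer(Protocol):
    def quote(self, miles: float) -> float: ...

class InProcessPricer:
    """The module linked directly into the monolith."""
    def quote(self, miles: float) -> float:
        return round(2.50 + 1.75 * miles, 2)

def fake_rpc(payload: bytes) -> bytes:
    """Stand-in for the network hop: the same logic hosted 'elsewhere'."""
    req = json.loads(payload)
    return json.dumps({"price": InProcessPricer().quote(req["miles"])}).encode()

class RemotePricer:
    """The same interface, fulfilled across the (simulated) RPC boundary."""
    def quote(self, miles: float) -> float:
        resp = json.loads(fake_rpc(json.dumps({"miles": miles}).encode()))
        return resp["price"]

def checkout(pricer: Pricer, miles: float) -> float:
    # Caller code is identical either way; the deployment choice is swappable.
    return pricer.quote(miles)

assert checkout(InProcessPricer(), 10) == checkout(RemotePricer(), 10)
```

The point is that the caller depends only on the interface, so moving a module out of process is a deployment decision rather than a rewrite.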

Microservices are often used as a sledgehammer to force modularity and performance onto languages that lack proper modularity and/or are innately slow or otherwise inefficient. In exchange, you pay orchestration costs and the performance penalty of copying data across multiple processes and networks, and in some cases it becomes harder to maintain a single source of truth.

Probably a good approach for a typical webapp looking to improve performance would be to first port the core logic to a modern, fast, compiled language with modules, evaluate the performance from there, and then determine whether any modules should be split out into separate processes or services.

Like NoSQL, microservices can be (but are not always) a case of the cure being worse than the disease; however, they can also be useful in certain situations or architectures. Like anything in engineering, there are tradeoffs, and it depends on your situation.

l1ambda | 7 years ago | on: Web Framework Benchmarks

It's not so much about immediate performance, but about headroom.

The blog post at https://www.techempower.com/blog/ puts it better than I can:

> I argue that if you raise the framework's performance ceiling, application developers get the headroom—which is a type of luxury—to develop their application more freely (rapidly, brute-force, carefully, carelessly, or somewhere in between). In large part, they can defer the mental burden of worrying about performance, and in some cases can defer that concern forever. Developers on slower platforms often have so thoroughly internalized the limitations of their platform that they don't even recognize the resulting pathologies: Slow platforms yield premature architectural complexity as the weapons of “high-scale” such as message queues, caches, job queues, worker clusters, and beyond are introduced at load levels that simply should not warrant the complexity.

l1ambda | 8 years ago | on: ZFS for Linux

The crux of the Software Freedom Conservancy's argument hinges on their belief that there is no distinction between a statically and a dynamically compiled Linux module. "Module" is, of course, a programming term for a concern separated out into a logically discrete unit that interacts with the rest of the system through well-defined interfaces.

The Software Freedom Conservancy argues that not only is zfs.ko a derived work of Linux, but that any dynamically loaded Linux kernel module is a derived work of Linux. To me that feels like a bit of a stretch, and one could make the opposite argument.

By virtue of the fact that zfs.ko is an optional module rather than an integral part of Linux, it's not a derived work. zfs.ko must be a separate entity from Linux in the first place; since zfs.ko is a module, it must be logically discrete, interacting with Linux only through the standard kernel interfaces. Further, it's likely that only if ZFS were distributed as a changeset against the Linux source code (rather than buildable as a distinct module) could it possibly be a derived work of Linux.

The statically vs. dynamically compiled (linked) distinction doesn't hold water: it is an arbitrary technical distinction that only concerns how things are laid out on the filesystem and/or loaded into memory. Modules, plugins, add-ons, and similar things can almost always be either statically compiled in or dynamically loaded, and there are many examples of both in existing software.
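A loose analogy in Python (kernel modules are C, but the principle is the same): the same module can be bound at load time with an ordinary import or loaded dynamically by name at runtime, and the resulting object is identical either way.

```python
# The same module obtained two ways: bound "statically" at import time,
# or loaded dynamically by name at runtime.
import importlib
import json  # bound at load time, analogous to compiling a module in

dynamic_json = importlib.import_module("json")  # loaded dynamically by name

assert dynamic_json is json  # the very same module object either way
assert dynamic_json.loads('{"a": 1}') == {"a": 1}
```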

l1ambda | 8 years ago | on: ZFS for Linux

Their conclusion was that zfs.ko, as a self-contained file system module, is not a derivative work of the Linux kernel but rather a derivative work of OpenZFS and OpenSolaris.

l1ambda | 8 years ago | on: ZFS for Linux

I keep hearing that the GPL and CDDL are incompatible. But repeating something doesn't make it true. I have read both licenses and have come to the conclusion that they're unlikely to be incompatible in the first place. Unfortunately, every time this line of argument comes up, it tends to get buried in discussion.

The incompatibility claim seems to rest on 3 arguments:

1) Statements from the FSF website; but those have no legal bearing against the actual text of the licenses.

2) Claims that the CDDL was engineered to be incompatible with the GPL; again, an interesting hypothesis, but one with little bearing against the actual text of the licenses.

3) The derivative-work argument from the GPL. This seems to be the only one that could hold water; however, I doubt that a court would find ZFS to be a derivative work of the Linux kernel.

Furthermore, even if you argue that the "derivative work" clause makes them incompatible, there's no way to actually prosecute such a violation. Copyright infringement is a tort, which means you have to show that someone violated the rules AND caused quantifiable harm. What would our theory of harm be? (https://blog.hansenpartnership.com/are-gplv2-and-cddl-incomp...)

l1ambda | 8 years ago | on: DC's I-66 express lanes debut with $34.50 toll, among the highest in U.S

Since the whole point is to reduce congestion, implementing a cap (price ceiling) is absurd. When a price ceiling is set, a shortage occurs. In the case of road usage pricing, a shortage means a shortage of capacity.

In other words, you end up with traffic congestion and tolls, the worst of both worlds.

I read a similar thing about Houston, where the toll is capped at $8 and has little effect on reducing congestion. The first course of action should be to remove the price cap. (And expand the number of priced lanes, which would decrease the cost paid by each driver by spreading demand out over more lanes.)

You can't have your cake and eat it too by imposing usage pricing with price caps.

Economics 101: price ceilings cause shortages.
