samrift's comments

samrift | 2 years ago | on: VSCode config to disable annoyances – telemetry, notifications, welcome pages

This license applies only to the compiled/packaged extension delivered via the extension marketplace. The source code to this VSCode extension is available on GitHub under the permissive MIT License (as is the .NET runtime).

If you are looking for evidence that .NET Core is maliciously pretending to be open source in order to move to a language reliant on a JVM, I don't believe this would qualify.

samrift | 4 years ago | on: The existence of true one-way functions depends on Kolmogorov complexity

> But as a layman to cryptography I don't get what is the significant difference between this finding and Levin's. Is there anyone who can explain this to someone with an undergraduate level of mathematical backgrounds?

Here's a massive simplification. Let's say you tell me "Here's a conjecture: since multiplying 2x3 is hard, multiplying any combination of two numbers from 1-5 is hard. I can't prove that it is hard, but if all of them are hard, then my cryptography works!"

Levin's complete function approach: "Here's a function that's at least as hard as any given combination of two numbers: (1x1) + (1x2) + (1x3) + (1x4) + (1x5) + (2x2) + (2x3)... [and so on]. If that turns out to be easy, then your conjecture is wrong!"

The article states that there is a novel and surprising connection: "Proving the difficulty of multiplying any random combination of numbers under 5 is equivalent to solving this seemingly unrelated (and as yet unsolved) problem in geometry."

The two approaches are pretty different and have very different ramifications. While in a lot of cases, the approach of "here's an equivalent problem" doesn't actually help, in some cases it turns out that the equivalent problem is easier to solve. Or that the connection between the two opens up completely new approaches to solutions - or even applications of the math involved. Sometimes just the proof itself causes new connections! Often it takes considerable time before the impacts show themselves.
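To make the analogy concrete, here's a toy sketch (purely illustrative numbers, nothing like Levin's actual construction): the "complete" function bundles every pair-product from 1-5 into a single value, so any efficient way to evaluate the bundle would contradict the claim that all of the individual multiplications are hard.

```python
from itertools import combinations_with_replacement

# Toy "complete" function from the analogy: sum every product of two
# numbers drawn from 1-5 (with repetition). If someone finds an easy
# way to compute this bundle, the conjecture "all of those pairwise
# multiplications are hard" must be wrong.
def complete_function(nums=range(1, 6)):
    return sum(a * b for a, b in combinations_with_replacement(nums, 2))

print(complete_function())  # 140
```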

samrift | 7 years ago | on: It’s Time to Move on from Two Phase Commit

Not trying to say you can't do it - I'm sure I'm just not informed enough.

However, I don't see how MVCC could fix a multi-worker issue that would cause category (1) aborts in your scenario.

With MVCC, if another worker concurrently modifies a record (say 'Y'), I continue to read the pre-modification value once I've read it. So my value for Y may be incorrect between the time I check that it's greater than 0 and the time I set X to 42. My constraint check was invalid.

At this point you either have a transaction that can't commit despite your guarantee that it can (because my conditional passed!), or an 'eventual consistency' model where the consistency will be reconciled outside the scope of the original change (and in this model you wouldn't use 2PC anyway).
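Here's a minimal sketch of that anomaly using a hypothetical snapshot store (not modeled on any real database): the transaction validates Y against its private snapshot, a concurrent worker zeroes Y, and the write of X still commits.

```python
# Hypothetical snapshot store illustrating the stale-read problem above.
# Each transaction reads from a private snapshot taken at begin(), and
# commit() applies writes with no conflict detection at all.
class SnapshotStore:
    def __init__(self):
        self.data = {"X": 0, "Y": 1}

    def begin(self):
        return dict(self.data)    # private snapshot of current versions

    def commit(self, writes):
        self.data.update(writes)  # no validation against the snapshot

db = SnapshotStore()

snap = db.begin()        # my transaction starts
assert snap["Y"] > 0     # constraint check passes against the snapshot

db.commit({"Y": 0})      # another worker concurrently zeroes Y

db.commit({"X": 42})     # my write commits anyway, based on a stale Y
print(db.data)           # {'X': 42, 'Y': 0} -- "Y > 0" no longer holds
```

A real MVCC engine running at a serializable level would have to detect this read-write conflict and abort one of the transactions, which is exactly the category (1) abort in question.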

samrift | 9 years ago | on: Google and Facebook ad traffic is 90% useless

Exactly - in my opinion there is no "identity theft". There is criminal fraud, of which the banks are the victim. However, instead of dealing with that fraud, they pass the costs on to an unrelated individual, shrug, and say "you deal with it".

Google does something much like this - but without regulation or clear appeal process.

samrift | 10 years ago | on: Introducing unlimited private repositories

We are a small shop that has 4 repositories and 36 users (over half the company). About 10 of those users actually contribute code, the others are monitoring issues, pulling code just to run tests or create distributions, or bots.

If we accidentally hit the upgrade button (we won't), our cost would go from $300/year to $3,648/year. Since only a small number of projects are on GitHub - we use TFS for our main project and GitHub for tools - it's just a non-starter.

Heck, 5 "bot accounts" is $540/year to support CI builds and slack notifications. Yikes! More than we pay now.
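The arithmetic behind those figures, assuming the old plan was $25/month for our repo tier and the new per-seat plan was $25/month with 5 seats included plus $9/month per extra seat. The plan constants are my reading of the pricing at the time, so treat them as assumptions:

```python
# Back-of-the-envelope yearly costs; plan constants are assumptions
# based on my reading of the pricing announcement.
OLD_PLAN_MONTHLY = 25                 # old per-repo plan, our tier
BASE_MONTHLY, INCLUDED_SEATS = 25, 5  # new per-seat plan base
PER_SEAT_MONTHLY = 9
USERS = 36

old_yearly = OLD_PLAN_MONTHLY * 12
new_yearly = (BASE_MONTHLY + (USERS - INCLUDED_SEATS) * PER_SEAT_MONTHLY) * 12
bots_yearly = 5 * PER_SEAT_MONTHLY * 12  # 5 bot accounts as paid seats

print(old_yearly, new_yearly, bots_yearly)  # 300 3648 540
```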

It seems like the only shops that would save money are little in-house development departments with 5 people and tons of projects. However, even there they would probably forgo using issue tracking in GitHub because of the extra per-user cost.

I would be very interested to see real stats on how many orgs actually "upgrade" to this new, more expensive pricing model vs. how many stay with the saner one. The real losers are orgs that can't sign up under the old model. The real winners will be the GitHub alternatives (GitLab, Bitbucket, etc.) that can use this as an opportunity to grow their user base.

samrift | 11 years ago | on: Abandon your DVCS and return to sanity

Here's my problem with this article: Return to sanity by adopting what?

I work on several small-ish projects, and due to the leads coming and going, there are a smattering of source control solutions. On a weekly basis, I use SVN, TFS, and Git about equally.

However, the workflow supported by Git is by far the best for me: I can commit locally as much as I want, rebase the commits or just merge them with work other people are doing, bisect if I broke something, and even branch locally to experiment or when I get interrupted by a question or another task.

Neither TFS nor SVN supports this at all. With both of them, I can't really check in until I'm completely done and sure I won't break the build or tests. I end up zipping my directory or fighting with patches/shelvesets that don't do what I want.

Now, does the way I want to work require a DVCS? I don't know - perhaps it doesn't in a theoretical sense. However, DVCS is the only one that actually supports that now.

So sure, we all push to the same repository and it could be a centralized system. But what would actually work? What can I switch to? I'm not abandoning Git for TFS or SVN, that's for sure. Nor Perforce which was also painful.

Yes, you've convinced me I don't need the "D" in DVCS. So make a centralized VCS that supports local branching, committing, and sane merging and diffing, and show it to me! But complaining that I'm not using one of the features of my DVCS has no bearing on whether I should abandon it.

samrift | 11 years ago | on: ComputerCOP: Dubious 'Internet Safety Software' Police Distributed to Families

No. Firemen have incentive to prevent fires. Home fires don't increase their revenue stream and in fact put them at risk of injury or death.

Police are not mandated to prevent crimes, and have no external incentives to do so (I say external because some officers want to reduce crime out of moral or ethical belief). In fact, looking at civil forfeiture laws and federal programs that pay officers directly for certain arrests, it can be argued that preventing crime is actually against their best interests, both organizationally and individually.

However, handing out a tool that always indicates suspicious activity and allows them to invade privacy... well, that fits exactly with the behavior that the laws and rules surrounding them have encouraged.

samrift | 12 years ago | on: I don't love the single responsibility principle

I agree that the SRP is a subjective rather than objective principle, and possibly general guidance that can and should be broken in specific circumstances. The article points that out, but rather than trying to apply prescriptive guidance to make it more objective in specific scenarios, the author seems to believe its subjective nature is too flawed to fix.

What's the issue?

> A good, valid principle must be clear and fundamentally objective. It should be a way of sorting solutions, or to compare solutions.

Okay, I'm listening. What is your alternative?

> It's not a clear-cut principle: it does not tell you how to code. It is purposefully not prescriptive. Coding is hard and principles should not take the place of thinking.

And... we're right back to subjective and general again. He sets up a straw man only to knock it down with an identical straw man.

Of course, reducing coupling and raising cohesion makes the class responsible for less and less... So are we back at the author's interpretation of the SRP? Seems like it to me.

samrift | 12 years ago | on: Death to the Technical Interview

A technical interview built around whiteboard coding, data-structure gotchas, or Mt. Fuji questions is a bad interview. It sucks that most technical interviews are bad, but that means we should fix them, not remove them.

Our team "technical interviews" by having a technology discussion with the applicant. One of the first lines of questioning is figuring out what they are most familiar with, so we can discuss that particular thing, area, library, or whatever. If a person can't discuss what they are most familiar with in a high-pressure interview, I'm not sure they can discuss something they just learned about in a team design meeting either. It's also a great way for candidates to figure out whether they want to work with us - something that is just as important as the reverse.

Quit making technical interviews a quiz show. Quit checking off boxes on your form. Quit with BAD technical interviews. But don't remove them entirely - that's just as dumb.

ps: GitHub activity is also a bad metric.

samrift | 12 years ago | on: In which I answer all of the questions (2012)

I'm not sure Jeff's lack of attribution was an oversight, since there's a pattern of behavior there. His earliest blogging didn't even use formatting to distinguish his own commentary from "quoted" parts, making it all seem his own. Some people stand on the shoulders of giants, but I often wonder if Jeff kicks giants in the shins and then stands on their backs. However, this isn't much (if any) plagiarism of Shanley - other than the title.

In any case, while I also have a hard time with Shanley's demeanor, I do try to separate the emotion and personal manner from the good points. It can be hard sometimes, that's for sure.

samrift | 12 years ago | on: 10x Engineer

Indeed, the writing on this piece is good. And the fact is that the studies bolstering a "10x engineer" belief did have some flaws.

However, a couple of anecdotes and some 140-character broadcasts hardly sway me that pronounced productivity differences are a myth.

samrift | 12 years ago | on: Hermit: a font for programmers, by a programmer

"a bridge for pedestrians, by a pedestrian".

While I applaud his effort, this is why wonderful movies aren't musically scored by the director or writer - they may know what they want, but that doesn't give them the ability to create it.

Like medicine "created by a school teacher".

samrift | 12 years ago | on: He got 1%, we can't hire him

Even with this, should they have the final say? I don't think so.

They should give you the risks - "He has a criminal record of fraud, and hiring him will expose us to lawsuits that could close our doors. In addition, we will have to quit doing business with our top 5 customers." But that's not a decision. It's information.

They can give you their capabilities - "I am unable to come up with a way for us to legally employ them in the U.S." or "It will cost us approximately $500K to manage the legal end of hiring them." This is not a decision; it's information.

They can give you personality notes - "He was extremely rude and insulting during the initial phone calls, and asked me to perform a sex act for money. I believe he will be a personality cancer in this company." This is also not a decision; it's information.

When you actually trust and value your HR department they no longer feel the need to be gatekeepers. They are a valuable source of information during the hiring process. Of course, you have to trust your hiring managers to make the right decisions based on this information.

Policies that give multiple departments "final say" or veto powers are put in place because the individuals are not trusted... which points back at poor hiring or promoting.

samrift | 12 years ago | on: Ask HN: What was OSS like before Github?

The bar to entry was much higher, both for a project and for a potential contributor. But that's not just because there was no GitHub.

Projects had to find a home (SourceForge ended up the major player here) and, for most, fill out some long form to be accepted. Then they had to get noticed - on Slashdot or similar - since there was very little "social" effect on the project host.

When a project did try to get noticed, it had to already be "good" enough to get people interested. Practically nobody would stumble on a half-implemented tool, because there was nowhere to stumble there from. And it wouldn't get the push and discussion on newsgroups or websites if it was half-implemented.

Contributors had to first find the active developers. Many times, that was not on the issues list or a newsgroup, but on an irc channel mentioned in passing in a readme file or on a newsgroup ("I was talking to bud on #ourproj about the thread scheduler..."). Once you found them, you could try to "break in".

Since 95% of the OSS projects you'd actually find were already "good enough", that meant there were already a good number of active contributors that had formed a clique and were understandably protective. In many cases, you had to work hard to get them to accept that you might help rather than be a burden. That meant hanging out on irc or newsgroups and trying to impress.

Once a project was abandoned, there was nearly no way to contribute; you'd have to fork it. Unfortunately, your fork would have to beat the searchability of the original project with even more buzz, or people would find the abandoned one and have no inkling that your fork existed.

Obviously this is an over-generalization... there's no way to really encompass how all OSS was, even if I knew - and many major OSS projects were not at all like this, since each had its own ecosystem and quirks.

In general, the discovery of projects and the ability to contribute grew in leaps and bounds around the time of GitHub. But that is also due (to a very large extent) to other tools such as blogs, social sites, news sites (e.g., Hacker News), and better search via Google. Really, GitHub is just one part of a huge increase in online interaction between people.

samrift | 12 years ago | on: 'Virtual Lolita' Aims to Trap Chatroom Paedophiles

"But researchers admit that it does have limitations and will need to be monitored. Although it has broad conversational abilities, it is not yet sophisticated enough to detect certain human traits like irony"

Luckily people using informal internet chatrooms never make ironic statements, so this software will be effective.

samrift | 12 years ago | on: Google Latitude will be retired

It is not (for me). I manage people whose locations I want to see, and Latitude is the tool I use to see them.

With the combining, I need to manage an entire social networking tool in order to see locations. I just want the radio, not a car with a radio in it.

This in addition to convincing people who have never even heard of google+ that they should sign up and manage for themselves, rather than simply accept an invitation email and forget about it.

It's like asking for scissors, and being handed a giant swiss army knife. Sure, it has scissors in it - but they are not as good, harder to find, and mixed in with a bunch of stuff I just don't want or need.
