ownedthx's comments

ownedthx | 10 years ago | on: Ruby on Guix

As someone who deploys Ruby software to our servers, I've come up with a formula that's served us well.

We take all gems associated with the app (like a Rails app, but we do it with others) and stuff them, along with the code, into a package created by fpm.

In other words, the packages are completely self-contained; no gems are used from the environment.

The key to this is the --path option to bundle install, i.e.:

bundle install --path vendor/bundle

This command makes sure all my gems are in a subdirectory of my project called vendor/bundle. When I create the Debian package with fpm, which includes all the source code, vendor/bundle comes along in the package.

After installing the app on our Linux servers, 'bundle exec rails server' uses the gems from vendor/bundle.

This approach does require that I create the Debian package on a machine that closely mirrors the deployed servers, so that natively compiled gems work correctly.
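
The whole build boils down to a couple of commands, something like this (the package name, version, and install prefix here are illustrative, not our actual values):

```shell
# Vendor all gems into the project tree so the package is self-contained
bundle install --path vendor/bundle

# Build a Debian package from the app directory, vendor/bundle included.
# Run this on a machine that mirrors the target servers so native
# extensions are compiled for the right platform.
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp .
```

Installing the resulting .deb drops the app and its vendored gems under the prefix, and 'bundle exec' picks the gems up from vendor/bundle.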

Almost all of the issues described in the article have not been problems for us because of this approach. I used to use rvm as part of my build process; it was very brittle. I'm completely happy now; just wanted to share in case it helps someone else.

ownedthx | 10 years ago | on: Water Fluoridation May Not Prevent Cavities, Scientific Review Shows

Since there are US cities like Texarkana, AR that have never had fluoride in the water, they seem like a decent way to gather some data quickly and get to the bottom of this.

Anecdotally, I know a dentist who would work 2 days a week there as a sort of 'import-a-dentist' program, and she thought teeth were, in general, in much worse shape there than in Little Rock, AR (3 hours away). Not enough to go on, obviously.

ownedthx | 11 years ago | on: Ask HN: Are there any startups making desktop apps?

http://www.jamkazam.com - we are not yet VC-backed, but I hope so soon.

Because latency is at an absolute premium when playing music in real-time across the internet, we've focused first on the desktop, where we have the most control ... but we still find a ton of challenges.

If you saw this recent article about Android and audio latency, you can see why there are challenges in the mobile space: https://news.ycombinator.com/item?id=9386994

If you were to try to build a web-only version of JamKazam, WebRTC would be your best bet, but it doesn't have low enough latency, either.

ownedthx | 11 years ago | on: I was interviewed by Fog Creek

You raise a good point, but for me it only solidifies that this is a viewpoint born from inexperience or narrow experience. If you only have a bunch of .NET experience, or only a bunch of Node experience, it isn't surprising when you judge others poorly for not liking the same OS as you do. Age and more experience will cure that.

ownedthx | 11 years ago | on: Ask HN: What startups are working on hard, technically challenging problems?

At JamKazam, we are trying to enable real-time play of music over the internet. While we can't magically make the internet better, we have spent a ton of time getting latency as low as we can in Windows, Mac OSX, Linux, and on custom hardware (still under development).

Just a week ago we released our 'distributed metronome', which lets musicians hear a synchronized metronome regardless of how much internet latency there might be.
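
The core idea can be sketched roughly like this (a simplified illustration, not our actual code; it assumes clients share a synchronized clock, e.g. via NTP, and the function name is made up):

```ruby
# Each client computes the same beat grid from a common epoch, so the
# metronomes line up no matter how much network latency separates them.
def next_beat_time(bpm, now, epoch = 0.0)
  period = 60.0 / bpm                          # seconds per beat
  beats_done = ((now - epoch) / period).floor  # whole beats already elapsed
  epoch + (beats_done + 1) * period            # absolute time of the next beat
end
```

Each client then schedules its next click locally for that absolute time; no audio has to cross the network for the metronomes to stay in sync.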

ownedthx | 11 years ago | on: Why One Programmer Doesn’t Do DevOps Anymore

Replace DevOps with QA and you have an equally valid article.

Dedicated QA teams should be avoided unless truly necessary. Otherwise, you have the same problem of developers throwing under-tested code over the wall to a separate group of people who are smart but treated poorly.

A QA guy who loves to write automation gets the same promises... "oh, you'll mostly be programming; manually test maybe once a month". But once they're in the door, it's the exact opposite.

Developers should all be passionate about QA (read as: the quality of their product), as well as how their product is actually deployed (read as: executed in the most important of environments--production!).

ownedthx | 11 years ago | on: JamBlaster – Play music in real time with others from home

I work at JamKazam--

This is running the Linux real time kernel to help us achieve extremely low latency and I/O jitter. We've had to do a ton of fine tuning to get the latency as low as it is.

Our website is built on Ruby on Rails backed by PostgreSQL. We are using Resque for asynchronous jobs, and a WebSocket/RabbitMQ solution to handle events in the browser and elsewhere in the backend.

Our JavaScript is a homegrown mess (jQuery and, ahem, 'business logic'). Having kept up with the latest web tech mostly through HN, I'd take the React/Flux plunge if starting from scratch; I'm still looking for an excuse to start using them anyway, even if just for part of the site!

Anyway, we are very excited to be starting the Kickstarter for the JamBlaster, because it's your best way to get the latency low enough to have a really good jam session.

/me fingers crossed

ownedthx | 11 years ago | on: How Paul Graham Is Wrong

I have completely 180'ed on this topic. I used to think that to be a successful startup, everyone had to be in the same room, ideally at one big huge desk, etc.

After working at my current startup, I realize I was wrong. We are an entirely remote operation, and have been since day 1. A few of us are in Austin and rarely meet face-to-face, and when we do it's more of an 'oh yeah, you actually exist' meeting than a hash-it-out sort of meeting.

I think the reason we have been successful is because being remote has forced us to write things down, in our wiki (in the form of product specs) and in JIRA (in the form of specific features and bug fixes). With just those two forms of written communication, we've solved 95% of our communication needs. We almost never skype or use video conferencing... it's just not needed.

The other reason we are successful is experience. We've all done startups before, so we know the drill. I can't emphasize enough how important this is, and it deserves more explanation (unfortunately I can't muster the amount of typing at the moment).

In the end, we can focus on getting work done... without the distractions of office chatter, commuting, and long lunches.

ownedthx | 11 years ago | on: Simplicity and Utility, or Why SOAP Lost

In my experience, it wasn't that. Larger companies place value on supporting standards, because it's an important checkbox in their marketing, and SOAP is definitely a standard.

ownedthx | 11 years ago | on: Simplicity and Utility, or Why SOAP Lost

The fact that SOAP is not reasonably supported in a browser was a huge reason it fell down. When SOAP first came on the scene, complicated AJAX-based applications were not typical. But as more and more JSON/HTTP APIs emerged, and browsers became powerful enough for rich apps to be built in them, the needless complexity of SOAP became harder and harder to justify.

Another major issue with SOAP is that almost all of the popular tools would generate classes from a WSDL. This creates a toolchain issue that is readily solved by someone experienced, but can really suck if you are new to the idea of generated code working its way into your project.

Much worse was WSDL versioning in conjunction with these class-generating tools. If the API never broke backwards compatibility, you would be OK: you could just use the newer class representations, counting on the service and tools to deal with null fields appropriately (not always true, unfortunately). But if version bumps of an API/WSDL broke backwards compatibility, you had to maintain separate class hierarchies for different versions of the WSDL; what an intense headache.

Contrast that with REST APIs on the web. Without a formal schema or class-centric tooling, client libraries would often let the author stuff in the params themselves and let the serializer build the body from them. Yes, your API is less formal, but that isn't a big problem in practice, and it offers enormous flexibility in dealing with one-off versioning or interop issues.
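
To illustrate that loose style (a hypothetical sketch, not any particular library): a schema-less client just serializes whatever hash the caller supplies, so a field added in v2 of an API is just another key, with no regenerated classes.

```ruby
require "json"

# A "loose" REST client helper: the caller owns the params, the library
# just serializes them into the request body.
def build_request_body(params)
  JSON.generate(params)
end

build_request_body("name" => "gadget")                    # v1 fields
build_request_body("name" => "gadget", "color" => "red")  # v2 adds a field
```

Compare that with WSDL tooling, where the new "color" field would mean regenerating (and possibly forking) a class hierarchy.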

As someone else mentioned, some platforms couldn't interop with others (differences in simple stuff like nullable fields or primitives existed all over; a total nightmare), and some features of WSDLs didn't translate well to certain languages.

A great WSDL author would know to make their WSDL as simple as possible, because they had spent time with various language toolchains and knew the limitations out there. But that's asking way too much; it isn't realistic to expect someone to spend all their time understanding all the ways someone in language XYZ might consume their WSDL.

ownedthx | 11 years ago | on: The JavaScript Trap

The fallacy here is the assumption that there is value in JavaScript in any form. As a long-time web programmer, I can tell you I couldn't care less about the freeness of your JavaScript. I don't want to see it, read it, or spend one second trying to understand your smelly bowl of web code.

Fight some other battle, gnu!

ownedthx | 11 years ago | on: Yahoo Mail moving to React

Moving to node can be in part a decision to attract fresh talent, as well as keep the current team interested and motivated.

ownedthx | 11 years ago | on: On the phenomenon of bullshit jobs

I work 40 hours a week at a minimum, every week, and have always done so. I'm a programmer primarily, but also do other related things (build, devops, customer support). But if I have a week with nothing but programming, I will still work a minimum of 40 hours a week.

The idea that you can't do more than 15-20 hours a week is not true.

I do love my job. Perhaps that's why I work this much.

I admit that I have a hard time relating to this general idea I see over and over about the need to fill up time with non-work. Also, over the years, I've met my fair share of those who also work just as hard.

ownedthx | 11 years ago | on: Show HN: JamKazam – Play Music with Others Online and in Real-time

JamKazam allows musicians to play with other musicians over the internet in real-time.

There are a number of challenges we've faced so far, but the biggest two are latency and audio gear setup. Latency doesn't only come from the internet; your audio gear and OS can add quite a bit of delay as well. Our approach to latency is to measure well and help users reduce it where they can (and of course to optimize our own software to add as little latency as possible).

And we continue to refine how you hook up your audio gear to the JamKazam PC/Mac application. Some gear works great. Some... gives us quite a bit of trouble.

Anyway, the product is usable, although we have much, much more to improve on.

If you're curious, we're using Ruby on Rails with PostgreSQL, hosted mostly on Linode. We use WebSockets + RabbitMQ to help route messages between clients and browsers. Control messages go over websockets when initially establishing media between two parties; the audio then goes directly P2P over UDP to help with latency, since hair-pinning audio through the server is not your best bet.
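
Once the control channel has exchanged peer addresses, the media path is just plain UDP sockets. A toy loopback sketch (not our actual code; loopback stands in for the remote peer):

```ruby
require "socket"

# After the websocket control channel exchanges addresses, audio frames
# travel peer-to-peer over UDP instead of hair-pinning through the server.
receiver = UDPSocket.new
receiver.bind("127.0.0.1", 0)     # the "remote peer", on an ephemeral port
port = receiver.addr[1]

sender = UDPSocket.new
sender.send("audio-frame-0001", 0, "127.0.0.1", port)

frame, _addr = receiver.recvfrom(64)
sender.close
```

In the real thing the frames are compressed audio and you're fighting packet loss and jitter, but the transport really is this direct.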

You can check out some videos...

Musicians jamming: https://www.youtube.com/watch?v=I2reeNKtRjg

Overview: https://www.youtube.com/watch?v=ylYcvTY9CVo

Getting Started: https://www.youtube.com/watch?v=DBo--aj_P1w

ownedthx | 11 years ago | on: The State of ZFS on Linux

Thanks for the reply.

Regarding #2: On OpenIndiana, we first started with concurrent zfs commands and ruined, I think, the whole pool (maybe it wasn't that drastic, but it was still a disaster scenario where key data would be lost). I couldn't believe it.

I was asking anyone who knew anything... 'so if two admins were logged in at the same time and made two zvols, they could basically ruin their filesystem'? No one knew for sure. Crazy stuff.

Anyway, I'm quite glad that's safe now.

ownedthx | 11 years ago | on: The State of ZFS on Linux

At a previous job, we built a proof-of-concept Sinatra service (i.e., HTTP/RESTful service) that would, on a certain API call, clone from a specified snapshot, and also create an iscsi target to that new clone. This was on OpenIndiana initially, then some other variant of that OS as a second attempt.

The client making the HTTP request was iPXE; so, every time the machine booted, you'd get yourself a fresh clone + iSCSI target. We'd then mount that iSCSI target in iPXE, which would hand it off to the OS, and away you'd go.

The fundamental problem we hit was a linear delay for every new clone; the delay seemed to be roughly 'number of clones * 0.05 seconds'. This was on extremely fast hardware. It was the zfs clone command itself that was going too slowly.

Around 500 clones, we'd notice 10-20 second delays. The reason that hurt so badly is that, to our understanding, it wasn't safe to run ZFS or iSCSI commands in parallel; the Sinatra service was responsible for serializing all ZFS/iSCSI commands.
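
The serialization itself was nothing fancy; roughly this shape (illustrative Ruby with made-up command strings, not the actual service code):

```ruby
require "open3"

# One global lock funnels every zfs/iscsi shell command through the
# service one at a time, since concurrent zfs commands were unsafe
# on that platform.
ZFS_LOCK = Mutex.new

def run_serialized(cmd)
  ZFS_LOCK.synchronize do
    out, status = Open3.capture2e(cmd)
    raise "command failed: #{cmd}\n#{out}" unless status.success?
    out
  end
end

# e.g. run_serialized("zfs clone tank/base@snap tank/clone42")
```

Which of course means every boot's clone request waits in line behind the slow clone commands, so the per-clone delay compounds.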

So my question to the author:

1) Does this per-clone delay ring familiar to you? Does ZFS on Linux have the same issue? It was a killer for us, and I eventually found a thread implying it would never get fixed in Solaris-land.

2) Can you execute concurrent ZFS CLI commands on the OS? Or is that dangerous like we found it to be on Solaris?
