supar's comments

supar | 13 years ago | on: Ask HN: How to contribute to open source projects?

If you are not familiar with Linux (for example, you're working on Windows or mostly doing web development), it could well be the case. Mac OS is friendlier, but I've certainly seen many developers who never scratched the surface and went straight for XCode/ObjC/iOS. In that sense, you can build a career without ever considering OSS development/usage.

What's best for somebody who, as of now, is wondering which project they should be looking at? If you don't know what you're looking for, any advice is just as good as a Google or GitHub search (i.e., useless).

If you start by having fun, even by publishing your random projects, you will be dragged in by dependencies (ironically). I would rather recommend choosing something fun to work on than a random project to look at.

supar | 13 years ago | on: Ask HN: How to contribute to open source projects?

I cannot believe that people can have that question. Or, well, I do believe it, but then my second question usually is: have you ever used Linux/BSD or any OSS project in the past?

I don't want to be rude at all, but the suggestion of "contributing to an OSS project" only makes sense if you've already had to work with or use OSS software. Because if you had used one of these projects, you would probably already understand its most important social aspect. Coding, IMHO, is secondary (and not necessary at all).

Thus my suggestion would be: if you've never dealt with an OSS project before, find some OSS software you genuinely like, try to use it (and use it well), and follow its development. Once you have, you will certainly know how to contribute. There's nothing more to it.

If you are already familiar with OSS but so far have never found anything "interesting", the best thing to do IMHO is to start your own project. Release something you made that you would like others to use.

Most importantly, do all of that for fun. Don't do it because you have to or because they said "it would be helpful". Helpful for what? Coding style/quality varies wildly, as does the community around each project.

The biggest difference between a "professional" project and an OSS project is exactly this: people work on OSS projects for many reasons, but mostly for fun or passion. Some projects strive for quality, some for functionality, and some just scratch an "itch" somebody had. Understanding the social aspect, again, is the biggest differentiating factor.

There's no point in contributing to OSS unless:

1) you released something that you'd like to maintain

2) you maintain something somebody else released

3) you're scratching an itch you have

4) you're having fun coding (or any other activity around said project)

supar | 13 years ago | on: The AI Systems of "Left 4 Dead" [pdf]

Not sure what you mean by "map data". You can certainly keep a list of props that are in the way and represent them as a mesh. It's certainly easier if you have only one prop, but you have to account for the fact that they may stack together in any way -- and suddenly the geometry configuration becomes non-trivial again.
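To make the stacking point concrete, here's a toy sketch (Python, all names made up) of why touching props stop being simple, individually-handled obstacles: overlapping boxes have to be merged into compound shapes whose outlines are no longer trivial.

    # Axis-aligned prop boxes as (minx, miny, maxx, maxy). Once boxes
    # overlap or stack, walkable space is no longer "floor minus one
    # rectangle per prop": touching props form one compound obstacle.

    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def merge_stacks(boxes):
        groups = []
        for box in boxes:
            touching = [g for g in groups if any(overlaps(box, b) for b in g)]
            merged = [box] + [b for g in touching for b in g]
            groups = [g for g in groups if g not in touching]
            groups.append(merged)
        return groups

    # Two stacked crates and one lone barrel -> two obstacle groups.
    print(merge_stacks([(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 11, 11)]))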

supar | 13 years ago | on: Best ultrabook for linux? [April 2013 edition]

I got an HP EliteBook Folio 9470m at work (not my personal choice), but I thought I should share my impressions.

I was genuinely impressed by the fact that this is the only ultrabook I've seen with a swappable battery. Yes!

The ultrabook in itself is fine, and the build quality is excellent: 8 GB of RAM, 256 GB SSD, HD4000 graphics. I wasn't able to boot the latest Ubuntu with EFI, but "hybrid" boot works just fine. There's basically not a lot of hardware variation among ultrabooks, so everything works more or less correctly. I was personally able to work for 5 hours on battery (I'm a developer, so you can imagine my workload as somewhat heavier than average browsing).

I do have some remarks:

- The keyboard is generally good enough, but I've always found HP keyboards sloppy compared to ThinkPads, and that holds for this ultrabook too.

- The touchpad is OK (Synaptics), but the touchpad buttons are crap, like on every HP I've ever used. HP doesn't seem to get buttons: hearing the click doesn't mean you have actually clicked. Wake up, HP -- I've been using EliteBooks since the '90s and this HAS NOT changed!

- Not a fan of the "nipple" in the middle of the keyboard; it wastes space for the keys.

- Useless fingerprint scanner, like most HPs.

These points are moot if you are fine with HPs in general, since this is absolutely equal to any other HP EliteBook.

- Some problems with the latest iwlwifi driver (a few panics during network scanning in recent weeks), though that's hardly an HP-only problem.

It comes preloaded with Windows 8, which was easy to zap. Battery run-time under Linux and Win8 was equal for me, contrary to what other people mention. I used Windows 8 for about two weeks (to give it a spin), using Visual Studio, etc. 5 hours of work on battery is the longest I've ever had for a laptop so far. Being able to carry a spare battery is a big plus.

supar | 13 years ago | on: Global Internet slows after 'biggest attack in history'

I've been designing honeypots/traps as triggers for mail filtering infrastructures for years, and this is a very hard process to automate. It started as something you could watch from time to time in the late '90s, maybe slap in a DNSBL or two, but by now it has become a bloody nightmare. I remember when, at some point, almost everybody started to "reinvent" greylisting by convergence, even before it was called that. nanae (via NNTP) was always a good read.

You constantly have to check whether spammers have noticed your honeypots, since once they do they can avoid them or even use them against you (the bigger you get, the more sophisticated the attackers get too). You have to use tagged email addresses that can be linked back to the offenders, methods to probe address ranges multiple times before validating them, and ways to automate unlisting as well. False positives are basically unavoidable at some point, partly because spammers like to rotate their addresses based on the addresses' previous owners, or operate from known datacenters that are "too big to be blocked" wholesale for exactly this reason. If they get to know one of your trigger addresses, a common practice is to generate spam from a "safe" range into the trigger address, in an attempt to produce a false positive and, of course, backlash. It's sickening.
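Roughly, the tagged-address part can be as simple as this sketch (Python; the secret, names and domain are all made up): derive each trap address from an HMAC over whoever you hand it to, so mail hitting it can be traced back.

    import hmac
    import hashlib

    SECRET = b"site-local secret"     # hypothetical per-site key
    DOMAIN = "traps.example.org"      # hypothetical trap domain

    def trap_address(offender_id):
        # Derive a unique trap address; mail arriving here can later
        # be linked back to whoever this address was issued to.
        tag = hmac.new(SECRET, offender_id.encode(),
                       hashlib.sha256).hexdigest()[:12]
        return "contact-%s@%s" % (tag, DOMAIN)

    def linked_offender(address, known_ids):
        # Find which known party a trap address was issued to.
        for oid in known_ids:
            if hmac.compare_digest(trap_address(oid), address):
                return oid
        return None

    addr = trap_address("harvester-42")
    print(addr, "->", linked_offender(addr, ["harvester-41", "harvester-42"]))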

Exchanging digests of message contents cooperatively among multiple servers became a good indicator of spamminess (Vipul's Razor), though you would catch legitimate bulk email in the process, and spammers quickly adapted by randomizing message contents, so the method rapidly became ineffective.
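The core of the digest-exchange idea fits in a few lines (a sketch; the real Razor grew fuzzier signatures precisely because exact digests were defeated): normalize the body, hash it, and count how many cooperating reporters saw the same digest.

    import hashlib
    import re
    from collections import Counter

    # Hypothetical shared tally: in reality this lives across
    # cooperating servers, not in one process.
    seen = Counter()

    def body_digest(body):
        # Digest of a lightly normalized body, so trivial whitespace
        # or case changes still map to the same digest.
        normalized = re.sub(r"\s+", " ", body).strip().lower()
        return hashlib.sha1(normalized.encode()).hexdigest()

    def report(body):
        seen[body_digest(body)] += 1

    def looks_bulk(body, threshold=50):
        # Many independent sightings of one digest = bulk mail.
        # Randomizing each copy defeats this, which is what happened.
        return seen[body_digest(body)] >= threshold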

The real problem here is that these assholes don't care as long as they can deliver the message; that's the only metric they have and care about. Maybe you don't care either, because you can filter later, but that's a huge volume of trash that needs to be shoveled around. I have witnessed many cases in organizations of more than a hundred employees where several servers ran 24/7 just to churn messages through "dspam" or similar filters before delivery to the final mailbox. That's a huge, measurable cost in wasted power, all because of a couple of assholes.

supar | 13 years ago | on: Global Internet slows after 'biggest attack in history'

If you have ever tried to run a mail server at all, you would recognize that you have a choice in using Spamhaus (or any other DNSBL).

These people have put in place a high-quality method to discriminate spammers. I've been around since their beginnings, and their list has been incredibly successful (very high quality) for me, compared to NJABL and other "dynamic" lists based on honeypots, or ones driven entirely by backlash (say hi to SpamCop).

You would also recognize that you can just as well tag the message with a "likely spamminess" score for use further along the chain. And people would still complain that their "legitimate" message was tagged as spam by SOMEBODY, while they wouldn't complain if it was tagged as spam by a learning algorithm.
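For the record, the tagging variant is trivial to bolt on. A minimal sketch (Python; the header name is made up, zen.spamhaus.org is the real zone and 127.0.0.2 the conventional always-listed test entry):

    import socket

    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        # A DNSBL is just DNS: reverse the IPv4 octets, query them
        # under the list's zone; any A record means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    def spam_headers(client_ip):
        # Tag instead of bounce: add a (hypothetical) header and let
        # filters further down the chain weigh it.
        return {"X-DNSBL-Hit": "yes" if dnsbl_listed(client_ip) else "no"}

    print(spam_headers("127.0.0.2"))   # the conventional test address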

In short, people would complain anyway, except that Spamhaus is doing real damage to the spammers (as in "the mail really didn't go through"), reducing their revenue and thus forcing them to come out with such measures. Not that they will accomplish anything anyway. Spamhaus has helped stop a lot of known/professional spammers, and I applaud them for that.

supar | 13 years ago | on: The MS Surface Pro

In this area I was hoping for the NoteSlate to succeed, but it seems it went vaporware.

For note-taking the refresh speed is not so critical, as the areas to refresh are limited to the writing spot. You would probably notice lag, but if you've ever drawn with heavyweight painting/retouching programs, lag is sometimes introduced by processing and you just get used to it (you stop expecting immediate results and keep going).

In this case I would still take a slight lag in exchange for avoiding a glass screen.

supar | 13 years ago | on: The MS Surface Pro

The N8000 is a different class of device: a low-spec, Android-based phone.

supar | 13 years ago | on: The MS Surface Pro

Can you give us some feedback on how the screen hinge flips? Is there a slot for the pen?

supar | 13 years ago | on: The MS Surface Pro

Are there any digitizer users here?

I use a Wacom digitizer daily for notes and sketches instead of pen & paper. My wet dream is a pressure- and tilt-sensitive, e-ink-based device, but it looks like the Surface Pro is the closest you can currently get -- and this is a good review.

If you want a portable device (laptop/ultrabook/tablet) with a good digitizer that you can actually use (that is, Wacom-based), your options are actually very few.

There is the Lifebook T902, or the ThinkPad X230T. Did I miss anything else? Both are convertible laptops, both are quite heavy, have medium-to-poor battery life (even when extended with the additional battery), and a lower-DPI screen. I would have expected higher-end graphics on those laptops, but the integrated HD 4000 is ridiculous when you consider you basically get the same on an ultrabook.

Not to mention that the price range is simply off. The Surface Pro is way cheaper.

I used an earlier version of the Lifebook T902. It's actually better than a separate digitizer, which takes up useless space on the desk, but it's still cumbersome. You cannot draw unless you flip the screen (the position is odd otherwise). It's really heavy. A clipboard with paper is better all around.

There are two market segments served by this usage pattern: on-the-go artists, and cheap Cintiq replacements. Drawing on a Cintiq is just awesome, but Wacom basically has a monopoly and the prices are simply unjustified. Even the Intuos line is, IMHO, overpriced by at least a 2x factor. The sad reality is that they have absolutely no real competition. I tried several N-Trig-based digitizers (most recently the Vaio Duo 11), and they just suck. The tracking is worse, with many jumps in just a few hours of testing, not to mention that the pressure sensitivity is lower too (quite visible when you're drawing strokes, unless the software interpolates them for you).

Just look at the missed opportunities! The Taichi 21 and VAIO Duo 11 are cool, but they use N-Trig. The keyboard on the Lenovo Yoga is awesome, but there's no digitizer. The Dell XPS 12 looks stunning, but again a missed opportunity: it was rumored to ship with a Wacom layer, but in the end it didn't.

The only downside of the Surface is the keyboard. I tried the flip keyboard of the Surface RT and I can only hope that the keyboard of the Pro is different, because it sucks: missed keys, zero feedback. Admittedly, it's better than typing on an on-screen keyboard, but the Taichi 21 or Dell XPS 12 approaches are way better.

On a sad note, the replaceable battery concept is gone on all these models. You know, I would settle for lower battery life if I could just have 2 or 3 batteries. I was actually shocked to find that HP, at least, offers the EliteBook Folio 9470m, which has a replaceable battery in a thin format (the ultrabook is awesome), so there are really no excuses.

supar | 13 years ago | on: LibreOffice: cleaning and re-factoring a giant code-base [video]

I think this scenario routinely happens at many companies. You just don't hear public stories about it, because in the end what happens is the same for all software: continuous transition/refactoring. I know I've worked on many projects in these conditions, though not of that size.

With more and more experience of that kind, working on stinking code-bases, I've come to a conclusion: while in the past I might have thought that trashing the code and starting from scratch would help, now I would approach most problems by pushing new code in the form that I want and transitioning the rest as changes are required.
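In code, the approach is roughly this (a sketch, every name invented): new call sites target the interface you actually want, and a thin adapter quarantines the legacy path until each piece gets replaced.

    class Renderer:
        # The shape we want the code to have going forward.
        def draw(self, shape):
            raise NotImplementedError

    class OldEngine:
        # Stand-in for the crufty code nobody wants to touch.
        def render_object(self, kind, args):
            print("legacy render:", kind, args)

    class LegacyRendererAdapter(Renderer):
        # Wraps the old path; delete it once nothing depends on it.
        def __init__(self, legacy_engine):
            self._legacy = legacy_engine

        def draw(self, shape):
            # Translate into whatever awkward form the old engine
            # expects; the ugliness lives here, not at call sites.
            self._legacy.render_object(shape["kind"], shape.get("args", ()))

    renderer = LegacyRendererAdapter(OldEngine())
    renderer.draw({"kind": "rect", "args": (0, 0, 10, 10)})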

I've had projects that I built myself with great design care, but after 5-6 years of shifting requirements they also started to look like something you could have done a better job on by starting from scratch. The reality is that, in retrospect, all code is suboptimal.

supar | 13 years ago | on: Static site generation on python

Does anyone have experience with Pelican (possibly liquidink) and rest2web?

There are many static website generators, but I'm looking for a Python+ReST solution. I've been using rest2web a lot, and I really love its simplicity compared to the other solutions; rest2web is really straightforward. In the end, it's the python-docutils module that does most of the work anyway, while rest2web simply assembles the website structure.
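To show how little the generator itself has to do, here's docutils doing the heavy lifting (the snippet content is made up):

    from docutils.core import publish_parts

    SOURCE = """
    My page
    =======

    rest2web mostly stitches pages together; *docutils* renders them.
    """

    # publish_parts returns rendered fragments; a site generator just
    # slots "html_title" and "body" into its own templates.
    parts = publish_parts(source=SOURCE, writer_name="html")
    print(parts["html_title"])
    print(parts["body"])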

The only downside is that rest2web lacks a bit of polish, and I really wish it came with the ability to generate RSS feeds for a particular tree or tag. I was thinking about writing a plugin, but I'm unsure.

Pelican seems to be built for exactly this purpose. Actually, Pelican seems to target mostly blogs, while I just want "a feed of changes" for a particular directory tree. I don't want a blog-turned-into-a-website approach.
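For reference, the plugin I have in mind is barely more than this sketch (paths, URL and limit are all placeholders): walk a tree, sort the ReST sources by mtime, emit RSS 2.0.

    import os
    from email.utils import formatdate
    from xml.sax.saxutils import escape

    def feed_for_tree(root, base_url, title, limit=20):
        # Minimal RSS 2.0 "feed of changes" for the .rst files under
        # root, newest first by modification time.
        found = []
        for dirpath, _, names in os.walk(root):
            for name in names:
                if name.endswith(".rst"):
                    path = os.path.join(dirpath, name)
                    found.append((os.path.getmtime(path),
                                  os.path.relpath(path, root)))
        found.sort(reverse=True)

        items = "".join(
            "<item><title>%s</title><link>%s/%s.html</link>"
            "<pubDate>%s</pubDate></item>"
            % (escape(rel), escape(base_url), escape(rel[:-4]),
               formatdate(mtime))
            for mtime, rel in found[:limit]
        )
        return ('<?xml version="1.0"?><rss version="2.0"><channel>'
                "<title>%s</title><link>%s</link>%s</channel></rss>"
                % (escape(title), escape(base_url), items))

    print(feed_for_tree("content/notes", "http://example.org/notes", "Notes"))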

Has anybody had this problem? I'm really looking for feedback from people who used rest2web and moved to Pelican/liquidink, or maybe the other way around. Figuring out the limitations of these tools requires a long time investment, and I can't really decide by just trying them out on toy pages.

supar | 13 years ago | on: Redline Smalltalk V1.0

I could say exactly the same of any source-to-C compiler and/or any system with decent C FFI. Unless there is some reason beyond the JVM itself, C has an even larger code base.

supar | 13 years ago | on: GNOME (et al): Rotting In Threes

I wouldn't normally care about the state of GNOME, but as a developer I find the state of affairs around GTK itself really sorry.

At some point during 2.x, GTK stopped being GIMP's toolkit and became part of GNOME. Fortunately it remained more or less self-contained, but that's no longer the case with GTK3.

As a user, I cringe at the usability and responsiveness of GTK3 applications. I really dislike what the built-in dialogs have become. I don't like how some widgets now work. No (easy) theming (as a reversed-color-theme user) is also a major letdown.

I always considered GTK a nice toolkit from the user's perspective, and up to GTK 1.x it was also considerably faster than Qt. GTK2 killed that, and at the same time dropped support for exotic OSes. I've had 1-line patches refused for pretty much the same reasons you read in the article.

But as a user I still preferred GTK because of some nice unix-centric features: tear-off menus (which disappeared at some point), column-based file browsers (again, killed later), user-customizable key bindings in any application (can you still do that? I don't even care anymore), low memory usage, fast engines, etc.

But now Qt is just superior on every front. Qt has native OO support and a nice, consistent, multi-platform API, whereas GTK3 still depends on the shitty GLib stack, which pretends to be an OO framework (and does a poor job of it). Ever got random GLib warnings from GTK applications on the console? My .xsession-errors is full of them. As a developer I just cringe at GTK. It was bad from day 1, but now it doesn't make sense anymore. Whenever I need a toolkit for a C-only program (where Qt or FLTK is not an option), I usually go for IUP. It's a shame that the looks of these toolkits don't integrate with the rest of the UI.

Right now I actively remove any GTK3 application: whenever an application gets rebuilt against GTK3, I switch to a Qt counterpart, which is usually more responsive and more stable over time. GTK didn't deserve this.

supar | 13 years ago | on: The Linux Graphics Stack

I wish I could see this kind of discussion on the wayland/name-your-compositor mailing lists. Today it would be easy to add a new attribute to the ICCCM so that applications that don't need foreground unredirection (i.e., those that actively want to be transparent, which are a minority) could signal it, but it seems that with the unification of the window manager and compositor we are losing this kind of extensibility.

I do realtime GL graphics for a living, and I'm really sad about the state of the Linux desktop, and especially worried about its future. GL performance on any modern distribution is worrisome due to the compositors, and disabling the compositor is the first thing I tell any customer lamenting performance issues to do. Being unable to set vsync on a per-window/per-context basis is a really major problem, and the performance hit you take is unacceptable. It seems that removing vsync is all the rage these days for games and toys, yet tearing is unacceptable in any other context, especially when you can actually get triple buffering with modern hardware.

Why are we optimizing for toys, when what matters are actual applications and everyday performance? People don't seem to realize that while the GPU can be used as a general-purpose computing unit, the usage pattern is vastly different: the process handling the screen often needs higher, or even total, control over it. You immediately notice video stutter and lagging frames.

But what do I know... I do the same with audio, trying to squeeze out decent latency with PortAudio and/or ALSA directly, while people just go through the PulseAudio pipeline...

supar | 13 years ago | on: Visual programming means anyone can be a coder

Also, "visual languages" even when restricted to very specific domains (think of DSP blocks, a tried&tested domain where visual languages are abundantly abused) tend to get very messy already when only very simple logic is involved. I wouldn't classify simple IFS systems in recursive painting programs to be "programming" at all. Where's branching, for instance?

supar | 13 years ago | on: The New MacBook Pro: Unfixable, Unhackable, Untenable

I've recently been travelling with several friends on a long trip. It was very "fun" to see how you can no longer use any recent MacBook or an iPad as a "portable" computer, since you cannot swap the battery for a fresh one. Sitting next to a power outlet while waiting for the thing to recharge is also great fun when you know you could have simply pre-charged another battery.

Very sad in my opinion, as the swappable battery is essentially what makes an "appliance" portable. Recent MP3/music players, phones and pads have basically the same issue: you can't simply continue to use the device while the second battery is recharging.

supar | 14 years ago | on: Linux 3.4 kernel released

SGI's IRIX has had this ability for decades. Of course, as others have stated, the main reason is to conserve both memory (for pointers) and, more importantly, bus bandwidth (a major problem in MPI/SMP systems, especially those IRIX supported with 128+ CPUs in a NUMA configuration).

There are still many features where Linux is just playing catch-up with what commercial unix kernels had decades ago.

supar | 14 years ago | on: FreeBSD 10 to use Clang, GCC will be deprecated

GCC's speed advantage is actually quite substantial in math/numerical code (which, in turn, is still not as fast as Intel's ICC). Moreover, GCC supports OpenMP, whereas Clang does not; this alone makes several scientific software packages impossible to build with Clang.

It's important to note, though, that it's only recently (starting with GCC 4.4) that I've started to see some _improvement_ over previous GCC versions. GCC 3.x was known to be bloated, buggy, and to emit poor code for almost every architecture. Open64 was superior in _all_ regards up to GCC 4.2/4.3, in my opinion.

Clearly, the whole project greatly benefited from Clang's competition.

supar | 14 years ago | on: Why should I have written ZeroMQ in C, not C++

I agree. In fact, this is what I would have expected to read in the article.

The single major issue in building infrastructure in C++ is the enormous added complexity in the final library and its dependencies.

While you can avoid mangling and expose a "C" API by using ``extern "C"'' declarations, this mostly imposes a C-like, no-OOP API that doesn't really save much work compared to writing a "C" API in the first place.
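To see why, look at what a consumer of that extern "C" surface gets. A sketch from the ctypes side (library name and symbols are hypothetical): only plain functions and opaque pointers survive, so the C++ objects behind it get flattened into create/use/destroy calls anyway.

    import ctypes

    # Hypothetical shared library exposing an extern "C" facade
    # over a C++ implementation.
    lib = ctypes.CDLL("libzmqish.so")

    lib.thing_new.restype = ctypes.c_void_p
    lib.thing_send.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
    lib.thing_send.restype = ctypes.c_int
    lib.thing_free.argtypes = [ctypes.c_void_p]

    h = lib.thing_new()           # constructor becomes a factory function
    lib.thing_send(h, b"hello")   # methods take an explicit handle
    lib.thing_free(h)             # destructor becomes an explicit free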
