top | item 7654392

Ask HN: What is the most difficult tech/dev challenge you ever solved?

209 points | pauletienney | 12 years ago | reply

I feel like I just make CRUD apps. That's fine, since they are useful for my customers. But they are not technical challenges. So please tell me yours.

173 comments

[+] prutschman|12 years ago|reply
I got handed a custom mp3 encoder and asked if I could figure out why the output was too quiet. At the time I had essentially no DSP experience.

It seems the gain had been reduced to cover up another problem: a tonal hissing sound. Once I learned how polyphase filter banks work, I tracked the problem down to a premature optimization, namely the replacement of an integer divide by 2^n with a right shift.

Such a shift of a negative two's-complement integer rounds toward negative infinity instead of toward zero. This caused a slight DC bias within each sub-band filter. In all but the lowest band, that DC bias gets shifted up to a non-zero frequency.

I call the optimization above premature because fixing it only added one cycle per operation. Granted, this was a real-time MP3 encoder on an ARM7, but the cycles were there.
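To make the rounding difference concrete, here's a small sketch (Python standing in for the C semantics: Python's `>>` is an arithmetic floor shift, matching a two's-complement right shift, while `int(x / 2)` mimics C's truncating integer division):

```python
def shift_halve(x):
    # arithmetic right shift: rounds toward negative infinity
    return x >> 1

def trunc_halve(x):
    # C-style integer division by 2: rounds toward zero
    return int(x / 2)

# For a zero-mean signal, truncating division keeps the mean at zero,
# while the shift pushes every odd negative sample one step lower,
# introducing the slight negative DC offset described above.
samples = [-5, -4, -3, 3, 4, 5]
bias_after_shift = sum(shift_halve(x) for x in samples)   # -2
bias_after_trunc = sum(trunc_halve(x) for x in samples)   #  0
```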

[+] prutschman|12 years ago|reply
Oh, and the code comments were in Dutch, which I don't read.
[+] prutschman|12 years ago|reply
At a different gig I built a linux x264 batch transcode cluster that accepted, among other formats, Apple ProRes. (Quicktime under wine using xvfb, through avisynth, piping a yuv4mpeg stream out of the wine domain).

At a yet another gig I developed a method to differentiate male and female insects at 30m using a beam-steered high speed low res camera.

[+] bsenftner|12 years ago|reply
I was on the first Tiger Woods PGA team for the first PlayStation (the one that had the South Park 1st episode hidden on the disc). The PGA source was legacy code, having been ported and rewritten for every console to date. It was a serious rat's nest, with the compiled code far too large to fit into memory, so EA had developed their own code segment loader to enable their too-large-to-fit executables to run on the consoles.

I was put in charge of the "menu front end", the statistics tracking, and some of the AI logic. It took over a week just to read the logic and figure out WTF. The designs I had to implement were simply not possible with the existing framework, so I started over. I wrote a series of small finite state machines, fully documented their use in the source code, and then replaced the entirety of all the portions of the source code I was in charge of with my minuscule finite state machines and their paltry data. The segment loader was no longer needed for the front end, because I'd left 800K free (on a 1 MB system!).

I spoke with a Tiger Woods PGA developer about a decade later, and my code was still there being used. And a year later, EA had me do the same thing to the AIs for NCAA Football, where I finite-state-machined their AIs, clobbering the memory required down to about a sixth of what it was previously.
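The original EA state machines aren't public, but the table-driven pattern described can be sketched in a few lines (state and event names here are invented for illustration):

```python
# Hypothetical states/events for a menu front end; the real ones aren't public.
IDLE, MAIN_MENU, STATS, IN_GAME = range(4)

# The whole machine is one small transition table:
# (current state, event) -> next state
TRANSITIONS = {
    (IDLE, "boot"): MAIN_MENU,
    (MAIN_MENU, "stats"): STATS,
    (STATS, "back"): MAIN_MENU,
    (MAIN_MENU, "play"): IN_GAME,
    (IN_GAME, "quit"): MAIN_MENU,
}

def step(state, event):
    # unknown events leave the state unchanged
    return TRANSITIONS.get((state, event), state)
```

The appeal is exactly what the comment describes: the behavior lives in a tiny, documentable data table instead of being scattered across overlapping state variables and conditionals.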
[+] pauletienney|12 years ago|reply
Thanks for the story. I would be very interested to know how game "AI" is built. Is it very scripted? Is it more organic?
[+] RogerL|12 years ago|reply
Good job. I'm astonished at how just about every developer I have run across doesn't understand how to program with state machines, opting instead for a horrid rat's nest of millions (it feels like) of state variables, functions that modify different but overlapping sets of those variables, etc.
[+] rsmaniak|12 years ago|reply
Interesting...would you be able to give an example of the state machines you built and how you used them?
[+] kabdib|12 years ago|reply
Well, "difficult" usually goes away when you have lots of time, so I have to add ship pressure. If you have time, then hard goes away.

Making the Newton data stores stable, about three months before we shipped. The Newton had a flash-based object store instead of a file system, and the code was such that a power loss or a reset in the middle of an update would toast your data. I spent about six 100+ hour weeks writing a transaction system to make sure that users wouldn't lose data to a crash or a dead battery. I think I had to fix only a couple of bugs in that code, after a massive checkin.
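The Newton code isn't shown here, but the smallest building block of that kind of crash safety is the write-then-atomic-rename pattern; a sketch (generic, and far simpler than a flash object store with a real transaction log):

```python
import os

def atomic_update(path, data):
    """Crash-safe update sketch: write a temp file, fsync, then rename.
    The rename is atomic on POSIX, so a power loss leaves either the old
    or the new contents, never a torn mix. (A generic technique; the
    Newton's flash object store was far more elaborate.)"""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage first
    os.replace(tmp, path)     # then publish it atomically
```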

Then, making the audio pipeline for the Kinect stable enough for noise cancellation to work. I'd heard that doing isochronous audio was hard, and "yeah, yeah, sure," but I had no idea it was really hard until I'd shipped a system that used it, with tight constraints around latency variance. I worked with some really good people on this, and there were days when we were looking at Xbox hypervisor code, application code and even DMA traffic on the camera itself. Another three or four 100+ hour weeks, maybe three weeks before we shipped, changing scary stuff everywhere. I still remember a satisfying feeling when I discovered the exact buffer we needed to use as a clock root for audio (and it wasn't the obvious one).

[+] wrs|12 years ago|reply
And while you were doing that, I was finishing the indexed object store on top of that transactional layer. I spent at least a week starting the stress test, going to bed, waking up, reading the test log and fixing bugs, starting the test, going to bed... (Shouldn't have started with someone else's half-baked B-tree code!)

Somehow the Newton team managed to do a whole stack of things like that in a ridiculously short time, and then shipped them in read-only memory -- not Flash, kids, but for-keeps permanently masked ROM -- and it worked. I've never seen anything like it since.

[+] ciumonk|12 years ago|reply
Made an autonomous vehicle out of a VW Golf/Rabbit, in around 2 weeks.

Custom built hardware, including the actuators.

Custom built RTOS on micro nodes.

Driving via OpenCV: Hough transforms for lane detection, stereo vision and optical flow for obstacles, SURF etc. for traffic signs (don't ask, I was learning), on a stack of 2 laptops connected via gigabit ethernet.

Nicest thing: I got the models trained mainly without moving the car, by pumping the framebuffer of two racing/car games, rFactor and GTA3, through glcs to OpenCV and controlling the games via uinput to make a virtual city to drive in.

Don't have a lot of pics, here's some HW:

http://imgur.com/9cfzbMv ( yes, that's an old cordless drill and an angle grinder head with a bespoke bike chain :) )

http://imgur.com/5x9T9gi (the piston is weirdly offset so I could still put my foot on the brake in emergencies, like that one time I smashed it into a fence...)

http://imgur.com/Ig7MaLT (notice how I kept the costs down to almost nothing, in this case the air distributor for the brakes made with Meccano and old air valves and a geared motor, since I didn't have the funds to buy anything)

[+] yankoff|12 years ago|reply
This is awesome. That would be so cool if you described the entire process in a blog post :)
[+] pan69|12 years ago|reply
I think it was around 1992/93, I was deep into graphics programming for games. The conventional way was to first draw the background and then draw everything on top of it (i.e. sprites etc.). However, this is wasting a lot of bandwidth since you can end up writing to the same video memory location multiple times.

I came up with an algorithm treating each scan line of the screen as a binary tree, which allowed me to keep track of which part of each scan line had already been written to. That meant I was able to build the screen up from front to back and visit each video memory location only once, so on a 320x200 screen only ever 64,000 bytes would be written to memory. With all the clipping etc. this was quite a complex beast, fully written in 286 assembler. In the end I think it made the overall graphics rendering about 20% to 25% faster.
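The binary tree itself isn't shown, but the core idea (track which spans of a scan line are already written, so front-to-back drawing touches each pixel once) can be sketched with a plain interval list instead of a tree:

```python
def insert_span(spans, start, end):
    """Try to draw [start, end) on one scan line, front to back.
    `spans` holds already-written (start, end) intervals.
    Returns the sub-spans that were still unwritten, and marks
    them as written. (A list-based sketch of the idea; the
    original used a binary tree per scan line for speed.)"""
    visible = []
    x = start
    for s, e in sorted(spans):
        if e <= x:
            continue          # entirely behind our current position
        if s >= end:
            break             # past the span we are drawing
        if s > x:
            visible.append((x, min(s, end)))  # gap before this interval
        x = max(x, e)         # skip over the already-written part
        if x >= end:
            break
    if x < end:
        visible.append((x, end))
    spans.extend(visible)
    return visible
```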

Edit: I don't have the code anymore. I lost all my "floppy disks" in a house move... :(

[+] danieltillett|12 years ago|reply
This sounds like an interview question :)

In my case I solved a problem in genomics that people had been trying to solve for around 20 years and which would have saved the human genome project billions of dollars - the only problem is I did it 10 years too late :(

Edit: If anyone is interested, I published the method in BMC Genomics a few years ago: http://www.biomedcentral.com/1471-2164/10/344

[+] MichaelGG|12 years ago|reply
Had a VoIP network with TB+ of SIP packets each day. Customers demanded resolution of problems from days ago, so having a time-travelling, content-aware PCAP store was necessary. At 50K packets/sec, piping tshark into MySQL with 11 indexes simply wouldn't cut it. We spent $$$$$$ on a commercial system that didn't work so well.

I slowly reinvented the basics of an information retrieval system (the curse of not having taken CS classes). Came up with the idea of a log-structured merge tree, made easier by this being a write-once database. Got some inspiration from the original Google paper. But most of it was just figuring out the least number of actions needed to retrieve info.

I published the core DB part, which maps an int64 (index hash value) to an int64 (docId) and stores in an efficient format (on our data, ~2.3 bits per packet). http://github.com/michaelgg/cidb - I couldn't find an existing library that has zero/low per-record overhead.

On a Q6600 and a single 7200RPM platter, I was able to index a TB of SIP a day and provide fairly quick flow reconstructions going back as far as disk space allowed. On a quad-core i7 parsing+indexing was over 1Gbps.

Company impact was huge, because we could suddenly troubleshoot things in minutes instead of hours. A few years later, after I was gone, I heard they were still using it. Neat.

This was all in F#, which presented fun challenges regarding optimization. Lots of unsafe code and manual memory management. SSE would improve varint encoding - the CLR generated code is a joke in comparison.
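The cidb format itself is on GitHub; as a generic illustration of the varint encoding mentioned here, the classic LEB128-style scheme looks like this (not necessarily the exact on-disk format cidb uses):

```python
def encode_varint(n):
    """LEB128-style varint for non-negative ints: 7 payload bits per
    byte, high bit set on every byte except the last. A generic
    illustration of the technique, not cidb's exact wire format."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def decode_varint(data):
    """Return (value, number of bytes consumed)."""
    n = shift = 0
    for i, b in enumerate(data):
        n |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            return n, i + 1
```

Small values cost one byte, which is how per-record overhead stays near zero; it's also the kind of tight bit-twiddling loop where SIMD beats compiler-generated scalar code, as the comment notes.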

Last month, I dropped this lib in as a replacement for storing routing information in a telecom app and dropped RAM requirements from 6GB to 1GB.

On the downside, I'm sure any compsci student could build a similar thing in a week, and probably they do so for school projects. But to a lot of app-level developers, this kind of algorithmic work is sorta magic for some reason.

[+] smikhanov|12 years ago|reply
Sounds like a great tool -- I would certainly have used something like this during my telecom days. In the end, did you still use tshark to pipe input data into your database?
[+] jonmb|12 years ago|reply
I'm a senior compsci student at one of the largest (by # of students) universities in the nation. I assure you, most of my classmates could not do such a thing in a week!
[+] aaronsnoswell|12 years ago|reply
- At age 12, I taught myself C programming using [1] (I had never programmed before, but wanted to make games), then proceeded to write a 12,000 line 3D OpenGL Game Engine from scratch, using the NeHe tutorials [2] as guides. It took me three years. The final program ran on Windows XP and 98, could import 3D models from Autodesk 3D Studio, and had a 3D asteroids style game demo. I used Milkshape [3] for 3D modelling, and Dev-C++ as my IDE [4].

- For an AI course at University, my partner and I developed a custom motion planning algorithm involving neural networks, RRTs [5] and POMDPs [6] in several thousand lines of Java. That was some of the craziest (and most fun) programming I've ever done. Our lecturer was Hanna Kurniawati [7], who is world famous (for some value of 'world') for her work on POMDPs, which was really cool.

[1] http://www.cprogramming.com/tutorial.html

[2] http://nehe.gamedev.net/

[3] http://chumbalum.swissquake.ch/

[4] http://www.bloodshed.net/devcpp.html

[5] http://en.wikipedia.org/wiki/Rapidly_exploring_random_tree

[6] http://en.wikipedia.org/wiki/Partially_observable_Markov_dec...

[7] http://robotics.itee.uq.edu.au/~hannakur/dokuwiki/doku.php?i...

[+] gambiting|12 years ago|reply
Lol, funnily enough, I did pretty much the exact same thing regarding learning C at the age of 12, ended up writing an OpenGL game for the PlayStation Portable which I released for the Neoflash competition:

http://www.neoflash.com/forum/index.php?topic=4924.0

https://www.youtube.com/watch?v=wAF9o8dsHfA

Since then programming games was always my obsession, and just a few months ago I fulfilled my lifelong dream and got a job as a gameplay programmer at Ubisoft.

[+] cyanoacry|12 years ago|reply
I made a bunch of LCD nametags for a project a while back[1], and ran into the strangest issue. Occasionally, the display would fail to start up and would display an all-blue screen.

My initial hunch was that it was a timing issue, and that I was seeing different behavior based on temperature. Even when I made things super slow to exclude timing, the behavior was inconsistent. Next on the list was excluding race conditions (maybe I'm not resetting in the right order, and getting lucky?).

At some point, it was 8am after an all-nighter of debugging this, and I hadn't been able to reproduce the bug. Lo and behold, just as I gave up and opened the blinds to get some sun, the problem started occurring.

Turns out that the display driver had light sensitivity issues. Since it was a cheap display[2], the backside of the driver IC was exposed (the epoxy fill didn't encapsulate it all the way, just the edges; you can see it as the white strip in the Digi-Key picture).

Putting a piece of tape over the IC solved the issue, and I didn't run into problems with the display again.

[1] PCB (business card sized): https://bitbucket.org/cyanoacry/ditch_day/src/3bf75f6bd2fba1...

Hardware picture: http://www.albertgural.com/blog/caltech-ditch-day-2013/image...

[2] http://www.digikey.com/product-detail/en/COG-C144MVGI-08/153...

[+] avian|12 years ago|reply
Interesting. A while ago I designed an OLED display shield for Arduino. Those displays also have a controller IC bonded to the flexible PCB and encased in transparent epoxy. I never noticed any sensitivity to light.

http://www.tablix.org/~avian/blog/articles/arduino_oled_shie...

I do remember, however, that they were pretty picky about the reset sequence (something the datasheet warned about several times).

[+] jgrahamc|12 years ago|reply
I've done quite a bit of stuff on the hardware/software boundary where things get hairy. I think the nastiest thing I've debugged there was a machine that HP [1] was making in about 1994 which had some native HP bus and an EISA bus hanging off it for expansion cards from PCs.

I was working for a company that made NICs which went into the EISA bus and we were seeing data corruption in this machine.

After a long, cold night in the Apollo works, an engineer from HP and I tracked it down to a timing problem on the EISA bus, where the 16 bits being sent were arriving in two 8-bit chunks, slightly delayed. Our NIC was spinning on a 16-bit word, looking for a change in the top 8 bits as a signal that the data had arrived. We'd then read all 16 bits, but the other 8 bits weren't ready yet.

Luckily, HP made lots of test equipment and getting a logic analyzer with 32 inputs, an in circuit emulator for the CPU, and some logic probes was easy...

[1] I think it was an HP 9000 http://en.wikipedia.org/wiki/HP_9000 being made at the old Apollo works outside Boston.

[+] lcrs|12 years ago|reply
Had many terabytes of footage of events that couldn't be recreated, shot with cameras that turned out to have broken firmware. When reading out the sensor, the ADC would get out of sync with the shift register, resulting in adjacent pixels merging into each other or being skipped, in a different pattern on each frame. The result was appalling pictures. I figured out how to re-bayer the image, then, using some frequency-domain magic on a picture interpolated from the green photosites only, determined the shifting pattern in various areas of the chip. We could then automatically gain individual pixels up and down and remove the effect, resulting in a perfect image. I went and saw the finished film at the biggest cinema I could find and didn't spot a single pixel wrong. I dread to think what might have happened if I'd tried to fix it in a more traditional "just paint it out" manner...
[+] ixmatus|12 years ago|reply
Writing distributed and highly concurrent software in Erlang to scrape Google's search results, with localization, tens of millions of times per day.

I don't work on that stuff anymore, but it definitely challenged my problem solving abilities to:

A) Learn Erlang. B) Learn how to write solid distributed software in Erlang. C) Figure out how to work around Google's temp-banning policies using IP balancing, captcha solving to produce cookies to balance connections on, and also how to localize the searches for geographic accuracy.

I don't like working on projects that are actively pitting me against someone, though, so I'm happy to not be working on that. I now write scalable software for my energy-focused startup; we receive energy data from homes in near real time, which has its own challenges.

[+] sireat|12 years ago|reply
Sounds very challenging, yet also very black-hattish on a massive scale (not that I have much love for Google).

That is the problem: many of the most interesting projects have some moral ambiguity (military, financial, etc.).

While one is getting paid, it is very easy to justify, or not even think about, where the money comes from (cue the Sinclair quote).

[+] dewitt|12 years ago|reply
> I don't like working on projects that are actively pitting me against someone though.

I'm surprised you also didn't mention the ethical problems with trying to take something (search results) without permission.

Your new project sounds awesome, though. : )

[+] noir_lord|12 years ago|reply
Implementing the Solar Thermal calculations from "The Government’s Standard Assessment Procedure for Energy Rating of Dwellings - 2012"

http://imgur.com/a/LiCxU ( a tiny part, full thing is 172 pages and I needed about 35 of them).

I'm developing software to help the MCS-accredited renewable installers in the UK. I had planned to buy in the API that does the calcs, so I duly purchased it and did some quick testing... then got the documentation and, uh oh, this doesn't match the real thing! I rang them up: "oh yeah, we're getting out of doing the API as our competitors are using it against us".

Oh shit.

I'm not a mathematician (I got a B in my GCSE Maths for Christ's sake) and now I have to implement code that works out the solar irradiation using tilt, latitude, solar_declination, a dozen look up tables, some hairy trigonometry

Stuff that looks like this :-

    A = k1 × sin³(p/2) + k2 × sin²(p/2) + k3 × sin(p/2)
    B = k4 × sin³(p/2) + k5 × sin²(p/2) + k6 × sin(p/2)
    C = k7 × sin³(p/2) + k8 × sin²(p/2) + k9 × sin(p/2) + 1
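Converted to code, the polynomials themselves are straightforward; a sketch (p is the collector tilt, and the nine k coefficients are placeholders here, the real values coming from the SAP 2012 lookup tables):

```python
import math

def sap_abc(p, k):
    """Evaluate the A/B/C polynomials in sin(p/2), where p is the
    tilt angle in radians and k is a nine-coefficient row.
    The k values are placeholders; the real ones live in the
    SAP 2012 lookup tables."""
    s = math.sin(p / 2)
    A = k[0] * s**3 + k[1] * s**2 + k[2] * s
    B = k[3] * s**3 + k[4] * s**2 + k[5] * s
    C = k[6] * s**3 + k[7] * s**2 + k[8] * s + 1
    return A, B, C
```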
Quite frankly, were it not for the IPython notebook allowing me to convert the math to Python and play with it to figure out what was going on, I don't know if I could have figured it out.

As it was it took me two weeks to implement (about the most stressful two weeks of my life).

I'd love to post the code, but it represents a significant amount of work, and there are competitors in the market; it would be a big hand up to at least one I know of.

[+] NAFV_P|12 years ago|reply
> I'm not a mathematician (I got a B in my GCSE Maths for Christ's sake) and now I have to implement code that works out the solar irradiation using tilt, latitude, solar_declination, a dozen look up tables, some hairy trigonometry

Same here, I got a B at GCSE maths. When I was 23 I enrolled for maths A-level at the local college, and I got an A. 16 years old is too young to assess someone's potential.

[+] wmkn|12 years ago|reply
When doing computer vision, the first thing done is usually camera calibration. The most common calibration technique is to take images of a physical object with known spatial properties (grid patterns are often used) and to extract 2D point positions from the resulting images (e.g. using a corner detector). Given the match between the extracted 2D points and the corresponding locations on the calibration grid, it is possible to determine camera and lens parameters such as focal length and distortion.
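The shape of that fitting problem can be illustrated with a toy version: a distortion-free pinhole model, and a least-squares estimate of just the focal length from known 3D-2D correspondences (real calibration solves for all the parameters and the grid pose at once, but the structure is the same):

```python
def project(points3d, f, cx, cy):
    """Pinhole projection with no distortion: the forward model that
    calibration inverts. f is the focal length in pixels and
    (cx, cy) the principal point."""
    return [(f * X / Z + cx, f * Y / Z + cy) for X, Y, Z in points3d]

def estimate_f(points3d, points2d, cx, cy):
    """Toy least-squares fit of the focal length, assuming the
    principal point is known and there is no distortion.
    An illustrative sketch, not the commenter's algorithm."""
    num = den = 0.0
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        num += (u - cx) * (X / Z) + (v - cy) * (Y / Z)
        den += (X / Z) ** 2 + (Y / Z) ** 2
    return num / den
</```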

An in-house camera calibration application at my company solved the correspondence problem by using an algorithm that looked at properties that are not invariant under projection (i.e. angles and distances). This made the calibration process extremely fragile. The algorithm often failed to detect the calibration grid even when the image was crystal clear, which made calibrating a camera a very frustrating endeavour.

Since it was an in-house application there was never much priority to fix it. Eventually I got so annoyed that I wrote a whole new real-time algorithm for the detection of arbitrarily-sized grids from a set of 2D points. The algorithm is capable of dealing with a significant number of outliers (even when they fall between the valid grid points), it can handle missing grid points, and it is not affected by perspective or non-linear lens distortion. In the end it took me longer than I had hoped, but the resulting algorithm is one that still makes me proud.

[+] kator|12 years ago|reply
First, to be clear: CRUD makes the world go around! I always find it interesting when people in our industry look down on work that makes users happy and gets us all paid so we can do other things. :-)

I've been in tech for more than 30 years. In that time I've done so many things it's hard to pick just a couple of interesting ones. I think some of the fun ones are really old, but a couple are very recent:

* In the good old days HDs would die and people would bring them to me hoping I could revive them. I've swapped logic cards from working HDs to recover data. One disk was soaked by a fire sprinkler system in a small business's office. I took it apart, rinsed it with distilled water, cleaned up a bunch of parts by hand, swapped logic boards, applied lubricants to various parts, and was able to spin it up just long enough to back up the customer's data. Don't try this at home with modern drives; get a recovery team to help if the customer can afford the cost.

* In 1988 I was approached by a military contractor with a GPS board they had built that needed a device driver for SCO Unix. They wanted 50ns realtime responses, and I had to keep explaining to them that SCO Unix was not an RT OS, but they were cornered into the OS by contract and had no choice. So I pushed on them: "why this strict timing?", and after many signed documents and stuff I don't really want to know, I showed them we could build a device driver interface that achieved their needed result. After many tests it turned out we were able to beat their timing requirements. The contractor was very happy; I went as far as to make it a SCO install disk etc., and they were able to make the install part of their build process. The strange part was that two years later I got a call from the contractor. They were in a panic because the driver didn't work with the latest version of SCO and they had to "urgently deploy a lot of these things" into an undisclosed "Middle Eastern territory". I told the guy I was really busy (I was); even so, the next day he showed up at my house (literally my house) with a machine in his car and a blank check and begged me. He said his job and a lot of others were on the line and they didn't have time for someone else to come up to speed and make the adjustments. I caved in and updated the drivers overnight; it wasn't too bad, just some changes in the kernel interfaces, and SCO had made some dumb changes to the way installs worked etc. I gave him the results late the next evening, and he thanked me and drove off into the sunset. I was never told exactly what they were used for... I've always wondered. And no, I didn't burn them; I charged my regular rate, but I did work about 18 hours on it in a 24-hour period.

* In recent years I've taken to building low-latency, high-scale systems. One such system must respond to upwards of 1.6 million queries per second across six data centers, and the response must be received by the requestor within 10ms. The reality is that with jitter, even on local networks, you really have about 7ms to respond. I wrote this system in Python, Java, C, C++ and Nginx/LuaJIT. Each time I re-implemented the same solution in new tech, with twists to leverage the strengths of the underlying tech (long-lived objects in Java to avoid GC, Cython, etc.), and my best implementation ended up being nginx/LuaJIT. I was able to get about 15k qps/core with this configuration, and it was rock stable, running for weeks without need of a reboot. The best part is I've been able to publish the system internally with all the system settings (lots of network tweaks) and a script so others can deploy it and do their own testing. Previously everything was C with libevent, and it's just painful trying to get a large pool of people up to speed on using that for their projects. Most recently I've rewritten this system in Go and am working through some crazy performance issues there. I can't seem to get Go's scheduler to react as quickly as Nginx, and oftentimes it seems to latch on to just four CPUs even though GOMAXPROCS is set to 8 or more.

I could go on for pages about all the other things I've done, but the underlying thing about them all is problem solving at a level of detail where most people give up. I often say I'm not very smart; I'm just really persistent. I'm willing to change just one thing, retest, tweak another, retest, and onward until the problem starts to present itself. I often find people give up too soon; they think something is impossible, or they're scared of how much time it will take to find the solution. For me, if I see forward progress and I have the intuition that what I'm doing will work, I keep pushing until I can prove or disprove my intuition. I think the sign of a good technologist is less about how super smart they are and more about how they approach solving real-world problems. I find it annoying when someone tries to get me to do some puzzle for an interview, or other thought experiments that have little basis in reality. That said, if someone asks me to solve a real-world problem in an interview, I'll jump up to the whiteboard and tear it up with great passion.

Don't belittle yourself because you're doing CRUD to pay the bills. Instead, challenge yourself to do more when someone isn't paying the bill. All my life I've worked and played with technology, and most people can't tell when I'm working or playing. I am always pushing to learn new things. As the example above shows, the system I built is working fine, so why do Go? Why not? In 30 more years all the languages will change and all the tech will be different, but the problems will be related to today's problems, and the more you learn to stretch your mind and solve problems with many different approaches, the more valuable you will be in the complex future that is coming at us every single day.

[+] raheemm|12 years ago|reply
I loved reading these stories, and your writing style is excellent! Also, your approach to problem solving is a great reminder of how persistence is the underlying key. Please do write more if you can, it was such a pleasure to read. I'd pay $10 for a book full of your stories - exactly how you wrote it above.
[+] jamescun|12 years ago|reply
These are the types of experiences I like to read about, you should blog about them in depth some time.
[+] fatjokes|12 years ago|reply
+1 for your awesome stories. another +1 for your defense of CRUD.

too many people look down on those doing the "grunt" work, despite the fact that it is usually necessary.

[+] wink|12 years ago|reply
Your mention of nginx/luajit piqued my interest as we've developed a system based on that, coincidentally with those same 7ms/10ms constraints.

I was just checking your profile for an email address when I saw your employer on your LinkedIn profile and suddenly it all makes sense ;)

[+] dfc|12 years ago|reply
Wow 50ns gps response in 1988!?! What was the bus used for communication? I am surprised the jitter alone did not kill the 50ns requirement.
[+] NodeMuppet|12 years ago|reply
I was going to post some of my stories but they simply pale in comparison to yours.

I agree about the writing style also.. have you heard of Leanpub? Pretty sure you could make a small book that would be tons of fun to read.

[+] ratsbane|12 years ago|reply
Yes! Excellent, not the least for noting that CRUD does make the world go around. It's been done a million times but that doesn't mean there's not still room for significant innovation.
[+] dfc|12 years ago|reply
You meant SCO Xenix right?
[+] gre|12 years ago|reply
50ns in 1988?
[+] davidjohnstone|12 years ago|reply
A few years ago I had to create a nearest neighbour lookup algorithm that had to perform 2D searches in a microsecond with 16 million points (k-d trees and the like didn't cut it).

I spent a lot of time reading books and papers on computational geometry. I had an idea that involved a few minutes of precomputing things, and eventually came across a useful algorithm in a paper that let me implement this as I envisaged. In the end, everything worked perfectly. It was very satisfying.
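The paper and algorithm aren't named here, but the general trade (spend minutes precomputing so each query inspects almost nothing) can be illustrated with a uniform-grid lookup, which is one common way to beat k-d trees on dense 2D data:

```python
import math

def build_grid(points, cell):
    """Bucket 2D points into a uniform grid; a generic
    precompute-for-fast-lookup sketch, not the commenter's
    actual (unnamed) algorithm."""
    grid = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        grid.setdefault(key, []).append(p)
    return grid

def nearest(grid, cell, q):
    """Search outward ring by ring; stop once no unsearched cell
    could hold a closer point. Assumes at least one point was indexed."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    best, best_d = None, float("inf")
    r = 0
    while best is None or best_d > (r - 1) * cell:
        for gx in range(cx - r, cx + r + 1):
            for gy in range(cy - r, cy + r + 1):
                if max(abs(gx - cx), abs(gy - cy)) != r:
                    continue  # only visit the newest ring of cells
                for p in grid.get((gx, gy), []):
                    d = math.hypot(p[0] - q[0], p[1] - q[1])
                    if d < best_d:
                        best, best_d = p, d
        r += 1
    return best
```

With points spread evenly, a query usually touches one or a handful of cells, which is how sub-microsecond lookups over millions of points become plausible.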

[+] sillysaurus3|12 years ago|reply
Do you happen to remember the paper? If so, would you link to the PDF? It sounds really cool!
[+] notimetorelax|12 years ago|reply
That sounds very interesting. If you don't mind could share the algorithm name?
[+] bsamuels|12 years ago|reply
It was a personal project I did a few years ago. I was writing an editor for ship hulls that would allow a non-technical user to model a hull out of 3 sets of Bézier curves: one for the side profile, one for top-down, and one for front-back. Pics at bottom.

The hard part was taking those 3 sets of Bézier curves and turning them into a 3D mesh. There's no pleasant mathematical way to do this directly, and there's no way to convert the 3 sets of curves into a 2D Bézier surface.

The eventual solution involved several steps - first the top-down curve was rasterized into points at intervals of N on the X axis (from front of hull towards back). The maximum distance between any two sets of symmetric points was used to "scale" the front-back view so that the endpoints of each copy of the front-back view would match each set of symmetric points on the top-down view. At this point, each set of symmetric top-down points has a matching front-back curve that connects the two points. Now each front-back curve is rasterized at intervals of I on the Y axis (from port to starboard).

At this point I have all the points I need and could actually rasterize them into a mesh, but with one problem - the side-view curve still isn't accounted for. If I were to rasterize it at this point, the ship would probably look like a bullet cut in half.

So to take the side profile curves into consideration, the side view was rasterized like the top-down curves were, into points at intervals of N on the X axis. These points are converted into proportions (from 0 to 1) of how far they are from the top deck relative to the deepest point on the side-profile curve. Finally, each proportion was multiplied with the Z components of each point on the point-rasterized front-back curves. In this way, the side-profile just acts like a "scale" to how deep the front-back curves are allowed to go.
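The curves themselves are just cubic Béziers, and the "rasterize into points" step boils down to evaluating one at many parameter values; a sketch via de Casteljau's algorithm (stepping in t for brevity, whereas the editor described stepping at fixed X intervals, which needs a root-find per step):

```python
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier at parameter t via de Casteljau's algorithm."""
    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def sample_curve(p0, p1, p2, p3, n):
    # n+1 points along the curve, the raw material for building
    # the cross-section rings described above
    return [bezier(p0, p1, p2, p3, i / n) for i in range(n + 1)]
```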

I was pretty happy with the results - however the mesh had densities in bad places that I later smoothed out using bicubic interpolation. I don't have any pics of final product, but here's some of it before the interpolation phase:

http://i.imgur.com/PqX1aWL.png

http://i.imgur.com/se9YdoO.png

http://i.imgur.com/oK4taXA.png

[+] highCs|12 years ago|reply
Video game related. One Monday my boss came and asked me to rewrite the pathfinding library Recast [1] in ActionScript (we could not use Alchemy to compile it from C++ to ActionScript, because a bug made the AIR compilation to iOS never finish with any Alchemy code in it).

His approach was for me to translate the code by hand without understanding much of it (a monkey, me?). He told me I had approximately one week. I looked at the code and saw dozens of files of optimized C++; most of the features weren't required for us, however. I quickly understood that I would have to figure out the core algorithms and implement them myself if I wanted to finish on time. The problem was: when you read the code of a complex algorithm in C++, some parts can be so complex that you can spend a lot of time just barely understanding how they work. And I had 5 days, 8 hours a day, and the 3D game was waiting.

So I took 20 minutes and managed to figure out how to make a navmesh-based pathfinding library, on a piece of paper, without even reading Recast's code. The hard parts were the portal-based algorithm, and getting the 3D floating-point geometry precise enough to avoid nasty corner cases that cause bugs in the position of the game's main avatar. At the end of the week our 3D game was running with the new library, and on average I did not put in more than an hour of overtime a day. I felt classy =) It worked for the two years I was there, and games have been shipped with it.

[1] https://github.com/memononen/recastnavigation
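For illustration: the search half of a navmesh pathfinder is just A* over the polygon adjacency graph; a generic sketch (not the commenter's code, and omitting the portal/funnel step that smooths the path along polygon edges):

```python
import heapq

def astar(adj, cost, h, start, goal):
    """A* over a navmesh's polygon graph: adj maps a node to its
    neighbours, cost(a, b) is the edge length, h(n) is a heuristic
    distance to the goal. Returns the node path, or None if the
    goal is unreachable."""
    frontier = [(h(start), 0.0, start, None)]
    came = {}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came:
            continue  # already expanded via a cheaper route
        came[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        for nb in adj[node]:
            if nb not in came:
                ng = g + cost(node, nb)
                heapq.heappush(frontier, (ng + h(nb), ng, nb, node))
    return None
```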

[+] faster|12 years ago|reply
Not the most difficult, but one of the most satisfying...

Early 90's (when C++ was new and broken in a different way for each compiler), I was on a QA team for a project where developers were learning C++ as they built the next version of the product. There was a buggy math function in the standard lib, and the compiler vendor didn't see it as a high priority. Devs didn't know how to find expressions that were at risk so they could cast them to a type for which the libraries worked reliably.

I discovered a useful combination of compiler flags and wrote an awk script to take the compiler output and make a list of source files and lines that produced calls to the broken lib function. The lead developer insisted that I was wasting their time with a bogus list, until I explained how it worked.

More challenging: I worked on an industrial machine that had to mix measured amounts of gases over time. The developer who wrote the task for the mass flow controller (the device that controls gas flow, basically) just opened the MFC at the start time, then slammed it shut after the correct amount of gas had passed. I coded up a smooth open/close that kept the area under the curve correct. In 6809 assembly language. That was in the early 80s, when a 2MHz 8-bit CPU was some serious horsepower.
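The "keep the area under the curve correct" idea can be sketched with a trapezoidal setpoint profile: ramp up, hold, ramp down, choosing the hold time so the integral of flow over time equals the desired dose (illustrative only; the original was 6809 assembly driving real hardware):

```python
def flow_profile(total_gas, peak_flow, ramp):
    """Return (f, end): a flow setpoint function f(t) shaped as a
    trapezoid, and the end time. The hold duration is chosen so the
    area under f equals total_gas: each ramp contributes
    peak_flow * ramp / 2, the hold contributes peak_flow * hold."""
    hold = total_gas / peak_flow - ramp
    assert hold >= 0, "peak flow too low or ramp too long for this dose"
    end = 2 * ramp + hold
    def f(t):
        if t < 0 or t > end:
            return 0.0
        if t < ramp:                          # smooth opening ramp
            return peak_flow * t / ramp
        if t <= ramp + hold:                  # steady section
            return peak_flow
        return peak_flow * (end - t) / ramp   # smooth closing ramp
    return f, end
```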

[+] grinich|12 years ago|reply
One time I reverse-engineered the flight controls for an octocopter (like a quadcopter, but with 8 rotors) so I could access the debug port and stream telemetry to a ground station while issuing GPS waypoints.

Probably the only thing worse than reading C++ is when it's poorly documented and in German. :P

[+] danieltillett|12 years ago|reply
I feel for you. I have to work with a proprietary library that is only documented in German - google translate only gets you so far.
[+] drblast|12 years ago|reply
It's much worse after it's compiled. :-)