Those are all beginner embedded problems. Big problems include:
* You're developing a controller that controls something. Now you have to have the hardware it controls. This can be a sizable piece of industrial equipment. You may need a simulator for it.
* I've seen auto engine controls developed. Phase 1 was run connected to an engine simulator. Phase 2 was run connected to a test board with the auto components, including one spark plug, and an analog computer to simulate the vehicle power train. Phase 3 was run on a test stand with a real engine. Phase 4 was run in a car with a debug system plugged in. Yes, you need all that stuff.
* Safety issues. You may need a separate safety system to monitor the primary system and shut it down. Traffic lights have a simple hard-wired device which has inputs from all the green lamps, and a PC board wired with diodes to indicate which can't be on at the same time. If the checker detects a conflict, a relay trips, all the lights go to flashing red, and the CPU can't do anything about it. Some systems allow the CPU to try again after 30 seconds of flashing red, but usually it requires someone to come out and replace the electronics.
* JTAG is your friend, but JTAG is so low-level that it's a huge pain.
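The hard-wired checker's logic is simple enough to model in a few lines. Here is a minimal sketch in C of the same conflict test, for a hypothetical four-approach intersection; the real device does this with a diode matrix and a relay precisely so that no CPU is involved:

```c
#include <stdint.h>
#include <stdbool.h>

/* One bit per green lamp. North/south greens may be on together, and
   east/west likewise, but any north-south + east-west pair is a
   conflict. The diode matrix in the hard-wired checker encodes
   exactly this table. */
#define GREEN_N (1u << 0)
#define GREEN_S (1u << 1)
#define GREEN_E (1u << 2)
#define GREEN_W (1u << 3)

static const uint8_t conflict_pairs[][2] = {
    { GREEN_N, GREEN_E }, { GREEN_N, GREEN_W },
    { GREEN_S, GREEN_E }, { GREEN_S, GREEN_W },
};

/* Returns true if this lamp state must trip the relay. */
bool conflict(uint8_t greens)
{
    for (unsigned i = 0; i < sizeof conflict_pairs / sizeof conflict_pairs[0]; i++)
        if ((greens & conflict_pairs[i][0]) && (greens & conflict_pairs[i][1]))
            return true;
    return false;
}
```

The software version is only a model for reasoning about the table; the whole point of the hardware checker is that it keeps working when the firmware doesn't.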
Phase 1 was running with blinky LEDs on a breadboard. Phase 2 was a prototype version running under the seat of my motorcycle, hooked up to the engine sensors and datalogging the output of my hardware/software compared to the OEM ignition/FI while I rode it round the block and local neighbourhood a few times. Phase 3 was hooking my version up and riding it around... Safety issues, huh? ;-)
(Arduino-driven ignition and fuel injection for a motorcycle a bunch of years back. Gave up on it 'cause though it worked well, I never got it reliable enough. Fun way to waste a year or so's worth of weekends/evenings...)
* Not giving up when your errors do lead to physical damage. Repair the heat exchange radiators that froze because you didn't test what happens when outside temperatures drop. Change the timing belts that snapped because of an off-by-one mistake when estimating the position of a part on the conveyor. Straighten the door bent by air pressure when your pressure/temperature regulator oscillates because it was tuned without taking sudden disturbances (opening said door) into account.
* Staying motivated during the long product development cycles. Just when you think you're finished, manufacturing starts - and there is still a lot of grunt work to do.
* Designing a logging system that stores enough information to pinpoint issues yet stays lightweight enough not to perturb the system.
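A common shape for that kind of logger is a fixed-size binary ring buffer: cheap enough to leave enabled in production, with a host-side tool decoding event ids back to names. A minimal sketch (the entry layout and sizes are illustrative):

```c
#include <stdint.h>

/* Fixed-size binary event log: each entry is 8 bytes, and the oldest
   entry is overwritten when the buffer is full. */
#define LOG_ENTRIES 64   /* power of two so the wrap is a cheap mask */

typedef struct { uint32_t timestamp; uint16_t event; uint16_t arg; } log_entry_t;

static log_entry_t log_buf[LOG_ENTRIES];
static uint32_t log_head;        /* total entries ever written */

void log_event(uint32_t now, uint16_t event, uint16_t arg)
{
    log_entry_t *e = &log_buf[log_head & (LOG_ENTRIES - 1)];
    e->timestamp = now;
    e->event = event;
    e->arg = arg;
    log_head++;
}

/* Number of valid entries currently held. */
uint32_t log_count(void)
{
    return log_head < LOG_ENTRIES ? log_head : LOG_ENTRIES;
}
```

Keeping entries binary and fixed-size is what makes it lightweight; the expensive formatting happens offline, after the fact.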
> * JTAG is your friend, but JTAG is so low-level that it's a huge pain.
Yes, JTAG is definitely your frenemy. Hopefully you have a fast version, or even better a full trace probe, but those are usually vendor-specific, so more cost.
* Just run unattended, bug free, 24/7, for a year. Very tricky.
For me it was working with vendors. Coming from web dev, that was the biggest change: you can't just open an account with Heroku/AWS/whatever and go about your business like you would if you were hacking for yourself.
You've first got to figure out what kind of SoC you want, and no, it's likely not Arduino/RPi or whatever is popular for hacking projects. And no, it's not the EE's responsibility to choose these things; they just route them. You have to work with vendors to figure out what performance characteristics, lifespan, etc. you'll need for your system, along with preferences such as OS support (first-class, second-class, community/experimental, etc.), because no, you don't typically just download and install the latest Ubuntu and call it a day; drivers etc. are often custom and proprietary, so unless you want to write your own driver stack... It doesn't all "just work out of the box" like a SaaS service. You've got to put a lot of thought into "other stuff" and make some big decisions before you even write a line of code.
I'm one of those EEs, and I think you're short-selling your colleagues. Or maybe your organization hasn't found great hardware people. Where I've worked, the hardware folks are (on average) just as responsible for architecture as the embedded software folks.
Anything less than equal-and-honest cooperation tends to lead to lopsided or quirky designs.
The tricky parts of embedded development? Take all the tricky parts of making all the functional elements of a computer, kill the supportive community, demolish a couple bells and whistles, burn all pertinent documentation, and, most importantly, give it to a single developer to handle all by their lonesome.
I'm a huge fan of simulations; I don't think you can develop a 'good' embedded system without them. My way of implementing something embedded is:
* Develop a capture program for the input feed/sensors, and capture as much as you can.
* Develop a software model to recreate the input for a 'target' embedded system
* Write the embedded system against the captured input. That will get you 95% there.
* Run it 'live' and check for anything wrong (there will be something). Usual debugging & tweaks.
* Do a feedback loop if problem input comes in, and keep /that/ carefully for your unit test sequence.
* Once software is done, every time you make a change, run your simulator with all the test input you have and check your output for divergence.
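The replay/divergence step above can be sketched as a tiny harness: run the same control function over a captured input trace and compare against the outputs the last known-good build produced. The controller here is a stand-in threshold rule, not any particular project's logic:

```c
#include <stddef.h>

/* Hypothetical controller under test: a trivial threshold rule
   standing in for the real logic. The point is the harness shape:
   the same function runs on the target and in the replay tool. */
int control_step(int sensor)
{
    return sensor > 100 ? 1 : 0;
}

/* Replay a captured input trace and compare against golden outputs.
   Returns the index of the first divergence, or -1 if none, so a
   regression can be zoomed in on immediately. */
long first_divergence(const int *inputs, const int *golden, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (control_step(inputs[i]) != golden[i])
            return (long)i;
    return -1;
}
```

Run over every captured trace after each change, this gives you a cheap regression suite without ever touching the target.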
I very, VERY rarely need to JTAG into a board; I'd rather spend the time on the simulation model and get it as accurate as I can than spend time 'debugging' on the target.
That's why I wrote simavr, for example [0], but I also use qemu a lot for bigger systems. Unfortunately it's next to impossible to get anything upstream in qemu, so most of the work there just gets dropped eventually [1].
[0]: https://github.com/buserror/simavr
[1]: https://github.com/buserror/qemu-buserror
Do you have any suggestions on deciding where to draw the line for simulation? It seems that you are suggesting instruction-level simulation of the same binary which is going to be deployed, correct?
What is the performance you're usually seeing for simulations like that?
What is your opinion on model-based development? Model creation in MATLAB/Simulink and code generation with the help of a suitable tool (e.g. TargetLink). I recently came to know that this is the preferred method in the automotive industry, in Germany at least.
Bad documentation. The worst documentation is the documentation that looks professional and complete . . . and lies through its teeth. And the support staff at $VENDOR is just reading the same documentation that you are trying to decode, and it takes weeks to get a round-trip through their ticket system.
Once I submitted an issue with a workaround that had some rather nasty side-effects, and asked for a better solution. Weeks went by. They were promising a code sample. More weeks, more "Well, we're still working on this." The solution I got back, about a month later, was exactly the workaround I'd included in the support request, with some of the serial numbers filed off.
One issue took months to resolve. We finally told $VENDOR "Look, we've got to ship. But we can't if we can't get this (critical component) working, and we'll be forced to ditch you." That got all kinds of political bullshit out of the way, and in an hour we were talking to the guy who had designed the circuit we were having trouble with.
"Oh," he said, "You have to do (three simple steps)". And I tried it while one of my cow-orkers kept him on the phone, and it worked.
For me, it's that the hardware doesn't always work, or at least not as you expect. If you write workstation or server code, 99.99999% of your problems will be software bugs. But on an embedded system, especially something custom, you can have weird, intermittent hardware behavior that takes a lot of work to pin down. And sometimes you can't fix it, so you work around it. It's rewarding to get this stuff to work while at the same time being extremely frustrating.
I've worked embedded systems for years but every year I tell my colleagues that I'm switching to IT so if my hardware doesn't work I can just throw it away and buy a new workstation.
My answer (not mentioned in the article): cycle time. So many code-bases with no tests and no useful emulator.
So each "cycle" can be as long as: flash the thing, wait for it to warm up, get the device into the right state, test the thing you wanted to test (which might involve another device, etc.).
I recently put together a hack to alert me when the washing machine was done, via an ESP8266 device.
By far the most frustrating part was waiting to test it. I didn't want to run the washing machine (empty) just to test it, so I had to schedule debugging and testing times when we were doing laundry for real.
The moment it worked for the first time was definitely a happy one though!
Edit - https://steve.fi/Hardware/washing-machine-alarm/
This is especially fun when your machine is a medical instrument doing biochemistry that can't be sped up. You need to test a particular behavior under a certain error condition that is only physically possible 35 minutes into a run? Gonna be a long day!
As a former embedded developer, there is another aspect of difficulty that hasn't been touched on. Dealing constantly with physical hardware introduces a host of challenges.
* If you or your coworkers are not organized, you can waste a lot of time looking for proper sized wrenches, proprietary screw heads, speciality crimpers, oscilloscope probes, etc. etc. The more hardware the company makes, the worse this problem can be.
* Most internal connectors are not meant to be constantly plugged and unplugged. In a testing scenario where you have to change connectors or test harnesses frequently, it is common for the connectors to break or wires to become loose. Then you have to waste time figuring out why your hardware stopped working.
Another big challenge is that most of the embedded software I have seen is written by people who aren't exactly top notch programmers.
I spent about 15 years mostly writing server code for UNIX machines in C (before ditching it in favor of Java and, 13 years later, Go). Since embedded programming is a bit of a specialty field where things like predictable performance and robustness are important, I expected the embedded world to be pretty professional. Because any time people start using words like "guarantees" or "real-time", you tend to assume that they do some pretty amazing stuff.
I can't really say that's what I found. A lot of code is brutally ugly, many lack understanding of even the most basic defensive programming techniques and there's a lot of superstition around abstractions by people who don't really seem to understand what comes out of a compiler (Having the compiler "compile away" abstraction layers was something we often obsessed over on projects I worked on in the 90s and early 00s).
Code is often badly organized, badly formatted, badly documented and amateurishly maintained (e.g. bullshit commit logs -- if the source is even kept in a version control system). As a result, you constantly fight the urge to rewrite stuff because the code is just so damn hairy. Of course, any talk of rewriting code makes people nervous ("we invested a lot in order for this to work and now it does! Don't touch it". Yeah, I'm not surprised it took a lot of work).
And all of this was code by serious companies whose brands you have heard of.
I'm hoping the IoT craze is going to accomplish at least one thing: educate embedded developers. Sure, a lot of us "regular" software people are going to run around like a bunch of flatfooted morons because it is unfamiliar territory, but the embedded world is in _dire_ need of some software culture and discipline.
At my last job, the EE manager wanted to improve the quality of firmware, so one of the things he did was to ask for a software guy to show them how we did code reviews.
As an EE working in software, I volunteered. One of the comments I made was that the many magic numbers in the code should be replaced by definitions that explained what they meant.
I got back code for re-review that contained the line:
#define ZERO 0
To this day I'm not sure if the author didn't understand or was just irritated at having me review his code. Probably both, come to think of it.
Coming from the microcontroller world, though, there is practically nothing in the STL that gives me the guarantees I need to use it in ISRs, and most devs have no chance of reviewing the STL piece to see if it's going to deadlock in an ISR or something.
Another problem is that although the abstraction compiles away in release mode, the debug build usually still has to fit on the chip and/or meet realtime deadlines. Tooling is still super bad at optimizing part of a build and not other parts, even though we have an optimize pragma (I brought this up in the SG14 working group but concluded that it's more of a tooling than a language issue).
In short, I think at least the drivers should be written in C++ with proper abstraction, but for the most part those abstractions have not been written yet, and we can't just borrow from other domains because we have to be deterministic in timing and RAM use and also usually use other threading models (run-to-completion, event-based) and memory management models (pools, state-local storage), at least in drivers.
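For ISR-to-main-loop communication, the guarantee usually has to be established by construction rather than by auditing a library. The classic example is a single-producer/single-consumer ring buffer, sketched minimally here (real code may additionally need memory barriers on some architectures):

```c
#include <stdint.h>
#include <stdbool.h>

/* SPSC byte queue: the ISR only ever writes `head`, the main loop
   only ever writes `tail`, both are volatile, and the size is a
   power of two so the wrap is a mask. No locks, so neither side can
   ever block the other -- a property you can see by inspection,
   which is exactly what you can't do with a general-purpose
   container. */
#define QSIZE 128

static volatile uint8_t q_buf[QSIZE];
static volatile uint32_t q_head, q_tail;

/* Called from the ISR; drops the byte if the queue is full. */
bool q_put(uint8_t b)
{
    if (q_head - q_tail == QSIZE)
        return false;
    q_buf[q_head & (QSIZE - 1)] = b;
    q_head++;
    return true;
}

/* Called from the main loop. */
bool q_get(uint8_t *out)
{
    if (q_head == q_tail)
        return false;
    *out = q_buf[q_tail & (QSIZE - 1)];
    q_tail++;
    return true;
}
```

Bounded memory, bounded worst-case time, and an explicit overflow policy: the three things an ISR-side container has to promise.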
> lot of superstition around abstractions by people who don't really seem to understand what comes out of a compiler (Having the compiler "compile away" abstraction layers was something we often obsessed over on projects I worked on in the 90s and early 00s).
I have seen this a lot, especially things like worrying about bounds checking or virtual method dispatch without measuring whether it really matters.
Even some modern microcontrollers are quite powerful compared to those mainframes running Lisp, yet here we are, still preaching assembly and C as if it were the '80s or early '90s.
Your hope is becoming a reality, at least in my experience working at a semiconductor company (public, mid-sized).
The IoT craze has forced a lot of software on hardware companies: everything from ZigBee, Thread, and Bluetooth stacks to RTOS's to manage all of those stacks, to IDEs that are actually usable.
The companies embracing software as a vital part of their products are doing well, and will continue to do well at the expense of the companies who treat software as an afterthought.
I think a lot of that is changing due to the rise of Linux in embedded systems. Many more traditional software engineering concepts are becoming mainstream in embedded.
And for embedded in high value (hundreds of millions of dollars) or human life, I think there has always been lots of proper discipline and engineering process.
* You're going back in time about 10-20 years in terms of tool chains, language support, memory/CPU power, debuggers, and for the most part programming paradigms.
* Documentation? HAHAHAHAHAHAHAHA LOL
* Debugging can be extremely challenging in real-time systems. Things like JTAG printf will slow things down enough to wreck your timings.
* You have to at least know the basics about the hardware, especially if you're doing control systems and meddling with GPIOs and such.
Ugh. Dealing with some of this right now. 3 full minutes to compile and flash 430 kB to a board. And then the debug tools won't use breakpoints properly. 400 kB of that is system libraries I can't avoid. I'm thinking of writing a Lisp compiler with some FFI and pushing code to the remote memory via TCP. Even having to write my own debugging tools, I feel like I might save time.
A tiny favorite I remember from when I started working for real as an embedded developer ~5 years ago (before that I was in AAA game development, quite the switch in so many ways!):
I was taking over development of a new display driver for a small hand-held instrument. The display and drivers were both new to the organization, so there was no experience in-house. And we had these strange "color-flowing" bugs, that nobody could understand. Fields of greens and blues that bled across the screen in weird ways. Of course everybody thought it was a driver (=software) bug.
Weeks passed, my hair got thinner, then finally I looked once more on the schematics, traced a signal back to the CPU, and said to the hardware designer "hey, isn't this a 3.3-volt signal?" Turned out we were backfeeding the display driver from its reset line, causing it to power up due to the voltage overpowering the input circuitry and flowing into the power rail, enough to power it up but not to make it behave correctly. Yikes that was frustrating (but fun to catch, of course).
I've come to realize software almost always gets the initial blame. Software is almost always the one painting the error screen. So even if it says "Voltage out of range", someone is going to accuse software of not working. So they pull you into the lab, watch you open the box, probe it with a multimeter, show that the voltage is out of range, and then get the hardware engineer.
I once spent over a month (nights & weekends) tracking down a memory corruption bug. Everyone accused software of course. It turned out to be poor signal integrity on the memory bus (hardware problem). It was horrible.
Real hardware regularly fails and you need to recover or people will think your device is flakey. Consider running a program for 20 years on a single chip without someone rebooting ever. Now, consider 100,000 people doing this and everyone thinking something is broken if it fails.
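A standard ingredient for that kind of longevity is a hardware watchdog that the main loop refreshes only when every subsystem has recently checked in, so any wedged task forces a clean reset instead of a silent hang. A sketch of the pattern; the reload register and key value below are stand-ins, not any real part's interface:

```c
#include <stdint.h>
#include <stdbool.h>

#define N_TASKS 3

/* Stand-in for a vendor-specific memory-mapped watchdog register;
   on real silicon this would be a volatile pointer to a fixed
   address, and failing to write it in time resets the chip. */
static uint32_t wdt_reload_reg;
static uint32_t task_alive_bits;

/* Each subsystem calls this from its own loop to prove it's alive. */
void task_checkin(unsigned task)
{
    task_alive_bits |= 1u << task;
}

/* Called periodically from the main loop: kick the dog only if every
   task has reported since the last kick. */
bool wdt_service(void)
{
    if (task_alive_bits != (1u << N_TASKS) - 1)
        return false;              /* a stuck task lets the WDT reset us */
    task_alive_bits = 0;
    wdt_reload_reg = 0xB9;         /* hypothetical reload key */
    return true;
}
```

The key point is that the kick is gated on all tasks collectively, so a single stuck thread can't be papered over by a healthy main loop.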
My experience: working with the hardware engineers and convincing them of things like yes, you need to latch those signals because, no, I can't poll the signals often enough to avoid missing events.
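The reason latching matters is that a pulse shorter than the polling interval simply vanishes. A sketch of the software half of the fix, with the hardware latch or edge-triggered interrupt simulated by a plain function call:

```c
#include <stdbool.h>

/* A latch stretches a momentary event until software acknowledges
   it. Here the "edge" handler stands in for an ISR or a hardware
   latch output; the poll can run as slowly as it likes and the
   event still can't be missed (though back-to-back pulses between
   polls collapse into one -- use a counter if that matters). */
static volatile bool event_latched;

void on_rising_edge(void)          /* hardware latch / edge ISR */
{
    event_latched = true;
}

bool poll_and_clear(void)          /* slow main-loop poll */
{
    if (!event_latched)
        return false;
    event_latched = false;
    return true;
}
```

The same argument made in hardware terms is what convinces the EE: without the latch, correctness depends on the poll rate beating the minimum pulse width, forever.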
One of the tricky parts is documentation. The PDF for an STM32 (something way smaller than a BeagleBone or a Raspberry Pi) is 1700 pages long; it includes a 69-page description of a timer, but doesn't include any ARM core documentation.
RTFM is a nightmare: by the time you look at another page for a related device, you have forgotten what you read previously.
The STM32s have an odd documentation structure where you have a datasheet (showing pinouts and capabilities of a specific part or subfamily), a reference manual (showing the detailed structure and function of all peripherals, common to the part family) and a programming manual (documenting the core, common to all parts using that core). So in this case, you're looking at the wrong document.
I wasn't really an embedded developer ever, but I worked at an industrial IoT company for a while and used to just get handed devices which I was supposed to connect to the internet and figure out how to make send useful data to us.
Besides the undocumented proprietary protocols (which aren't embedded-specific), as a backend engineer I used to struggle heavily with development environment setup. As a JVM and Python guy I'm not used to fucking with weird compiler toolchains at all.
I found that doing endian conversion for a system using both PCI and VME buses was quite a challenge.
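Explicit byte-swap helpers are the usual answer when little-endian PCI data and big-endian VME data meet in one system; written with shifts and masks, they behave the same regardless of the host CPU's own byte order. A minimal sketch:

```c
#include <stdint.h>

/* Swap the byte order of a 32-bit word using only shifts and masks,
   so the result does not depend on how the host stores integers. */
uint32_t swap32(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}

/* Same idea for 16-bit values. */
uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}
```

The discipline that keeps this manageable is deciding, per bus, exactly where in the driver the swap happens, so the rest of the code only ever sees host order.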
I also found it a challenge writing Linux device drivers for chips with an external 32 bit bus and an internal 16 bit one. The 32 bit data had to be split into two, set the data pins for the first half, set a bit saying the data was available, wait for it to be read in and repeat with the second half. Do the reverse for reading.
Also challenging was setting the bits in a register for a chip with multiple commands per register. My program checked the input data for an error, e.g. trying to set three bits to 128, read the register, masked out the bits to be changed, changed them, wrote the value, read the register, masked out the bits and checked them to make sure they had been changed.
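That read/mask/modify/write/verify sequence looks roughly like the following; the register here is a stand-in variable (on hardware it would be a volatile pointer to the device address), and the field position and width are illustrative:

```c
#include <stdint.h>

static uint8_t fake_reg;            /* stand-in for the device register */

#define FIELD_SHIFT 2
#define FIELD_MASK  (0x7u << FIELD_SHIFT)   /* a 3-bit field */

/* Returns 0 on success, -1 if the value doesn't fit in the field
   (e.g. trying to set three bits to 128) or if the read-back check
   fails (a read-only or stuck bit). */
int write_field(uint8_t value)
{
    if (value > 7)
        return -1;                                   /* range check */
    uint8_t reg = fake_reg;                          /* read */
    reg = (uint8_t)((reg & ~FIELD_MASK)
                    | ((uint32_t)value << FIELD_SHIFT)); /* modify */
    fake_reg = reg;                                  /* write */
    if ((fake_reg & FIELD_MASK) != (uint8_t)(value << FIELD_SHIFT))
        return -1;                                   /* verify */
    return 0;
}
```

The verify step is the part that pays for itself: it catches both software mistakes and the hardware surprises the rest of this comment describes.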
Then there was the time the board manufacturer changed the memory map of the board without telling us. Boards made before April would boot, newer ones didn't.
I also found that some device drivers from SOC chip manufacturers had to be debugged before I could use them.
Isn't the "Mbed compiler" just GCC running on their servers?
I've successfully made code that worked just fine with the online MBed IDE and gcc-none-arm-eabi, I don't recall anything having to be done differently, just more to setup locally (linker scripts and so on).
I think they've added a fair bit of stuff to it and are unwilling to release their source; because they don't actually ship the modified GCC, they don't have to release their changes.
IIRC, most of the stuff they've added are surrounding libraries and such.
It's basically a canonical example of why the Affero GPL exists.
I used the mbed for my embedded class. You can talk to the mbed using the ARM Keil IDE and it's so much better. You can actually debug code on the device with breakpoints.
Is embedded work, in general, well paid? And is it relatively easy to get new work once you've broken into it?
It seems to be not very visible. I'm under the impression that some "not very visible" work is well paid and not going away any time soon. I'm thinking of some COBOL or Pascal developer called out of retirement at great expense.
Or is this like being at the edge of new/exotic web 2.0 where you are continually looking for the next job, re-learning tech over and over.
I know this is a big topic and perhaps there is no correct answer overall.
If anyone has any resources to job sites or further articles I'd love to read them.
Generally not, compared with the difficulty of it. (And the first dozen posts are all bang on the money, and you have to deal with all of those things AT ONCE.) You have to be a multidisciplinary wizard and you will never get paid what some VBScripter gets paid at $bank.
When you're developing a product, hardware iterations take most of the budget. As 'the software guy' your job is to be handed a piece of hardware and some vague requirements, and to make it work the way the end user (who you probably never get to meet) expects it to. The product will already be over budget and behind schedule, so you don't get funding and you're probably already late in delivering the product (no, your deadline does not move due to this).
It's hard to find embedded work because the people who 'need a software guy' for their hardware product don't know any software developers so you have to be lucky to meet potential employers.
That said, there is opportunity for some COBOL-style big bucks later on, when suddenly they need to make another production run of the product you worked on back in the day, and they want a few tweaks, and you're the only person on the planet who has the faintest clue how to build the software and program the hardware, so you get to pull your hair out all over again but this time for a more reasonable wage.
The trick is breadth of knowledge. If you can just program a microcontroller, you've got a lot of competition. If you can program a microcontroller AND an FPGA, your competition dropped by orders of magnitude (this is the biggest step and probably the easiest to tack on--Verilog isn't that hard but you WILL foul up until you get race conditions and concurrency properly beaten into your head--do everything synchronously and synchronizers are your friends). If you can program a microcontroller, program an FPGA, AND design the board--you are in rare company. If you are good at debugging these boards, people will worship you as a god. If you have domain knowledge on top of that, you should be starting a company.
> And is it relatively easy to get new work once you've broken into it?
Well, it's networking like anything else. If you've been doing it for 10 years, things seem to just drop into your lap. Otherwise, you have to beat the bushes.
However, if you do a good job, the good people around will notice quickly. And those good people are often at capacity so they will throw stuff over to you.
Obviously, being near a tech hub helps.
> I'm under the impression that some "not very visible" work is well paid and not going away any time soon.
The problem with "not very visible" is that it also means "executive level may not appreciate it". I've seen medical device companies trying very hard to get rid of the single person who actually understands their hardware.
> Or is this like being at the edge of new/exotic web 2.0 where you are continually looking for the next job, re-learning tech over and over.
No and yes. :)
ARM is dominating the embedded space currently. So, especially at the Cortex M end of the spectrum, the base programming environment looks pretty much the same. This hasn't changed for quite a lot of years.
In addition, 8-bit development is mostly going away. The delta between an 8-bit micro and a 32-bit micro is now so small that it makes no sense to use anything other than 32-bits unless you have a very specific use case.
For FPGA, the tools are similarly stable over long time frames.
The peripherals, on the other hand, completely differ from manufacturer to manufacturer. And have bugs from revision to revision. So, that is like learning stuff over and over :(
However, what the peripherals are supposed to DO is stable. Once you understand that, you will be very good at ferreting out the small differences and your life gets somewhat easier.
> If anyone has any resources to job sites or further articles I'd love to read them.
You can read a lot, but doing is better. Go get a Nordic BLE development kit for $39 and build something.
From what I could see pay was on par with most enterprise development. Well above basic web stuff though.
Like the other poster said, if all you do is code, then your position is pretty weak. I have an EE background and I'm pretty good with mechanical things, so I could debug the electronics and figure out issues with mechanisms, etc. (most of my programming career has involved writing code for things that move) My skillset pretty much goes from low-level assembly code up to writing simple web apps, so I was generally able to handle anything thrown at me. I'm also pretty good with creating documentation, understanding the business and built up my domain knowledge at any company I've worked for.
In this field, the more you know, the more valuable you become because it tends to require a wide range of expertise. IME, the people who have done well are always generalists.
I never want to deal with $VENDOR again.
[+] [-] dmh2000|9 years ago|reply
I've worked embedded systems for years but every year I tell my colleagues that I'm switching to IT so if my hardware doesn't work I can just throw it away and buy a new workstation.
xyzzy123 | 9 years ago
So each "cycle" can be as long as, flash the thing, wait for it to warm up, get the device into the right state, test the thing you wanted to test (which might involve another device etc).
stevekemp | 9 years ago
By far the most frustrating part was waiting to test it. I didn't want to run the washing machine (empty) just to test it, so I had to schedule debugging and testing times when we were doing laundry for real.
The moment it worked for the first time was definitely a happy one though!
Edit - https://steve.fi/Hardware/washing-machine-alarm/
HeyLaughingBoy | 9 years ago
iagooar | 9 years ago
alecdibble | 9 years ago
* If you or your coworkers are not organized, you can waste a lot of time looking for proper sized wrenches, proprietary screw heads, speciality crimpers, oscilloscope probes, etc. etc. The more hardware the company makes, the worse this problem can be.
* Most internal connectors are not meant to be constantly plugged and unplugged. In a testing scenario where you have to change connectors or test harnesses frequently, it is common for the connectors to break or wires to become loose. Then you have to waste time figuring out why your hardware stopped working.
bborud | 9 years ago
I spent about 15 years mostly writing server code for UNIX machines in C (before ditching it in favor of Java, and 13 years later: Go). Since embedded programming is a bit of a specialty field where things like predictable performance and robustness are important, I expected the embedded world to be pretty professional. Because any time people start using words like "guarantees" or "real-time", you tend to assume that they do some pretty amazing stuff.
I can't really say that's what I found. A lot of the code is brutally ugly, many developers lack an understanding of even the most basic defensive programming techniques, and there's a lot of superstition around abstractions from people who don't really seem to understand what comes out of a compiler (having the compiler "compile away" abstraction layers was something we often obsessed over on projects I worked on in the 90s and early 00s).
Code is often badly organized, badly formatted, badly documented and amateurishly maintained (e.g. bullshit commit logs -- if the source is even kept in a version control system). As a result, you constantly fight the urge to rewrite stuff because the code is just so damn hairy. Of course, any talk of rewriting code makes people nervous ("we invested a lot in order for this to work and now it does! Don't touch it!"). Yeah, I'm not surprised it took a lot of work.
And all of this was code by serious companies whose brands you have heard of.
I'm hoping the IoT craze is going to accomplish at least one thing: educate embedded developers. Sure, a lot of us "regular" software people are going to run around like a bunch of flatfooted morons because it is unfamiliar territory, but the embedded world is in _dire_ need of some software culture and discipline.
HeyLaughingBoy | 9 years ago
As an EE working in software, I volunteered. One of the comments I made was that the many magic numbers in the code should be replaced by definitions that explained what they meant.
I got back code for re-review that contained the line:
#define ZERO 0
To this day I'm not sure if the author didn't understand or was just irritated at having me review his code. Probably both, come to think of it.
odinthenerd | 9 years ago
Coming from the microcontroller world, though, there is practically nothing in the STL that gives me the guarantees I need to use it in ISRs, and most devs have no chance of reviewing the relevant STL piece to see if it's going to deadlock in an ISR or something.
Another problem is that although the abstraction compiles away in release mode, the debug build usually still has to fit on the chip and/or meet realtime deadlines. Tooling is still super bad at optimizing some parts of a build and not others, even though we have an optimize pragma (I brought this up in the SG14 working group but concluded that it's more of a tooling than a language issue).
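For what it's worth, GCC does offer a per-function escape hatch for the debug-build problem: the `optimize` function attribute forces a hot routine up to -O2 even in an -O0 build. A minimal sketch (GCC-specific; the GCC manual cautions the attribute is intended for debugging, not production):

```c
#include <assert.h>
#include <stdint.h>

/* Force -O2 on just this function so it can meet its deadline in an
 * otherwise unoptimized debug build. GCC-only; other compilers will
 * warn and ignore the attribute. */
__attribute__((optimize("O2")))
static uint32_t checksum(const uint8_t *p, uint32_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum += *p++;
    return sum;
}
```

This is exactly the coarse-grained tooling the comment complains about: it works per function, not per translation unit or per abstraction layer, so it doesn't really solve the "debug build must still fit and be fast" problem in general.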
In short, I think at least the drivers should be written in C++ with proper abstractions, but for the most part those abstractions have not been written yet, and we can't just borrow from other domains because we have to be deterministic in timing and RAM use, and we also usually use other threading models (RTC, event-based) and memory management models (pools, state local storage), at least in drivers.
- Odin Holmes
pjmlp | 9 years ago
I have seen this a lot, especially things like caring about bounds checking or virtual method dispatch without measuring whether it really matters.
Even some modern microcontrollers are quite powerful compared with the mainframes that ran Lisp, yet here we are, still preaching Assembly and C as if it were the 80's or early 90's.
bradstewart | 9 years ago
The IoT craze has forced a lot of software on hardware companies: everything from ZigBee, Thread, and Bluetooth stacks to RTOS's to manage all of those stacks, to IDEs that are actually usable.
The companies embracing software as a vital part of their products are doing well, and will continue to do well at the expense of the companies who treat software as an afterthought.
planteen | 9 years ago
And for embedded systems where high value (hundreds of millions of dollars) or human life is at stake, I think there has always been plenty of proper discipline and engineering process.
tluyben2 | 9 years ago
api | 9 years ago
* Documentation? HAHAHAHAHAHAHAHA LOL
* Debugging can be extremely challenging in real-time systems. Things like JTAG printf will slow things down enough to wreck your timings.
* You have to at least know the basics about the hardware, especially if you're doing control systems and meddling with GPIOs and such.
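On the real-time debugging point above: a common alternative to printf is an in-memory trace buffer that the debugger drains after the fact, since a couple of stores barely perturb timing. A minimal sketch, assuming a single writer (one core, or logging only from one ISR):

```c
#include <assert.h>
#include <stdint.h>

#define TRACE_SIZE 256u   /* power of two, so wraparound is a cheap mask */

volatile uint32_t trace_buf[TRACE_SIZE];
volatile uint32_t trace_head;

/* Two stores instead of a printf: cheap enough to call from an ISR
 * without wrecking real-time behaviour. Afterwards, drain trace_buf
 * over JTAG (a plain memory read) or from idle-loop code. */
static inline void trace(uint8_t event_id, uint32_t data)
{
    uint32_t i = trace_head++;
    trace_buf[i & (TRACE_SIZE - 1u)] =
        ((uint32_t)event_id << 24) | (data & 0x00FFFFFFu);
}
```

The packing (8-bit event id, 24-bit payload) is just one choice; on Cortex-M parts the ITM peripheral provides a hardware version of the same idea.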
caseymarquis | 9 years ago
unwind | 9 years ago
I was taking over development of a new display driver for a small hand-held instrument. The display and drivers were both new to the organization, so there was no experience in-house. And we had these strange "color-flowing" bugs, that nobody could understand. Fields of greens and blues that bled across the screen in weird ways. Of course everybody thought it was a driver (=software) bug.
Weeks passed, my hair got thinner, then finally I looked once more on the schematics, traced a signal back to the CPU, and said to the hardware designer "hey, isn't this a 3.3-volt signal?" Turned out we were backfeeding the display driver from its reset line, causing it to power up due to the voltage overpowering the input circuitry and flowing into the power rail, enough to power it up but not to make it behave correctly. Yikes that was frustrating (but fun to catch, of course).
planteen | 9 years ago
I've come to realize software almost always gets the initial blame. Software is almost always the one painting the error screen. So even if it says "Voltage out of range", someone is going to accuse software of not working. So they pull you into the lab, watch you open the box, probe it with a multimeter, show that the voltage is out of range, and then get the hardware engineer.
I once spent over a month (nights & weekends) tracking down a memory corruption bug. Everyone accused software of course. It turned out to be poor signal integrity on the memory bus (hardware problem). It was horrible.
Retric | 9 years ago
CodeWriter23 | 9 years ago
ianhowson | 9 years ago
groby_b | 9 years ago
There's more than one project that fixes chip issues via errata and curses at software engineers.
nraynaud | 9 years ago
RTFM is a nightmare: by the time you look at another page for a related device, you have forgotten what you read previously.
Kliment | 9 years ago
meddlepal | 9 years ago
Besides the undocumented proprietary protocols, which aren't embedded-specific, as a backend engineer I used to struggle heavily with development environment setup. As a JVM and Python guy I'm not used to fucking with weird compiler toolchains at all.
mrlyc | 9 years ago
I also found it a challenge writing Linux device drivers for chips with an external 32-bit bus and an internal 16-bit one. The 32-bit data had to be split in two: set the data pins for the first half, set a bit saying the data was available, wait for it to be read in, and repeat with the second half. Reads worked in reverse.
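The split itself can be sketched like this; the surrounding handshake (data pins, "available" bit, wait, repeat) lives in the comments because the actual register names and strobe mechanism are hardware-specific and hypothetical here:

```c
#include <assert.h>
#include <stdint.h>

/* Split a 32-bit bus word into the two 16-bit halves the chip's
 * internal bus can accept. In the driver, each half would be placed
 * on the data pins, a "data available" bit set, and the hardware
 * polled until it acknowledges, before sending the next half. */
static void split32(uint32_t value, uint16_t *lo, uint16_t *hi)
{
    *lo = (uint16_t)(value & 0xFFFFu);  /* first half onto the pins  */
    *hi = (uint16_t)(value >> 16);      /* second half after the ack */
}
```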
Also challenging was setting bits in a register for a chip with multiple commands per register. My program checked the input data for errors (e.g. trying to set three bits to 128), read the register, masked out the bits to be changed, changed them, wrote the value back, then read the register again, masked out the bits, and checked that they had actually been changed.
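That validate/read/modify/write/verify sequence might look roughly like this; `cmd_reg` and the field layout are made up for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical register shared by multiple commands: updates must be
 * masked read-modify-writes, and the write is verified by reading
 * back, since a stuck bit or bus glitch would otherwise go unnoticed. */
volatile uint8_t cmd_reg;

static bool update_field(uint8_t mask, uint8_t value)
{
    if (value & (uint8_t)~mask)              /* e.g. 128 into a 3-bit field */
        return false;
    uint8_t v = cmd_reg;                     /* read                 */
    v = (uint8_t)((v & (uint8_t)~mask) | value);  /* modify          */
    cmd_reg = v;                             /* write                */
    return (cmd_reg & mask) == value;        /* read back and verify */
}
```

Rejecting an out-of-range value up front keeps a bad input from silently corrupting the neighbouring command fields in the same register.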
Then there was the time the board manufacturer changed the memory map of the board without telling us. Boards made before April would boot, newer ones didn't.
I also found that some device drivers from SOC chip manufacturers had to be debugged before I could use them.
raverbashing | 9 years ago
jononor | 9 years ago
I've successfully made code that worked just fine with both the online mbed IDE and arm-none-eabi-gcc; I don't recall anything having to be done differently, just more to set up locally (linker scripts and so on).
fake-name | 9 years ago
IIRC, most of the stuff they've added are surrounding libraries and such.
It's basically a canonical example of why the Affero GPL exists.
sand500 | 9 years ago
janjongboom | 9 years ago
sand500 | 9 years ago
[deleted]
realo | 9 years ago
Simple challenge: the system must run 24/7, without interruption, for at least a year. Bug free.
Quite the tricky thing to achieve.
branchless | 9 years ago
It seems to be not very visible. I'm under the impression that some "not very visible" work is well paid and not going away any time soon. I'm thinking of some cobol or pascal developer called out of retirement at great expense.
Or is this like being at the edge of new/exotic web 2.0, where you are continually looking for the next job, re-learning tech over and over?
I know this is a big topic and perhaps there is no correct answer overall.
If anyone has any resources to job sites or further articles I'd love to read them.
taneq | 9 years ago
When you're developing a product, hardware iterations take most of the budget. As 'the software guy' your job is to be handed a piece of hardware and some vague requirements, and to make it work the way the end user (who you probably never get to meet) expects it to. The product will already be over budget and behind schedule, so you don't get funding and you're probably already late in delivering the product (no, your deadline does not move due to this).
It's hard to find embedded work because the people who 'need a software guy' for their hardware product don't know any software developers so you have to be lucky to meet potential employers.
That said, there is opportunity for some COBOL-style big bucks later on, when suddenly they need to make another production run of the product you worked on back in the day, and they want a few tweaks, and you're the only person on the planet who has the faintest clue how to build the software and program the hardware, so you get to pull your hair out all over again but this time for a more reasonable wage.
bsder | 9 years ago
The trick is breadth of knowledge. If you can just program a microcontroller, you've got a lot of competition. If you can program a microcontroller AND an FPGA, your competition drops by orders of magnitude (this is the biggest step and probably the easiest to tack on--Verilog isn't that hard, but you WILL foul up until you get race conditions and concurrency properly beaten into your head--do everything synchronously, and synchronizers are your friends). If you can program a microcontroller, program an FPGA, AND design the board, you are in rare company. If you are good at debugging these boards, people will worship you as a god. If you have domain knowledge on top of that, you should be starting a company.
> And is it relatively easy to get new work once you've broken into it?
Well, it's networking like anything else. If you've been doing it for 10 years, things seem to just drop into your lap. Otherwise, you have to beat the bushes.
However, if you do a good job, the good people around will notice quickly. And those good people are often at capacity so they will throw stuff over to you.
Obviously, being near a tech hub helps.
> I'm under the impression that some "not very visible" work is well paid and not going away any time soon.
The problem with "not very visible" is that it also means "executive level may not appreciate it". I've seen medical device companies trying very hard to get rid of the single person who actually understands their hardware.
> Or is this like being at the edge of new/exotic web 2.0 where you are continually looking for the next job, re-learning tech over and over.
No and yes. :)
ARM is dominating the embedded space currently. So, especially at the Cortex M end of the spectrum, the base programming environment looks pretty much the same. This hasn't changed for quite a lot of years.
In addition, 8-bit development is mostly going away. The delta between an 8-bit micro and a 32-bit micro is now so small that it makes no sense to use anything other than 32-bits unless you have a very specific use case.
For FPGA, the tools are similarly stable over long time frames.
The peripherals, on the other hand, completely differ from manufacturer to manufacturer. And have bugs from revision to revision. So, that is like learning stuff over and over :(
However, what the peripherals are supposed to DO is stable. Once you understand that, you will be very good at ferreting out the small differences and your life gets somewhat easier.
> If anyone has any resources to job sites or further articles I'd love to read them.
You can read a lot, but doing is better. Go get a Nordic BLE development kit for $39 and build something.
http://www.digikey.com/product-detail/en/nordic-semiconducto...
HeyLaughingBoy | 9 years ago
From what I could see, pay was on par with most enterprise development, and well above basic web stuff.
Like the other poster said, if all you do is code, then your position is pretty weak. I have an EE background and I'm pretty good with mechanical things, so I could debug the electronics and figure out issues with mechanisms, etc. (most of my programming career has involved writing code for things that move). My skillset pretty much goes from low-level assembly code up to writing simple web apps, so I was generally able to handle anything thrown at me. I'm also pretty good at creating documentation and understanding the business, and I built up my domain knowledge at every company I've worked for.
In this field, the more you know, the more valuable you become because it tends to require a wide range of expertise. IME, the people who have done well are always generalists.