My techniques are probably shaped somewhat by having started 30 years ago.
I rarely enter an interactive debugger. I have TONS of logging statements I can toggle. I make program execution as deterministic and reproducible as possible (for example, all random numbers are generated from random number generators that are passed around). When something goes wrong, I turn on (or add) logging and run it again. Look for odd stuff in the log files. If it doesn't make sense, add more logging. Repeat.
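A minimal Python sketch of the passed-around RNG idea (the function and names are mine, invented for illustration, not the commenter's code):

```python
import random

def shuffle_deck(rng: random.Random) -> list:
    # All randomness comes in through the argument; the function
    # never touches the global RNG, so a run is reproducible
    # from the seed alone.
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# Same seed, same run: a logged seed is enough to replay a bug.
run_a = shuffle_deck(random.Random(42))
run_b = shuffle_deck(random.Random(42))
```

Any bug report can then include the seed, and the failing run replays exactly.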
I worked on a pretty large videogame in the 90s where /everything/ was reproducible from a "recording" of timestamped input that was automatically generated. The game crashed after half an hour of varied actions? No problem, just play the recording that was automatically saved and attached to the crash report. It was amazing how fast we fixed bugs that might otherwise take weeks to track down.
Was that game Quake 3 by any chance? After reading about how Quake 3's event system worked (http://fabiensanglard.net/quake3/), I started using those techniques not only in C++ game engines, but even in Python GUI applications and I'm now experimenting with it in Javascript with Redux. I'm a huge fan of that pattern. It takes a bit of work to set up, but it's magical when it works correctly.
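The record-and-replay pattern can be sketched roughly like this (a toy Python model with invented names; the real engines drive a full update loop from timestamped input events):

```python
class Game:
    # Toy deterministic game: the same event sequence always
    # produces the same final state.
    def __init__(self):
        self.x = 0
        self.recording = []  # timestamped input, saved automatically

    def apply(self, tick, event):
        self.recording.append((tick, event))
        if event == "LEFT":
            self.x -= 1
        elif event == "RIGHT":
            self.x += 1

def replay(recording):
    # Re-running the recording reproduces the crash-time state.
    g = Game()
    for tick, event in recording:
        g.apply(tick, event)
    return g

live = Game()
for tick, event in enumerate(["RIGHT", "RIGHT", "LEFT"]):
    live.apply(tick, event)
```

Attach `live.recording` to a crash report and `replay` walks straight back to the broken state.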
Trace is a powerful tool. I've shipped OSs with internal wraparound trace buffers that ran on a million machines for years - just so when I received a crashdump from the field I'd have something to sink my teeth into. Net cost: nearly zero. Net value: occasionally golden.
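A wraparound trace buffer is cheap to approximate; in Python a bounded deque does it (this is my sketch, not the OS mechanism described above):

```python
from collections import deque

# Fixed-size wraparound buffer: old entries are dropped, so the
# cost stays bounded no matter how long the program runs.
TRACE = deque(maxlen=4)

def trace(msg):
    TRACE.append(msg)

for i in range(10):
    trace(f"step {i}")
# After a crash, the buffer holds only the most recent events.
```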
Data point: The amount of time I've spent doing "debugging work" has gone down VASTLY since I've adopted TDD. Literally, every line of debug code you will ever write (whether it's a println buried in the code or something done in a REPL) is a potential unit test assertion.
What is a debug statement, anyway? You're checking the state at a point in the code. That's exactly what a unit test assertion does... except that a unit test calls a single method/function... meaning the code you're trying to debug needs to be split up and written in a way that is easily unit-testable, with few, if any, dependencies that aren't explicitly given to it... which makes it better code that is easier to reason about (and thus results in fewer bugs).
See where I'm going with this? TDD = say (mostly) goodbye to "debugging"
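To make the print-vs-assertion point concrete, a hedged Python example (the function is invented for illustration):

```python
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# Debug-statement version: print(apply_discount(100, 15)) and eyeball it.
# Unit-test version: the identical check, but kept and rerun forever.
def test_apply_discount():
    assert apply_discount(100, 15) == 85.0
    assert apply_discount(0, 50) == 0.0

test_apply_discount()
```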
The question is whether the additional time spent writing a test before any piece of code and refactoring each piece of code for each new test case (assuming you're doing TDD 'properly' and only coding the minimum to pass the extant tests), plus the time still spent debugging (because TDD doesn't completely eliminate it), adds up to less than the time you would spend debugging without TDD.
I could say "the time I spend debugging has dramatically decreased since I began proving each bit of code to be correct mathematically." But that tells me nothing about whether it is actually a better approach.
I suspect that's why you're getting downvoted: the comparison is naive. (Edit: Also responding to 'how do you debug' with 'I don't' probably doesn't help).
My personal anecdote - I don't spend much time debugging. I spend a lot of time thinking, a smaller amount coding, and a relatively small amount debugging. Spending, say, 20% extra time preventing bugs before they happen would not be cost effective for me.
I used to be a Visual Studio Debugger Wizard (BTW, it's an excellent debugger)... now I don't remember the last time I used a conditional breakpoint.
Working on a codebase designed, from the start, for testability changed everything, so I totally agree with your last sentence about TDD; although it took me nearly a year of practice before I could write solid unit tests (ones that wouldn't break every now and then because of some interface change), and I still find it hard to write them before the code being tested (though I find it harder, maybe impossible, to write a unit test for code written more than a month ago).
I still use cgdb from time to time, to quickly print a backtrace or the origin of an exception/segfault.
By the way, I have the feeling that language constructs like lazy arguments, RAII, scope(exit), concurrency, and exceptions, make following the flow of control less and less relevant. In the long term, some amount of emancipation from one's debugging tools might be strategic.
I don't see it. Let's say I 'know' that for all sane input my function should return a value between -1000 and 1000. I write some tests for this and they all pass. So far so good.
Now it's a week later and all of a sudden my function is returning -10e8. Where does TDD help me with debugging?
Sometimes I debug with printfs out a UART or USB if my system lets me; sometimes I'll create a log in memory if I can't printf or if the issue is timing-sensitive (which a lot of my work is).
Pen & paper end up being very useful too - often writing out what I'm thinking helps me figure out what's going wrong faster than poking around the code or circuit board.
About the worst I ever had to do was hooking up a logic analyzer to the address bus. (This was pre-instruction-cache days.) The software detected a certain error condition and wrote to an unused address decode, which triggered the logic analyzer. We scrolled back through the addresses to find out how we got there! (This may have been during the process of bringing up a board, so we may not have had a UART available to shove debug messages out.)
Hey, you should check out tools like Lauterbach's Power Trace (which seem to be missing from your list), they are quite awesome for debugging embedded SW (especially some seldom reproducible races).
Good debugging is a lot like good science. You start with a firm understanding of the system, form a hypothesis about what could be wrong with it, test your hypothesis, and then repeat. The stronger your initial understanding of the system, the faster you can debug things.
As a Python/JS developer a few print/console.log statements are usually all it takes for me to figure out what's wrong with something. For more thorny situations there's always PDB/chrome dev tools.
At the end of the day, the people who are the best at debugging things aren't that way because of the tools they use. They're the best because they can clearly visualize the system, how data is flowing through it, and where potential problems might arise. Intuition from dealing with other similar problems also helps.
I wrote a pretty long comment about the tools I use for debugging JS, but yeah, in the end the guy who's been here twice as long as me and wrote most of the system can usually debug things in a fraction of the time because he can guess what's causing a problem just from reading a description of the bug.
But I can't give that to people through a comment on HN, so I stuck to tools.
I'm the same, a few console.logs and I've usually found the problem. Meanwhile, others are attaching debuggers to step through code and try to understand it and I've already been done for 30 minutes and got a cup of tea and some crumpets.
Debugging for me is about using my brain to step through code, not some fancy IDE that stops me from thinking. It wasn't always so easy, though; the first step is to stop using big tools to help you.
In Python I use a small collection of built-ins frequently for debugging:
For any object foo:
type(foo) # show an object's type
dir(foo) # list an object's attributes, including methods
help(foo) # pydoc
id(foo) # show an object's identity (in CPython, its memory address)
foo.__dict__ # show an object's internal data structure
And I have a snippet triggered by "pdb<tab>" for pdb:
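The snippet body isn't shown above; the conventional pdb one-liner is presumably something like this (my guess at the standard idiom, not the commenter's actual snippet):

```python
def buggy(x):
    # Uncomment to stop here and poke at locals in the pdb prompt:
    # import pdb; pdb.set_trace()
    # Python 3.7+ spelling of the same thing:
    # breakpoint()
    return x * 2
```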
One minor benefit of using a "real debugger" is that it makes it easier to stumble across situations or flows that you didn't expect just from a direct-reading of the code.
Granted, often those moments are cases where the code is working correctly but you misunderstood or misremembered things, but the fact that you identified (and resolved) the disconnect is valuable, particularly if you're doing a deep-dive to figure out a nearby problem.
>... is a lot like good science. You start with a firm understanding of the system, form a hypothesis about what could be wrong with it, test your hypothesis, and then repeat. The stronger your initial understanding of the system, the faster you can debug things.
>They're the best because they can clearly visualize the system, how data is flowing through it, and where potential problems might arise
This also applies very well to appsec/vulnerability finding.
Most of my development is done in my own Lisp dialect, which is quite similar to Common Lisp. I use any of five different methods. (4) is the most interesting, and very useful. Does anyone else use it or something similar?
(1) Just add debug statements near where the bug is happening. These print a string, and the name and value of variables. Printed values in Lisp are human-readable, not just a pointer.
(2) Trace selected functions. This outputs the function name, arguments, and return value on function entry and exit.
(3) Use the virtual machine debugger. I can set breakpoints, display the stack, and trace VM instructions, but it's most useful for printing out disassembled compiled code.
(4) Browse data structures in Firefox. While my Lisp is running, a web browser runs in another thread, and every symbol has its own URL. Data structures are displayed as HTML tables.
(5) Unit tests. I've used these to debug complex algorithms, e.g. for event handling, and type inference.
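Method (4) has a rough Python analogue: render an object's fields as an HTML table (this sketch, with names invented here, only builds the markup; serving it over HTTP is omitted):

```python
from html import escape

def to_html_table(obj):
    # Render an object's attributes as rows of an HTML table,
    # the way the Lisp browser shows each symbol's structure.
    rows = "".join(
        f"<tr><td>{escape(k)}</td><td>{escape(repr(v))}</td></tr>"
        for k, v in vars(obj).items()
    )
    return f"<table>{rows}</table>"

class Point:
    def __init__(self):
        self.x, self.y = 3, 4

page = to_html_table(Point())
```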
I wrote a version of (4) a few weeks ago for convenience while doing some ClojureScript work. I suspect yours is more helpful to you because "knowing state" isn't as useful when you're trying hard to minimize your use of it.
I've never heard of viewing your data structures in a web browser, that's pretty wild. With something like Visual Studio, you can look at data structures in a tree view though.
I start by asking,
Did it used to work?
If so, when did it last work? What changed? When?
Does it work anywhere else?
And most importantly, can you reliably recreate the bug?
Only after I've grappled with these questions will I move onto log analysis, printfs, the debugger, data fuzzing, etc.
I can typically just add print statements and figure out the problem in less time than it would take to set up and attach a debugger. But occasionally, I will use the PyCharm debugger with Python. And even more occasionally, I'll use an assembly-level debugger (especially if I'm interested in the system calls and it is not convenient to attach a Python debugger).
I'm using debug logging (that isn't deleted) more and more as I code. It's useful not only for solving the current problem you're experiencing but also helps the next person understand what the code is and isn't doing, adding to its self-documenting nature.
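Permanent, toggleable debug logging might look like this in Python (the names are illustrative, not from the comment):

```python
import logging

log = logging.getLogger("orders")

def place_order(qty):
    # Left in permanently: it documents intent and can be switched
    # on in the field without touching the code.
    log.debug("place_order qty=%d", qty)
    if qty <= 0:
        log.debug("rejected: non-positive quantity")
        return False
    return True

# Silent by default; flip the level to see the whole trace:
logging.basicConfig(level=logging.DEBUG)
ok = place_order(5)
```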
Debuggers are great, but the knowledge gained by using them to solve a problem is completely lost once that close button has been pressed.
Also if I'm having to use a debugger to work out what's going on, usually it's a good sign my code is overly complicated...
It depends on the nature of the bug. If it's some behaviour that has changed, then I try to isolate it to a test case, if one doesn't already exist. I can then use git to automatically bisect to the offending commit, which, nine times out of ten, makes it immediately clear where the bug is. The net result is the bug being fixed and a useful regression test to make sure it stays squashed. And I got the VCS to do most of the work for me.
If it's something I think is trivial I'll just use a few print statements. This is 90% of the time.
If I end up with too many print statements, then I step into the debugger. Others scoff at debuggers, which is odd, because they can be powerful tools. Maybe you only use one every couple of months, but they can be very helpful. When you're waist deep in the stack, dealing with action at a distance, trying to fix something in poorly factored code, wanting to watch a variable, suspecting some weird timing issue, or needing to step down into libraries beyond your control, a debugger can help.
Don't think of the debugger as a debugger, think of it as a REPL. You just happen to be using the REPL with buggy code.
If I can narrow it down to what line, or even file, is throwing an error I just take a few minutes, read all the code and all the code of branching methods, and then can narrow it down to a single line.
From there, it's a matter of actually developing a fix. As you work with more and more languages, you will notice that most compilers are far from a helping hand here.
This only works, and I will stress this, for programs under 1 million lines. Past that mark, you need to do some extra steps.
When I debug million-line projects, I narrow the problem down to a file. I pull the code out of the file, and I mock all of the methods that file calls (this gets REALLY hard with networked code, trust me). From this small subset, I slowly break the functionality of each external method until I reproduce the error seen in the main project. From that I know which method(s) are actually causing the problem.
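The mock-the-dependencies step can be sketched with Python's unittest.mock (the class and function names here are hypothetical):

```python
from unittest import mock

class PriceClient:
    # Stands in for the networked dependency that's hard to test.
    def fetch(self, symbol):
        raise RuntimeError("real network call")

def report(client, symbol):
    return f"{symbol}: {client.fetch(symbol):.2f}"

# First make the stand-in behave, then "slowly break" it until the
# failure from the big project reappears in isolation.
fake = mock.Mock()
fake.fetch.return_value = 101.5
good = report(fake, "ACME")

fake.fetch.side_effect = TimeoutError
try:
    report(fake, "ACME")
    reproduced = False
except TimeoutError:
    reproduced = True
```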
But, there is one thing that this makes an assumption about: your compiler is working.
Put blatantly, they're crap.
Usually they won't show you the correct file causing the error, or they won't generate a helpful message. Runtime errors are even worse.
The best thing to do is avoid making the tricky errors in the first place: write unit tests, use fuzz testing, and maintain well-made documentation.
Documentation alone, if it details all of the possible input and output states of a function, will save you days on some bugs.
In Java, the @Nullable annotation is a godsend. Use these features; they WILL help.
If you do tests, fuzzing, and documentation, using your brain (plus a few things that make your brain's job easier) will make you faster at debugging than any debugger, including your bud's GDB/DDD setup.
I've done most of my debugging on dynamic languages, where you have a lot of power in the runtime, so my style is based on that. You can perform superpowered feats of uber debugging this way. These are generally also available on other environments, but the tools are less flexible, so they are much harder to pull off, much less inventing a new debugging method on the fly.
So imagine doing things like narrowing down execution to just before and just after your error, then taking snapshots of the runtime memory and diffing the objects. Or a conditional breakpoint that changes the class of a particular instance to a special debug class.
You can do many of the same things in compiled languages, I've since discovered, if you have a decent incremental compile set up, and you use some tactical thinking. But the environment always seems like it's trying to get in your way. (As opposed to a good dynamic environment, which seems more like an eager golden retriever wanting to play more fetch.)
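The snapshot-and-diff trick is easy to sketch in Python (names invented for illustration):

```python
import copy

def diff_state(before, after):
    # Report every attribute whose value changed between snapshots.
    return {k: (before.get(k), after.get(k))
            for k in before.keys() | after.keys()
            if before.get(k) != after.get(k)}

class Account:
    def __init__(self):
        self.balance = 100
        self.owner = "alice"

acct = Account()
snapshot = copy.deepcopy(vars(acct))  # just before the suspect code
acct.balance -= 250                   # the suspect code runs
changes = diff_state(snapshot, vars(acct))
```

Only the mutated field shows up in the diff, which points straight at what the suspect code actually touched.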
If all goes well, I work through a process like this. I reach for this tool when the bug in question seems to have its source in faulty logic, as opposed to say, resource consumption.
1. Reproduce the bug as consistently as possible.
2. Find a pivot for the bug. Whether this is a previous commit where the bug did not occur, or a piece of code that can be commented out to clear the bug, I need to find some kind of on/off switch for the behavior.
3. I comment out code / insert break points / use git bisect to flip the switch on and off, performing a kind of binary search to narrow the scope of the issue until I have it down to one line or method.
4. Once the source is found, read the surrounding source code to gain context for the error and reason toward a solution.
Of course, this works best if the bug originates from a single source. Sometimes this is not the case, and there are multiple variables interacting to create the undesirable behavior. That’s when things get really fun :)
In Rust, which I mainly work in these days: Either println using the Debug trait to pretty-print data structures or LLDB. Which is easier kind of depends on the situation, at least for me.
I still use a lot of printfs, but I make heavy use of the debugger. In general, my strategy for difficult bugs is to find a point A where you know the code is working correctly, find point B where you know it's broken and can catch it in the debugger or log, and step forwards from A and backwards from B until you find the bug in the middle.
When debugging difficult, intermittent problems (e.g. non-repro crashes) my strategy is to keep a log of when it occurs, add lots of diagnostics and asserts around where I think the problem is, until hopefully I can catch it in the debugger or notice a pattern.
90% of the work of debugging is creating a quickly reproducible test case. Once you have that you can usually solve it.
Don't see the top comments mentioning that, so I will chip in: I always, whenever possible, try to reproduce the bug in tests first, before launching debugger / adding some statements, etc.
Being able to quickly reproduce the bug time and time again makes a big difference. Some permanent verification that it's actually fixed (at least in the given case) at the end of the session is also nice and adds a lot when doing a major refactoring or something similar. Especially for bugs related to the domain specific requirements, rather than the technical ones.
I actually use that as an interview question: "A user reports a bug. What do you do?". Ideally they mention "writing a failing test" somewhere before "fire up the debugger".
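A minimal Python version of "failing test first" (the function and the bug are invented for illustration):

```python
def normalize_email(raw):
    # The fix; before it, this was just raw.lower() and the
    # user-reported input below failed.
    return raw.strip().lower()

# Step one after the bug report: encode the report as a test.
def test_user_reported_case():
    assert normalize_email("  Bob@Example.COM ") == "bob@example.com"

test_user_reported_case()
```

The test fails against the buggy version, passes once fixed, and stays behind as permanent verification through later refactorings.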
It depends strongly on the circumstances of course.
As a Java developer I rely heavily on the debugger in Eclipse and on grep to search through files. I first try to get a solid understanding of what the program is supposed to do, reproduce the bug, and then step through the code with a debugger to understand why the bug is happening. Other times I just try to find where the bug is happening and work backwards to the root cause just by reading the code. This works most of the time, but as a junior developer, most of the issues I have to debug are not too complex.
One very important "soft skill" aspect to debugging code is time and ego management. There are times when all I want to do is "solve the problem" and if it means losing 4 hours of sleep over it, my obsession will let it happen. But I'm starting to learn there should be thresholds before letting the problem go and asking for help. Sometimes the fastest way to solving a problem isn't brute forcing it, but getting away from the problem, letting it simmer, or asking someone more experienced.
Another aspect of ego management is constantly keeping in mind that you could be wrong, while trying to track down some tricky problem.
I remember quite a few times sitting next to someone trying to debug something, asking something like: "So are we sure that parameter there is correct?" ... they'll say "Oh yeah, that's definitely not the problem" ... fifteen minutes later, after bashing our heads on the desk a bit, we actually check that value: "Whoa, what?! That's impossible!"
I work mainly with JavaScript. I like to log everything, so I rarely need to use debuggers. When I do, the Chrome developer tools are all you need. You can even use them to debug Node.
It depends mostly on the type of issue; I don't believe there's a one-size-fits-all solution. For timing-sensitive issues or core dumps, it's typically low-level tools like gdb, trace, etc. For memory issues, obviously tools like valgrind help.
In the olden days, when I used IDEs like Visual Studio or NetBeans, I'd often leverage their native debuggers to set watchpoints and step through code. But those days are over; now I mostly use interpreted languages like Python and Ruby, and compiled languages like Go (highly recommended). Print statements are the way to go, especially if you're writing server-side code (RESTful APIs, websockets, etc.); you'll want the log information, as you won't be able to attach a debugger to a production system.
A random thought based on this topic: if debug/log/print statements were detailed enough, one could actually take a log file, write some code to parse it, and transform it into test cases in your favorite test framework, which could save time writing them. For production bugs, you could treat the log output as reproducer steps and generate test cases to cover them.
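The parse-logs-into-tests idea might look like this in Python, assuming a log format invented here purely for illustration:

```python
import re

def add(a, b):
    return a + b

LOG = """\
DEBUG call add(2, 3) -> 5
DEBUG call add(10, -4) -> 6
"""

# Turn each sufficiently detailed log line into a regression case.
PATTERN = re.compile(r"call add\((-?\d+), (-?\d+)\) -> (-?\d+)")
cases = [tuple(map(int, m.groups())) for m in PATTERN.finditer(LOG)]

for a, b, expected in cases:
    assert add(a, b) == expected
```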
And I really liked the comment about TDD and, more importantly, unit testing; it's critical and helps developers better organize their code.
I code in Python and use a step-through debugger all the time. It's especially helpful when it's other people's code I am debugging, or when I am digging deep into Django to work out why something isn't working. And it's possible to set up debugging on a remote server (though I personally never managed to get that working).
I've never read about specific methods for doing this "by the book". For Node.js, I'm usually doing:
1. Set a breakpoint in my Node.js code 2-3 lines before the exception that I got.
2. Run in debug mode.
3. Do what I need in order to reach the breakpoint.
4. Analyze the variables inside (via watch) or run code in that function (via the console).
Helps a lot more than `console.log(some_var_before_exception);` :D
Would you be able to debug something without these tools?
Do you think these tools potentially abstract some of the work away from you?
Genuine questions, just interested.
I use print statements > 50% of the time, but certain problems are better suited to the debugger. Especially if it's code that I did not write.
That's a great analogy.
"You just happen to be using the REPL with buggy code."
Despite the name "debugger", it's not just for buggy code. A debugger can be a very useful tool for understanding how someone else's code works.