I'm curious how you write software that takes a day to run. Today I can write line by line, and because computers are so fast, even "long" tasks complete (or crash) fast enough for me to iterate. I assume you had test cases, but I still want to know how it worked.
Do you mean how you write something that takes that long at all? By having data that takes that long to process.
If you mean the actual accomplishment of it in general:
1. Get a subset of data that lets you create a representative set of test cases. You can validate your code in minutes, not hours or days. Then you run it on the full thing once you've seen that the 5-minute job works as expected.
2. Sacrifice a goat, pray to hopefully the right god, and hope it wasn't their son from a goat mother that you sacrificed.
3. Spend a lot more time thinking before doing.
4. Read a lot.
(1) works if you can run the task on your own system or have access to a system to run it on. This is the best option, or some variation on it.
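To make (1) concrete, here's a rough sketch in Python of the subset-first workflow; `process` and the data are stand-ins for whatever the real day-long job does:

```python
import random

def process(records):
    # Placeholder for the real long-running computation, applied per record.
    return [r * 2 for r in records]

def representative_subset(records, n, seed=0):
    # A random sample is more likely to include rare cases than "first n rows".
    # Seeded so the validation run is reproducible.
    rng = random.Random(seed)
    return rng.sample(records, min(n, len(records)))

full = list(range(1_000_000))       # stand-in for the full dataset
subset = representative_subset(full, 1000)

# Validate the 5-minute subset run before committing to the full job.
small_result = process(subset)
assert len(small_result) == len(subset)
assert all(out == inp * 2 for inp, out in zip(subset, small_result))
```

The important part is that the subset is actually representative: if the full data has edge cases (nulls, huge records, weird encodings), a stratified sample that deliberately includes them beats a purely random one.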
(2) is unfortunately what happens a lot; people usually sacrifice the wrong goat and end up cursed instead of blessed.
(3) and (4) are the next best options after (1), and should be done alongside (1) anyway. The system I'm on now is legacy and being upgraded. The test bed is available to me maybe an hour a week right now because everyone needs it and everything is urgent, apparently. I spent a lot of time reading the code and the documentation, and thinking about how to structure my solution to a problem our users have. My first pass validated the solution concept: roughly 3 days of thinking hard about the problem and 5 minutes of coding. The second pass added error handling for robustness; it took several more days of reading and thinking to identify what error cases could even occur, and two hours of coding.
Yeah, if I could run it locally I probably could have finished it all in those first 3 days (honestly my 5 minutes of coding was figured out by the end of day 1, but I wasn't certain since I was new to the codebase). But when that's not an option, you spend more time thinking about the problem you're solving and the problems your solution will generate so you solve it before you run it in the first place.
Jtsummers|1 year ago