I was working on an old, old, old "ERP" system written in D3 PICK: a database, programming language, and OS all in one, with roots in tracking military helicopter parts in the 1960s. I was working on it in the mid-2000s.
It had SQL-like syntax for manipulating data, but it was interactive. So you would SELECT the rows from the table that you wanted, and those rows became part of your state. You would then do UPDATE or DELETE without any kind of WHERE, because the state carried your filter from the previous SELECT.
It had a fun quirk though - if your SELECT matched no rows, the state would be empty. So SELECT foo WHERE 1=2 would select nothing.
UPDATE and DELETE are perfectly valid actions even without a state...
Working late one night, I ran a SELECT STKM WHERE something that matched nothing. Then, before I realised my state had matched no rows, I followed up with DELETE STKM.
Yep, the entire Stock Movements table for the last 20+ years of business was gone.
The nightly backup had not run, and I didn't want to lose an entire day of processing to roll back to the previous night.
I spent the entire night writing a program to recreate that data based on invoices, purchase orders, stocktake data, etc. I was able to recreate every record and got home about 9am. Lots of lessons learnt that night.
> if your SELECT matched no rows, the state would be empty
> UPDATE and DELETE are perfectly valid actions even without a state
Some may call this a fun quirk :) but I'd call it a horrible mistake in the design of the system! It should have been conceptually obvious to the designer that an empty set of rows is a perfectly valid state and is fundamentally different from "no state".
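The difference is easy to model. A toy sketch of the footgun (all names illustrative), where an empty match collapses into "no state" instead of staying an empty selection:

```python
# Hypothetical model of a D3 PICK-style "active list"; names are made up.
class Table:
    def __init__(self, rows):
        self.rows = dict(rows)   # id -> record
        self.active = None       # the "active list"; None means no state

    def select(self, predicate):
        matched = [rid for rid, rec in self.rows.items() if predicate(rec)]
        # the quirk: an empty match leaves you with *no* state,
        # not with an empty selection
        self.active = matched or None

    def delete(self):
        if self.active is None:
            self.rows.clear()    # no state: DELETE applies to every row
        else:
            for rid in self.active:
                del self.rows[rid]
        self.active = None

t = Table({1: "a", 2: "b"})
t.select(lambda rec: False)  # matches nothing, so the state collapses to None
t.delete()                   # and DELETE wipes the whole table
```

The safer design would keep `self.active = matched` unconditionally, so an empty selection deletes nothing.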
>It's a database, programming language and OS all in one with roots in tracking military helicopter parts in the 1960's. I was working on it in the mid-2000s.
A friend of mine worked for a company repairing transmissions on drilling platforms. To say it was a specialty shop is an understatement. However, they had been in business for 40 years. While not a computer system from the 60s, theirs was a custom-written solution from the 80s running on a DOS 286 PC. It was stuck on the top shelf of a closet with a UPS. The developer had long been "permanently" unavailable, for at least a decade, and they were too scared of losing the data to have it rebuilt. I asked how scared, and the answer was "it's the only computer in the building with a UPS". I'm laughing at the insanity just retelling the story.
When I first learned SQL on an Oracle CLI client, I was told to type begin; before any other command. For a brief while, until I learned about transactions, I assumed that it was part of the CLI startup :)
I deal with D3 PICK, and the fear I have of doing SELECT BP and CATALOG BP is insane. The entire system is a mess, so wiping the MD would be catastrophic. :)
I wrap all of my production manipulations in a transaction, and commit only if the results are as expected. Yes, it may take locks that block customer-facing transactions, so I have SELECTs ready to go inside the transaction to minimise this.
15 years and counting since wiping out a large production table and taking a day to restore from backup.
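That workflow can be sketched with any client; here with Python's sqlite3 in autocommit mode, using a made-up table and predicate:

```python
import sqlite3

# autocommit mode, so BEGIN/COMMIT/ROLLBACK are under our explicit control
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE stock (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?)", [(1, 5), (2, 0), (3, 0)])

cur = conn.cursor()
cur.execute("BEGIN")
cur.execute("DELETE FROM stock WHERE qty = 0")
if cur.rowcount == 2:   # did we touch exactly the rows we expected?
    conn.commit()
else:
    conn.rollback()     # anything surprising, and nothing is lost
```

The key habit is checking the affected row count (or running verification SELECTs) while the transaction is still open.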
"All of your colleagues have done something dumb. Don't be afraid to tell us when you make a mistake. We all remember our first screw up and will be happy to help."
Never have truer words been spoken.
As I tell all the new juniors at work doing sysadmin type tasks, everyone has deleted the production database at least once. Mistakes will always happen, it's how you deal with them that defines how good you are at the end of the day.
It’s also a measure of how well management does in response. We were transferring a software production repository from one machine to another (Lord knows why) when a junior admin I was supposed to supervise arrived a half hour early and started the operation without me. He got the source and destination arguments reversed in the file transfer; we were using dd for that one part of it.
Management reacted pretty well. They assured both of us that while we made the mistake, it was not our fault that data was lost: the problem was that backups were not being checked, which caused us to lose the resulting three days (120 developer days) of work. The manager in charge of the folks doing the backup got taken to task - but nobody else did.
> Don't be afraid to tell us when you make a mistake. We all remember our first screw up and will be happy to help.
This is very much dependent on the circumstances - sometimes people won't be supportive or encouraging, but cold at best and toxic (rude, making fun of mistakes, letting their egos run wild) at worst. This is more likely to happen in some work cultures/companies than others, and there can also be individuals that are allowed to persist with their problematic conduct in otherwise okay environments.
If you are ever in such environments, acknowledge the fact, possibly push back against stuff like that and definitely be on the lookout for alternatives, where you'd be able to prosper.
Thankfully, my personal experiences have mostly been okay, but I've definitely seen both attitudes and choices that can make everyone's lives worse, to the point where I wrote the satirical article "The Unethical Developer's Guide to Personal Success": https://blog.kronis.dev/articles/the-unethical-developers-gu...
At a previous place I worked, whenever anyone was working on the CLI (e.g. in psql or similar), they'd always use these two steps, either of which would provide adequate protection:
1. Start a transaction before even thinking of writing the delete/update/etc (BEGIN; ...)
2. Always write the WHERE query out first, THEN go back to the start of the line and fill out the DELETE/UPDATE/etc.
It worked well, and it's a habit I've since tried to keep on doing myself as well.
Once I wanted to do `rm -fr *~` to delete backup files, but the `~` key didn't register... so what actually ran was `rm -fr *`.
Now I have learnt to instinctively stop before doing anything destructive and double-check and double-check again! This also applies to SQL `DELETE` and `UPDATE`!
I know that `-r` was not necessary, but hey, that was a biiiig mistake of mine!
Same thing on a Unix/Linux level: when using find, I always run it first with -print until I see the files I want. Only then do I add the actual action I want.
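That habit looks like this for a backup-file cleanup like the `rm *~` one above (pattern illustrative):

```shell
# dry run: just list the backup files that would be removed
find . -name '*~' -type f -print

# only once the list looks right, swap -print for the destructive action
find . -name '*~' -type f -delete
```

The command line stays identical except for the final action, which keeps the reviewed filter and the executed filter in sync.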
I did something worse many years ago: I was working for a regional ISP and, during a major incident, I had to reroute traffic through a different path. Under big pressure, I made the infamous Cisco mistake: "switchport trunk allowed vlan 50" instead of "switchport trunk allowed vlan add 50", and I locked out myself and all of our broadband customers. We had to call a DC technician and ask him to share a console through a local console server.
Lesson learned: even if you are under big pressure, take your time to plan and review the modification.
15 minutes can save hours.
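For reference, the two IOS lines differ by a single keyword (VLAN number illustrative):

```
! replaces the entire allowed list with VLAN 50 only - cuts off every other VLAN:
switchport trunk allowed vlan 50

! appends VLAN 50 to the existing allowed list:
switchport trunk allowed vlan add 50
```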
Early on, when I'd first started making the transition from a pure developer role working only on product to a platform role running the development environment, I was encountering problems with build scripts on CI servers leaving behind a bunch of dead symlinks. Tired of tracking them down manually, I wrote a nice script that automatically found all dead symlinks and deleted them.
It turned out, for some arcane reason I still don't understand, our production instance of Artifactory was running on top of Docker Compose with host path volume mounts, and somehow, symlinks that were not valid from the perspective of the host actually were valid from inside the container, and doing this on all of our servers broke Artifactory. For some even stupider reason, we weren't doing full filesystem-level snapshots at any regular interval (which we started doing after this), so instead I needed to enlist the help of the classic wizard ninja guy who had been acting as a mostly unsupervised one-man team for the past six years who had hacked all of this mess together, documented none of it, and was the only person on the planet who knew how to reconstruct everything.
This was probably still only the second-stupidest full on-prem lab outage I remember, behind the time the whole network stopped working and the only person who had been around long enough remembered they had trialed a temporary demo hardware firewall years earlier, management abandoned the evaluation effort, and it somehow remained there as the production firewall for years without ever being updated before finally breaking.
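For what it's worth, GNU find can do the sweep itself; `-xtype l` matches symlinks whose target does not resolve (with the story's caveat that "dead from the host's perspective" is not always dead):

```shell
# list symlinks whose targets do not resolve (a GNU find extension)
find . -xtype l -print

# the destructive version is one flag away - only after reviewing the list:
# find . -xtype l -delete
```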
A few months into my first job out of college, I brought down the main production server in the middle of the workday. It took us about an hour to recover. Afterward, I was very embarrassed and apologetic, but my boss just shrugged and said:
"You're not a real technology worker until you've brought the company down. Welcome."
Might not be the best words to live by, but it was exactly what I needed to hear at that time early in my career.
Back in the 90s I remember a work colleague asking "can you roll back a DROP TABLE?", to which I replied "no", and all the blood drained from his face in seconds. It's one of those things you've heard happens to people, but until you see it, you can't quite believe it.
>_< Something similar happened to me when I did an rm -Rf * in the old pictures directory on my system... Well, it wasn't the old pictures directory. It was the backup directory with all the pictures from my older phone!! To say that the blood drained from my face would be an understatement. Thankfully I was able to recover most of it.
The first thing I did after the recovery marathon was to alias rm so that it instead works by moving stuff to /tmp.
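One hedge worth noting: aliasing `rm` itself can surprise scripts and fail you on machines without the alias, so the same idea also works as a separate function. A minimal sketch (names made up):

```shell
# move targets into a per-user "trash" directory under /tmp instead of unlinking
trash() {
    local dir="/tmp/trash-${USER:-unknown}"
    mkdir -p "$dir"
    mv -- "$@" "$dir"/
}
```

The `--` guards against filenames that start with a dash; /tmp is usually cleared on reboot, so it behaves like a short-lived recycle bin.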
Since this seems to be such a common occurrence, why haven't databases evolved to maintain some limited form of history that can be rolled back? I have no idea of the complexity, so forgive me if that sounds daft.
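For what it's worth, some engines have grown exactly this: point-in-time recovery replays the write-ahead log up to any chosen moment, and system-versioned ("temporal") tables keep superseded row versions queryable. A sketch in MariaDB-style syntax (table name and timestamp illustrative):

```sql
CREATE TABLE stock (id INT PRIMARY KEY, qty INT) WITH SYSTEM VERSIONING;

UPDATE stock SET qty = 0;  -- oops, no WHERE clause

-- the pre-update version of every row is still queryable:
SELECT * FROM stock FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-01 10:00:00';
```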
My chosen database explorer is DBeaver. Horrible name, but a great app.
You can set colours for local/test/prod servers, and a red-coloured tab will scream at you to be cautious. With the red colour set, every edit will pop up an "are you sure?" question, and autocommit is off.
I sorta stopped making unrecoverable mistakes.
Hop in the test database, run some SQL, update a bazillion records, run some selects to verify, open the test app and verify there. All good! Someone else checks… LGTM.
So I do the same thing in prod. Make changes, validate in the DB, looks good. Open the prod app… down. Requests are hanging.
Our software is shit so I’m not surprised. I start frantically digging through trying to figure out where and why it’s hanging. Eventually chase it down to some lock wait timeouts in SQL. Start looking at the database… no queries seem to be running against the table in question. Eventually dig a little deeper and find… the call was coming from inside the house! The locks are held by my connection. Go hit “commit” in dbeaver and prod comes right back up.
And that’s the boring story of the day I learned that DBeaver used transactions on prod connections by default.
I just checked it out, and it looks promising.
Con: There are too many licensing options. I'm not even sure which one I would need.
Pro: They'll send out an invoice, like a real company. I work for a billion-dollar healthcare company, and if your company only takes PayPal, or Venmo, or something else that makes it look like two guys in a basement, it'll never get through our purchase approval process. Big companies do business with other big companies. Startups have to learn to interface with businesses on a business level if they ever want to grow out of being a startup.
My only complaint about DBeaver is how powerful it is: I feel like I have only ever scratched 1% of the functionality it offers. I need a tutorial class on all of the goodies the platform can do.
All of the tutorials I seem to find online are basic level and/or mixed with introducing SQL.
I'm using it on a Mac, and it seems to want to update every time I turn it on... I get that having timely updates is great, but the way to update the app on a Mac is:
click OK on the update dialog, wait until the download finishes, shut down the app, drag the new app over to Applications, click Replace, wait for the actual moving dialog to pop up, wait for it to finish, minimize all your windows so you can see the desktop, unmount the DBeaver image, and click DBeaver again
and you can get back to work! Now if only I could remember all the details that were in my head when I opened the app in the first place...
DBeaver team: can you do what Paint.NET did, please? That is, only bother me about updates as I'm turning the app off, not on.
My manager had got me to look at backups... but on the cheap.
I decided on Bacula: I had the clients installed on all the computers in the office, and it worked for some small tests.
My manager decided we would try this with a USB drive attached to one of the servers (somehow this didn't seem like a bad idea).
In the morning he sent me, very uncaffeinated, to the other site: an unmanned basement office with the servers.
Being uncaffeinated, I forgot the door password and set off the alarm.
I had to go into the office and phone him with the alarm going to get the code to turn the alarm off.
OK, that was stressful but sorted out at least.
I plugged in the hard drive to the selected server and headed back.
Once I got back, it turned out all the websites on that server had gone down: trying to push all the backups onto this poor USB hard drive had overwhelmed the IO on that era's Linux server, and the poor thing just froze.
Fairly soon after I was let go, and joined my friends at a much more fun company making mobile games.
It's the responsibility of the technical person to uphold engineering ethics, especially in the face of potentially inadequate recovery and security solutions.
I was once let go from a big-name university for refusing to weaken and rush changes to a payment processing network (PCI-DSS) when there was "no time" to review them in detail. That's a future-FBI-press-conference sort of thing when it all comes crashing down. Not long after, the SSNs, DOBs, and deets of every employee were stolen from a "rogue" laptop taken by a consultant, likely to be sold on carder and identity-theft forums, because of an utter failure of data protection processes. That place was a shitshow because they didn't have the professional ethics or leadership backbone to do what was prudent and necessary.
Yeah, for a surprisingly long time, Linux had a big write-cache problem. You were okay if you used all fast-writing devices. You were okay if you used all slow-writing devices. But if you mixed them, the slow writes could totally fill the write cache and starve everything else.
Not only could a USB (1?) drive cause it, but so could a CIFS transfer over a 100 Mbps link.
My first sysadmin job was at a call center, where the reps all used the same directory for all the users. And I'm working tickets to delete old user accounts...
The old grey-haired sysadmin backs up the directory first so he can instantly restore it. Seems this happens all the time.
When you really mess up a call center, it's called a gopher event: the entire floor, having suddenly lost their audio, stands up in their cubicles and looks around. Been there, caused that.
One day my colleague was wondering whether RegEdit used some private API for renaming keys or just copied + deleted them. "Try a rename with a big tree, see if it's still instantaneous" I helpfully proposed. But what's a really big tree? How about System\Windows? The rename completed instantaneously - "told you it's a private API" I said happily, just as the machine crashed in twenty ways at once.
We had a Redis instance holding sessions. Early on, someone decided every write to Redis should also cause a write to S3 as a backup. My first task was to get rid of this four-digits-a-month extra cost in PUT requests. I decided to instead write all changed session objects into a set keyed by half-hour timestamps, and then archive only those sessions every 30 minutes. Unfortunately, I initially used KEYS to find the set corresponding to my half-hour stamp, not having read up on exactly what it does. That's not exactly advisable on a Redis instance with a million or so objects. A later version of the archiver wrote the name of the last emptied set to a stable key, and then checked the set keys between then and now instead.
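The half-hour bucket is computable directly, which is what makes the "stable key" fix work: the archiver can name the set it wants instead of scanning for it with KEYS (which blocks Redis while it walks the whole keyspace). A sketch, with a made-up key prefix:

```python
import time

def halfhour_bucket(ts=None):
    """Name of the Redis set for the half-hour window containing `ts`."""
    ts = int(time.time() if ts is None else ts)
    # round down to the nearest 1800-second (30-minute) boundary
    return f"changed-sessions:{ts - ts % 1800}"

# the archiver computes the previous window's key arithmetically,
# so no keyspace scan is ever needed
```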
1. Write your WHERE clause first
2. Return to the beginning of the line to finish writing the statement
3. Check your statement
4. If it looks good, then -- and only then -- add your closing semicolon
Having said that: once, during my second week at a new company, I plugged an Ethernet cable into an APC UPS so I could set up networking on it. It shut down production. Why? APC makes (for that model at least) proprietary Ethernet cables for networking, and if you plug in a regular cable it does an auto-shutdown... an engineer's attempt at marketing, perhaps!? I did RTFM before, and again after out of confusion, and there was no mention of this.
I will typically do something like this when writing SQL in an interactive tool:
SELECT * from table
WHERE id = 12345
and once that's giving me the selection I want, insert the update statement into the middle:
SELECT * from table
-- UPDATE table set c1 = v1, c2 = v2
WHERE id = 12345
Then, accidentally running the entire buffer doesn't do anything destructive, but selecting the query from update to the end of the statement lets me do the update. (It's still imperfect, because selecting only the update line will still be destructive.)
(Most of the RDBMS tools that I've used will happily ignore the lack of a closing semicolon, so that will not save you in the single-statement case.)
I wish a DELETE or UPDATE only affected a single row by default (and perhaps even wouldn't commit if it would hit multiple rows), unless a keyword for MANY or something similar was added.
Aka DELETE ALL where x == y, or DELETE MANY where x == y; or perhaps you'd need an explicit limit for it to affect more than one row, so DELETE where x == y LIMIT ALL.
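For what it's worth, MySQL and MariaDB (though not standard SQL) already allow a row cap on DELETE, which gets part of the way there; table name illustrative:

```sql
-- deletes at most one row, even if the WHERE clause accidentally matches more
DELETE FROM stock_movements WHERE id = 42 LIMIT 1;
```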
The way a DELETE without a WHERE clause gets the whole table makes sense conceptually, but it always gives me the creepy crawlies when typing the statement. While I normally dislike superfluous syntax, I would welcome changing the SQL grammar to enforce a WHERE clause on DELETE; that is, deleting all rows in a table would require "delete from table where true;".
A charming story which almost everyone can relate to! Only one of your rules will ever save you: "Don't run updates directly in the database console". Whole methodologies are crafted around this rule/principle of not doing development in the production environment.
Ha, no, that was a far more mundane issue. The CMS I'm using requires double opt-in to subscribe, meaning you need to enter your email address and click the confirmation link.
It also apparently requires double email configuration: there are two places where you can configure your mailer. I had only set it up in one place, so the confirmations never got sent.
Decades ago, my ex-colleague was supposed to enter a command handwritten on a piece of paper, "rm -rf /var/log/blah/", which she typed in as "rm -rf /var / log/blah / ". Everyone knows it's awesome to insert whitespace to increase legibility. It was a production database server.
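The shell's view of that line explains the damage: each whitespace-separated word became its own argument, which printf can make visible:

```shell
# each whitespace-separated word becomes a separate argument -
# so rm saw /var, /, log/blah and / as four independent targets
printf '<%s>\n' /var / log/blah /
```

That second argument, a bare `/`, is what made the typo catastrophic.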
Roughly 10 years ago, I was working for a startup that offered a live conversational video service where you could also have hundreds (or eventually, thousands) of near-live watchers - with recording and later playback. The founder pitched the service to news orgs and celebrities. Anderson Cooper had a regular "show" there for a while, and we had a number of interviews with mostly 2nd-tier celebrities.
When the service started, they made the decision to not actually delete any content (delete just set a flag which disabled the content but didn't actually remove it).
Fast forward a year or so, and it became clear that a real delete was needed. So they had a junior engineer write up a sort of delayed sweep: delete all the videos with the delete flag set. But then, for some reason, they decided to put the implementation behind a delay, something like "actually delete all soft-deleted videos, but don't start doing it until 30 days from now". However, unbeknownst to the team, there was a bug in the implementation that deleted everything, regardless of whether the 'delete' flag was set.
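A sketch of the rule the sweep presumably wanted, versus what shipped (field names made up):

```python
GRACE_SECONDS = 30 * 86400  # the 30-day delay from the story

def should_purge(video, now):
    # intended: only videos already soft-deleted, and only after the grace period
    return video["deleted"] and now - video["deleted_at"] >= GRACE_SECONDS

# the shipped version effectively dropped the video["deleted"] check,
# so once the delay elapsed, every video qualified
```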
So one night, roughly a month later, all the content started disappearing from the site. One guy heroically tried to stop the process, but I think he was too late. The engineering director happened to be on a vacation down in South America somewhere and I think the founder fired him in a fit of pique. I managed to reclaim a small bit of content (basically the videos that were cached on the actual recording servers before they were uploaded to S3).
You can imagine the technical over-reaction:
* Delete switched back to a soft delete
* Turned on S3 object versioning
* Started redundantly copying content onto a totally different hosting service
This was fine (hah!) until we had to start taking down the inevitable child porn that always shows up on services like this - I got stuck with writing the takedown code and it took me forever to track down all the various tendrils of stuff.
As you might expect, we lost a ton of users over mass content deletion and the service never really rebounded. The company held on for a couple more years, pivoting a couple of times, but eventually folded.
Own up to it when it happens. Can only get worse if you don't.
Hopefully you work with people that follow that as well.
I can still remember a more senior coworker copying a directory from a network share (Ctrl + C), deleting the directory, and then trying to paste it somewhere else.
I didn't speak up (he also rarely touched the mouse, so he flew through it), and we both got a chuckle when he realized his mistake and pinged the network folks for a restore from tape.
It happens. The real world is messy. Can always learn something new. Can always do something you know not to do.
Depending on the situation, it can be risky to start thinking like this: trying to fix stuff under lots of stress, in secret, maybe working around access restrictions; it's easy to make further mistakes.
I switched companies, and moving from the MySQL CLI to pgAdmin is a godsend. I'd still like a confirmation dialogue, but having to click a button seems less error-prone than pressing Enter too quickly.
These kinds of scenarios happen when money is "cheap", and highlight why the current recession, and coming 2nd great depression aren't really a bad thing.
This was 13 years ago in a small business with no significant investment. No “cheap money” was involved, just the realities of a small business with chronic NIH syndrome.
I had a thing recently where someone was updating some entries on a system I look after, decided the changes hadn't applied properly, and clicked "Roll back" to put it back to its original state.
Whatever had gotten into it, it rolled back to 2009. It rolled everything back, including user accounts.
No-one who worked there in 2009 still worked there, so no-one had a valid password any more.
Fortunately it was easy enough to copy the last-but-one backup over the top and lose the day before's config updates, and cure its Flowers for Algernon state, but it was a pretty hairy afternoon.
fukawi2|2 years ago
It had SQL like syntax for manipulating data, but it was interactive. So you would SELECT the rows from the table that you wanted, then those rows would be part of your state. You would then do UPDATE or DELETE without any kind of WHERE, because the state had your filter from the previous SELECT.
It has a fun quirk though - if your SELECT matched no rows, the state would be empty. So SELECT foo WHERE 1=2 would select nothing.
UPDATE and DELETE are perfectly valid actions even without a state...
Working late one night, I ran a SELECT STKM WHERE something that matched nothing, then before I realised I realised my state had no rows matched, I followed up with DELETE STKM.
Yep, the entire Stock Movements table four the last 20+ years of business were gone.
The nightly backup had not run, and I didn't want to lose an entire day of processing to roll back to the previous night.
I spent the entire night writing a program to recreate that data based on invoices, purchase orders, stocktake data, etc. I was able to recreate every record and got home about 9am. Lots of lessons learnt that night.
cvoss|2 years ago
> UPDATE and DELETE are perfectly valid actions even without a state
Some may call this a fun quirk :) but I'd call it a horrible mistake in the design of the system! It should have been conceptually obvious to the designer that an empty set of rows is a perfectly valid state and is fundamentally different from "no state".
dylan604|2 years ago
A friend of mine worked for a company repairing transmissions on drilling platforms. To say it was a specialty shop is an understatement. However, they had been in business for 40 years. While not a computer system from the 60s, theirs was a custom written solution from the 80s running on a DOS 286 PC. It was stuck on the top shelf of a closet with a UPS. The developer had long been "permanently" unavailable for at least a decade, and they were too scared to loose the data in having it rebuilt. I asked how scared, and the answer was "it's the only computer in the building with a UPS". I'm laughing at the insanity just retelling the story
albert_e|2 years ago
You deserve an award !!!
vivekv|2 years ago
EricBetts|2 years ago
nivethan|2 years ago
drsim|2 years ago
15 years and counting since wiping out a large production table and taking a day to restore from backup.
philderbeast|2 years ago
Never have truer words been spoken.
As I tell all the new juniors at work doing sysadmin type tasks, everyone has deleted the production database at least once. Mistakes will always happen, it's how you deal with them that defines how good you are at the end of the day.
russfink|2 years ago
Management reacted pretty well. They assured both of us that while we made the mistake, it was not our fault that data was lost: the problem was that backups were not being checked, which caused us to lose the resulting three days (120 developer days) of work. The manager in charge of the folks doing the backup got taken to task - but nobody else did.
KronisLV|2 years ago
This is very much dependent on the circumstances - sometimes people won't be supportive or encouraging, but cold at best and toxic (rude, making fun of mistakes, letting their egos run wild) at worst. This is more likely to happen in some work cultures/companies than others, and there can also be individuals that are allowed to persist with their problematic conduct in otherwise okay environments.
If you are ever in such environments, acknowledge the fact, possibly push back against stuff like that and definitely be on the lookout for alternatives, where you'd be able to prosper.
Thankfully, my personal experiences have mostly been okay, but I've definitely seen both attitudes and choices that can make everyone's lives worse, to the point where I wrote the satirical article "The Unethical Developer's Guide to Personal Success": https://blog.kronis.dev/articles/the-unethical-developers-gu...
freetanga|2 years ago
Don’t ask me how I learned this.
justinclift|2 years ago
1. Start a transaction before even thinking of writing the delete/update/etc (BEGIN; ...)
2. Always write the WHERE query out first, THEN go back to the start of the line and fill out the DELETE/UPDATE/etc.
It worked well, and it's a habit I've since tried to keep on doing myself as well.
_nalply|2 years ago
Now I have learnt to instinctively stop before doing anything destructive and double-check and double-check again! This also applies to SQL `DELETE` and `UPDATE`!
I know that `-r` was not neccessary but hey that was a biiiig mistake of mine!
jojobas|2 years ago
Joke's on you, we all learned it this exact way.
gumballindie|2 years ago
mjcohen|2 years ago
tgv|2 years ago
chrisandchris|2 years ago
iJesus|2 years ago
lemper|2 years ago
lormayna|2 years ago
Lesson learned: even if you are under big pressure take your time to plan and review the modification 15 minutes can save hours.
nonameiguess|2 years ago
It turned out, for some arcane reason I still don't understand, our production instance of Artifactory was running on top of Docker Compose with host path volume mounts, and somehow, symlinks that were not valid from the perspective of the host actually were valid from inside the container, and doing this on all of our servers broke Artifactory. For some even stupider reason, we weren't doing full filesystem-level snapshots at any regular interval (which we started doing after this), so instead I needed to enlist the help of the classic wizard ninja guy who had been acting as a mostly unsupervised one-man team for the past six years who had hacked all of this mess together, documented none of it, and was the only person on the planet who knew how to reconstruct everything.
This was probably still only the second-stupidest full on-prem lab outage I remember, behind the time the whole network stopped working and the only person who had been around long enough remembered they had trialed a temporary demo hardware firewall years earlier, management abandoned the evaluation effort, and it somehow remained there as the production firewall for years without ever being updated before finally breaking.
rrrrrrrrrrrryan|2 years ago
"You're not a real technology worker until you've brought the company down. Welcome."
Might not be the best words to live by, but it was exactly what I needed to hear at that time early in my career.
ahazred8ta|2 years ago
"One of us!! One of us!!" :-)
cesaref|2 years ago
jdsalaro|2 years ago
>_< , something similar happened to me when I did an rm -Rf * in the old pictures directory on my system ... Well ... It wasn't the old pictures directory, it was the backup directory with all pictures from my older phone !! To say that blood drained from my face would be an understatement, thankfully I was able to recover most of it.
The first thing I did after the recovery marathon was to alias rm so that it instead works by moving stuff to /tmp
msravi|2 years ago
croo|2 years ago
You can set colours for local/test/prod servers and a red colored tab will scream at you to be cautious. And with red color every edit will pop up an "are you sure?" question. And autocommit is off.
I sorta stopped making unrecoverable mistakes.
nucleardog|2 years ago
Hop in the test database, run some SQL, update a bazillion records, run some selects to verify, open the test app and verify there. All good! Someone else checks… LGTM.
So I do the same thing in prod. Make changes, validate in the DB, looks good. Open the prod app… down. Requests are hanging.
Our software is shit so I’m not surprised. I start frantically digging through trying to figure out where and why it’s hanging. Eventually chase it down to some lock wait timeouts in SQL. Start looking at the database… no queries seem to be running against the table in question. Eventually dig a little deeper and find… the call was coming from inside the house! The locks are held by my connection. Go hit “commit” in dbeaver and prod comes right back up.
And that’s the boring story of the day I learned that dbeaver used transactions in prod connections by default.
reaperducer|2 years ago
I just checked it out, and it looks promising.
Con: There are too many licensing options. I'm not even sure which one I would need.
Pro: They'll send out an invoice, like a real company. I work for a billion-dollar healthcare company, and if your company only takes PayPal, or Venmo, or something else that makes it look like two guys in a basement, it'll never get through our purchase approval process. Big companies do business with other big companies. Startups have to learn to interface with businesses on a business level if they ever want to grow out of being a startup.
fbdab103|2 years ago
All of the tutorials I seem to find online are basic level and/or mixed with introducing SQL.
jareklupinski|2 years ago
click ok on the update dialog, wait until the download finishes, shut down the app, drag the new app over to Applications, click Replace, wait for the actual moving dialog to pop up, wait for the moving dialog to finish, minimize all your windows so you can see the desktop, unmount the dbeaver image, click dbeaver again
and you can get back to work! now if only I could remember all the details that were in my head when i opened the app in the first place...
dbeaver team: can you do what paint.net did please? that is, only bother me to update as i'm turning the app off, not on?
acomjean|2 years ago
I run a local copy of the database I connect to remotely, and I have to be careful about which database/table I'm connected to.
psnehanshu|2 years ago
stuaxo|2 years ago
I decided on bacula - I had the clients installed on all the computers in the office, and it worked for some small tests.
My manager decided we would try this with a USB drive attached to one of the servers (somehow this didn't seem like a bad idea).
In the morning, very uncaffeinated, he sent me to the other site - an unmanned basement office with the servers.
Being uncaffeinated, I forgot the door password and set off the alarm.
I had to go into the office and phone him with the alarm going to get the code to turn the alarm off.
OK, that was stressful but sorted out at least.
I plugged in the hard drive to the selected server and headed back.
Once I got back, it turned out all the websites on that server had gone down - trying to send all the backups to this poor USB hard drive had overwhelmed the I/O on that-era Linux server, and the poor thing just froze.
Fairly soon after I was let go, and joined my friends at a much more fun company making mobile games.
sacnoradhq|2 years ago
I was once let go from a big name university for refusing to weaken and rush changes to a payment processing network (PCI-DSS) when there was "no time" to review them in detail. That's a future FBI press conference sort of thing when it all comes crashing down. Not long after, all SS#s, DOBs, and deets for every employee were stolen from a "rogue" laptop taken by a consultant, likely to be sold on carder and identity theft forums, because of an utter failure at data protection processes. That place was a shitshow because they didn't have the professional ethics or leadership backbone to do what was prudent and necessary.
kubanczyk|2 years ago
Not only could a USB (1.x?) drive cause it; a CIFS transfer over a 100 Mbps link could too.
IronWolve|2 years ago
The old grey haired sysadmin backs up the directory so he can instantly restore it. Seems this happens all the time.
Whew.
limaoscarjuliet|2 years ago
alexmolas|2 years ago
If the author had decided to say nothing, the problem would have been bigger - an unhappy boss and probably a firing.
thombat|2 years ago
asmor|2 years ago
We had a Redis with sessions. Early on, someone decided every write to Redis should also cause a write to S3 as a backup. My first task was to get rid of this four-digits-a-month extra cost in PUT requests. I decided to instead write all changed session objects into a set keyed by half-hour timestamps, and then write only those sessions every 30 minutes. Unfortunately, the initial version used KEYS to find the set corresponding to my half-hour stamp, without my having read up on exactly what it does. KEYS walks the entire keyspace and blocks the server while doing so - not exactly advisable on a Redis with a million or so objects. A later version of the archiver wrote the last emptied set to a stable key instead, and then checked the set keys between then and now.
mootzville|2 years ago
1. Write your WHERE clause first
2. Return to the beginning of the line to finish writing the statement
3. Check your statement
4. If it looks good, then -- and only then -- add your closing semicolon
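The same discipline can also be made mechanical: wrap the destructive statement in an explicit transaction, check the affected-row count, and only then commit. A sketch using sqlite3 (the table and values are made up for illustration):

```shell
# Delete inside a transaction and inspect the row count before committing.
# "stock" and "obsolete" are hypothetical names for this example.
sqlite3 demo.db <<'SQL'
CREATE TABLE stock(id INTEGER PRIMARY KEY, obsolete INTEGER);
INSERT INTO stock VALUES (1,0),(2,1),(3,1);

BEGIN;
DELETE FROM stock WHERE obsolete = 1;
SELECT 'rows deleted: ' || changes();   -- sanity-check the blast radius
ROLLBACK;  -- swap for COMMIT once the count matches expectations

SELECT 'rows left after rollback: ' || count(*) FROM stock;
SQL
```

If the count comes back as "every row in the table" instead of the handful you expected, the ROLLBACK costs you nothing.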
Having said that, once during my second week at a new company, I plugged an ethernet cable into an APC UPS so I could set up networking on it. It shut down production. Why? APC makes (for that model at least) proprietary ethernet cables for networking, and if you plug in a regular cable it does an auto-shutdown... an engineer's attempt at marketing, perhaps!? I did RTFM before, and again after out of confusion, and there was no mention of this.
sokoloff|2 years ago
(Most of the RDBMS tools that I've used would happily ignore the lack of a closing semi-colon and that will not save you for a single-statement case.)
nytesky|2 years ago
https://youtu.be/Vywf48Dhyns
matsemann|2 years ago
Aka DELETE ALL where x == y or DELETE MANY where x == y or perhaps you need an explicit limit for it to not be 1, so DELETE where x == y LIMIT ALL
somat|2 years ago
sgarland|2 years ago
sangriafria|2 years ago
makach|2 years ago
Nextgrid|2 years ago
> Sorry! Subscriptions were broken last week, but are now working. If you tried to subscribe and ran into issues, please try again!
I wonder if a similar incident involving a “UPDATE subscriptions” query happened recently.
dfcowell|2 years ago
It also apparently requires double email configuration, meaning it has two places where you can configure your mailer. I had only set it up in one place, meaning the confirmations never got sent.
Bit of a facepalm moment.
hahamaster|2 years ago
jorgesborges|2 years ago
alarge|2 years ago
When the service started, they made the decision to not actually delete any content (delete just set a flag which disabled the content but didn't actually remove it).
Fast forward a year or so, and it became clear that a real delete was needed. So they had a junior engineer write up a sort of delayed sweep - delete all the videos with the delete flag set. But then, for some reason, they decided to put the implementation behind a delay. Something like "actually delete all soft-deleted videos, but don't start doing it until 30 days from now". However, unbeknownst to the team, there was a bug in the implementation that deleted everything, regardless of whether the 'delete' flag was set.
So one night, roughly a month later, all the content started disappearing from the site. One guy heroically tried to stop the process, but I think he was too late. The engineering director happened to be on a vacation down in South America somewhere and I think the founder fired him in a fit of pique. I managed to reclaim a small bit of content (basically the videos that were cached on the actual recording servers before they were uploaded to S3).
You can imagine the technical over-reaction:
This was fine (hah!) until we had to start taking down the inevitable child porn that always shows up on services like this - I got stuck with writing the takedown code, and it took me forever to track down all the various tendrils of stuff. As you might expect, we lost a ton of users over the mass content deletion, and the service never really rebounded. The company held on for a couple more years, pivoting a couple of times, but eventually folded.
headline|2 years ago
james-skemp|2 years ago
Hopefully you work with people that follow that as well.
I can still remember a more senior coworker copying a directory from a network share (Ctrl + C), deleting the directory, and then trying to paste it somewhere else.
I didn't speak up (he also rarely touched the mouse so flew through it), so we both got a chuckle when he realized his mistake and pinged network folks for a backup from tape.
It happens. The real world is messy. Can always learn something new. Can always do something you know not to do.
jdsalaro|2 years ago
/s
Animats|2 years ago
FreshStart|2 years ago
m348e912|2 years ago
throwaway3618|2 years ago
justsomehnguy|2 years ago
And repeated the same error years later when the batch file failed to cd to the destination folder. Added
to avoid that problem.
theduder99|2 years ago
liampulles|2 years ago
user6723|2 years ago
dfcowell|2 years ago
CoastalCoder|2 years ago
rsynnott|2 years ago
TheGigaChad|2 years ago
[deleted]
simonblack|2 years ago
Did much the same, but only on my own system (thankfully!) and yes, I had an up-to-date backup MySQL dump on hand.
iJohnDoe|2 years ago
I think this goes against every principle most on HN have.
Gordonjcp|2 years ago
Whatever had gotten into it, it rolled back to 2009. It rolled everything back, including user accounts.
No-one who worked there in 2009 still worked there, so no-one had a valid password any more.
Fortunately it was easy enough to copy the last-but-one backup over the top and lose the day before's config updates, and cure its Flowers for Algernon state, but it was a pretty hairy afternoon.
l0ngyap|2 years ago
[deleted]
quijoteuniv|2 years ago
throwbadubadu|2 years ago
"But now all has been deleted??!?! WTF HEEELP"
"I apologize for my previous answer, you are correct, you now deleted everything!"