So I’m working on a server from home.
I do a cat /sys/class/net/eth0/operstate
and it says unknown
despite the interface being obviously up, since I’m SSH’ing into the box.
I try to explicitly set the interface up to force the status to say up
with ip link set eth0 up
. No joy, still unknown.
Hmm… maybe I should bring it down and back up.
So I do ip link set eth0 down
and… I drive 15 miles to work to do the corresponding ip link set eth0 up
50 years using Unix and I’m still doing this… 😥
Lol; I’ve done this too. Thankfully not to anything important.
@[email protected] You’re doing it wrong. Just set up a KVM behind your server. Then you never need to leave home again.
There, but for the grace of god…
Did this once on a router in a datacenter that was a flight away. Have remembered to set the reboot-in-the-future command ever since. As I typed the fatal command, I remember part of my brain screaming not to hit enter as my finger approached the keyboard. 🤦‍♂️
Have remembered to set the reboot-in-the-future command ever since
That’s not a bad idea actually. I’ll have to reuse that one. Thanks!
This.
Do it. This saved my life on more than one occasion.
You’ll think “nah, it’ll be fine” and then at 11pm when your brain’s fried on vending machine coffee you’ll be glad that you did it… 3 times over…
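For a Linux box, the reboot-in-the-future trick looks roughly like this (a sketch; the 30-minute window and wall message are made up):
# schedule a fallback reboot before touching the network config
sudo shutdown -r +30 "failsafe reboot in case I lock myself out"
# ...make the risky change; if you can still get in afterwards, cancel it:
sudo shutdown -c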
I’ve done this kind of thing remotely in screen with
ifdown eth0 ; sleep 10 ; ifup eth0 ;
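If the box only has iproute2, the same trick would look something like this (interface name assumed; run it inside screen/tmux so it survives the SSH session dropping):
sudo sh -c 'ip link set eth0 down; sleep 10; ip link set eth0 up'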
Remember what Bruce Lee said:
I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
A few months ago I accidentally dd’d ~3GiB to the beginning of one of the drives in a 4 drive array… That was fun to rebuild.
Your 4 drive raid5 array, right?
Right?!
I wish.
It was a bcachefs array with data replicas being a mix of 1, 2 & 4 depending on what was most important, but thankfully I had the foresight to set metadata to be mirrored for all 4 drives.
I didn’t get the good fortune of only having to do a resilver, but all I really had to do was fsck to remove references to non-existent nodes until the system would mount read-only, then back it up and rebuild it.
NixOS did save my bacon re: being able to get back to work on the same system by morning.
not RAID10 I hope…
Like 3 weeks ago on my (testing) server I accidentally DD’d a Linux ISO to the first drive in my storage array (I had some kind of jank manual “LVM” bullshit I set up with odd mountpoints to act as a NAS, do not recommend), no Timeshift, no Btrfs snapshot. It gave me the kick in the pants I needed to stop trying to use a macbook air with 6 external hard drives as a server though. Also gave me the kick in the pants I needed to stop using volatile naming conventions in my fstab.
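The boring fix for the naming part is just referring to filesystems by UUID in fstab instead of by device node; a sketch (UUID and mountpoint made up, the real UUID comes from blkid):
# /etc/fstab -- /dev/sdX ordering can change between boots, UUIDs don't
UUID=1b2c3d4e-5f60-4a7b-8c9d-0e1f2a3b4c5d  /srv/nas  ext4  defaults,nofail  0  2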
I have a failsafe service on one of my servers: it pings the router, and if it hasn’t reached it even once for an entire hour, it will reboot the server.
This won’t save me from every mistake, but it will prevent firewall, link-state, routing and a few other issues when I’m not present.
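A minimal sketch of that kind of watchdog, assuming a systemd box and that it's kicked off hourly from cron or a timer (router address and path are placeholders):
#!/bin/sh
# /usr/local/sbin/net-watchdog.sh: reboot if the router hasn't answered a single ping in an hour
ROUTER=192.0.2.1
for _ in $(seq 60); do
    ping -c1 -W5 "$ROUTER" >/dev/null 2>&1 && exit 0   # reachable, nothing to do
    sleep 60
done
systemctl reboot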
Until you block ICMP one day and then wonder why the server keeps rebooting…
(Been there. Done it)
Every network engineer must lock themselves out of a node at some point, it is a rite of passage.
I formatted an OS drive by mistake last night, thought it was my flash drive…
Almost did the same last night on a device that has its internal drive (flash) mounted as mmc while the USB drive was sda.
That entire scenario scares me lol
I started to DBAN (wipe) my internal drive once instead of an attached drive. That was the last time I ran DBAN on a machine with any drives of value plugged in.
Lol I’ve locked myself out of so many random cloud and remote instances like this that now I always make a sleep chain or a kill timer with tmux/screen.
Usually like:
./risky_dumb_script.sh ; sleep 30 ; ./undo.sh
Or
./risky_dumb_script.sh
Which starts with a 30 second sleep, and:
(tmux) sleep 300 ; kill PID
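coreutils timeout does roughly the same kill-timer in one go (script name carried over from above):
timeout 300 ./risky_dumb_script.sh   # hard-kill the risky job after 5 minutes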
At $DAYJOB, we’re currently setting up basically a way to bridge an interface over the internet, so it transports everything that enters on an interface across the aether. Well, and you already guessed it, I accidentally configured it for
eth0
and couldn’t SSH in anymore.
Where it becomes fun is that I actually was at work. I was setting it up on two raspis, which were connected to a router, everything placed right next to me. So, I figured I’d just hook up another Ethernet cable, pick out the IP from the router’s management interface and SSH in that way.
Except I couldn’t reach the management interface anymore. Nothing in that network would respond.
Eventually, I saw that the router’s activity lights were blinking like a Christmas decoration. I’m guessing I had built a loop, and something akin to a broadcast storm was overloading the router. Thankfully, the solution was then relatively straightforward: I had to unplug one of the raspis, SSH in via the second port, nuke our configuration and then repeat for the other raspi.
I knew a guy who did this and had to fly to Germany to fix it because he didn’t want to admit what he’d done.
This hits…
Why don’t you use chained commands, or better yet simply create an alias that chains down/up, then use the alias instead?
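Something like this rough sketch in ~/.bashrc would do it (interface name assumed, helper name made up):
# bounce the link in one shot so a bare "down" can't strand you
bounce() {
    local dev="${1:-eth0}"
    sudo sh -c "ip link set $dev down; sleep 5; ip link set $dev up"
}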
Because I plain forgot I was remote. It’s as simple and as stupid as that.
time to set up a console server so that you don’t do that again.
Until they have to troubleshoot the console server …
then set up a super console server. lol
It’s console servers all the way down (up?)
and you make each one geographically closer than the previous one until there’s one right next to you. lol
So that’s why we have mobile phones
Nah, you only need two, each connected to the other. Use one to work on the other.
I have once actually used a console server console server to troubleshoot a misbehaving console server.
i once worked at a place that had something like this and, silly as it sounds, i got a live demonstration that it was the smartest thing ever.
That is why you have KVMs…
Fair enough. I’ve done worse in my time as a keyboard jockey.
We’ve all been there. If you do this stuff for a living, you’ve done that way more than once.
That is a totally fair explanation. End of story. No blame. Honest mistake.
Or use some kind of molly guard. Or have an OOB management channel.
You’d think you’d learn from your mistakes after one or two of them, not fifty years’ worth…
In my defense, I just installed the machine. I was configuring it from home after hours.
after hours
I’ve configured PAM to not let me log in remotely after hours, because I just know that someday I’ll want to fix “just this tiny thing” and I’ll break production because I’m too tired. I clearly need protection from myself, and this is one slice in Dr. Reason’s Swiss cheese model.
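A sketch of that, assuming pam_time is how it’s done (username and hours are placeholders; test carefully so you don’t lock yourself out of everything):
# /etc/pam.d/sshd
account  required  pam_time.so
# /etc/security/time.conf: no SSH for me between 22:00 and 06:00 on any day
sshd ; * ; myuser ; !Al2200-0600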
Don’t let the people drag you down, this happens to all of us.
Harsh (to yourself), but fair
You’d think you’d learn from your mistakes
Yes, that what you’d think. And then you’ll sit with a blank terminal once again when you did some trivial mistake yet again.
A friend of mine developed a habit (working at a decent-sized ISP 20+ years ago) of setting up a scheduled reboot for everything in 30 minutes, no matter what you’re going to do. The hardware back then (I think it was mostly Cisco) had a ‘running config’ and ‘stored config’, which were two separate instances. Log in, set up the scheduled reboot, do whatever you’re planning to do, and if you mess up and lock yourself out, the system will restore the previous config in a while and then you can avoid the previous mistake. Rinse and repeat. (Rough sketch below.)
And, personally, I think that’s one of the best ways to differentiate actual professionals from the ‘move fast and break things’ group. Once you’ve locked yourself out of a system literally halfway across the globe too many times, you’ll eventually learn to think about the next step and failovers. I’m not that much of a network guy, but I have shot myself in the foot enough that whenever there’s a dd, mkfs or something similar on a root shell, I automatically pause for a second to confirm the command before hitting enter.
And while you gain experience and learn how to avoid the pitfalls, the more important part (at least for me) is to think ahead. The constant mindset of thinking about processes, connectivity, what you can actually do if you fuck up, and so on becomes a part of your workflow. Accidents will happen, no matter how much experience you have. The really good admins just know that something will go wrong at some point in the process and build things so that when you fuck up you still have a way in to fix it, instead of calling someone 6 timezones away in the middle of the night to clean up your mess.
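On Cisco-style kit, that habit looks roughly like this (from memory; exact syntax varies by IOS version):
! schedule a fallback reload before touching anything; the unsaved running config is thrown away and the box comes back on the stored config
reload in 30
! make the risky change; if you can still reach the box, keep it and cancel the reload
reload cancel
copy running-config startup-config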
Without repeating my other comment: this approach has saved my life many times.
Don’t be shitty.
I was scared to move to the cloud for this reason. I was used to running to the server room and the KVM if things went south. If that was frozen, usually unplugging the server physically from the switch would get it to calm down.
Now Amazon supports a direct console interface like a KVM, and you can virtually unplug virtual servers from their virtual switches too.
It’s VMs within VMs within VMs.