Over the past five years, I have bricked many Linux installs, but my most recent one was the worst by far.

I recently moved Scrumpy, my webserver, to bare metal. I did this for a few reasons. One, it was super slow in VM form: I/O wait times were sometimes as high as 15 seconds, which made interacting with my websites irritating because pages were unresponsive or requests just timed out altogether. Two, when nCrater went down, everything went down.

Now, back to the topic at hand: bricking systems. Two days ago I wanted to switch Scrumpy to NVMe. I had a 256GB Teamgroup drive on hand that I was going to use, so I installed it on a daughter card because the system is still on 4th-gen Haswell. Once the system was up and running, I installed ZFS for ease of use. I made a single-drive vdev, since this system backs up every 15 minutes. I should also mention here that, unbeknownst to me, backups had not been working for days.

I started copying all the data to the new drive. It went well. Then it turned out the NVMe drive was slower than my SATA drive... So I swapped back to the SATA drive and ran apt remove zfsutils-linux. Doing this caused the system to kernel panic. The system refused to boot into Ubuntu, and each time I booted into recovery it would kernel panic all over again. So I just reinstalled Ubuntu.

This was made worse by the fact that I had been up for almost an entire day, more than 20 hours, and had dealt with a bad DDoS attack that morning. Looking back, it probably could have been fixed, but my brain was so foggy I just started from scratch.

Always make sure backups are working before doing serious filesystem work. The moral of the story is: do as I say, not as I do.
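If I had checked that the backups were actually fresh before touching the filesystem, none of this would have mattered. A minimal sketch of such a check, assuming backups land as files in a directory (the directory path, function name, and 30-minute threshold here are made up for illustration, not my actual setup):

```shell
# Hypothetical helper: succeed only if the newest file under DIR is
# no older than MAX_AGE_MINUTES. Usage: check_backup_fresh DIR MAX_AGE_MINUTES
check_backup_fresh() {
    dir=$1; max_min=$2
    # Newest file's mtime as a Unix timestamp (GNU find, as on Ubuntu)
    latest=$(find "$dir" -type f -printf '%T@\n' 2>/dev/null | sort -n | tail -1)
    [ -n "$latest" ] || { echo "no backups found in $dir"; return 1; }
    age_min=$(( ( $(date +%s) - ${latest%.*} ) / 60 ))
    [ "$age_min" -le "$max_min" ] || { echo "stale: ${age_min}m old"; return 1; }
    echo "fresh: ${age_min}m old"
}

# Demo against a throwaway directory (stand-in for a real backup path)
demo=$(mktemp -d)
touch "$demo/backup.tar"
check_backup_fresh "$demo" 30
```

Wiring something like this into the start of any destructive maintenance script, and refusing to continue when it fails, would have caught my dead backups days earlier.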