I’m writing a program that wraps around dd to try to warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports sizes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.
This has caused me to wonder: what is the largest storage operation you guys have done? I’ve taken a couple of images of hard drives that were a single terabyte each, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
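For context, the kind of imaging I mean looks roughly like this (just a sketch; the device path, output file, and block size are example values, not anything specific):
# image a whole disk to a file, showing progress and flushing at the end
sudo dd if=/dev/sda of=disk.img bs=4M status=progress conv=fsync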
I’m currently backing up my /dev folder to my unlimited cloud storage. The backup of the file
/dev/random
has been running for two weeks now.

No wonder. That file is super slow to transfer for some reason. But wait till you get to /dev/urandom. That one has TBs to transfer at whatever pipe you can throw at it…
Cool, so I learned something new today. Don’t run
cat /dev/random
Why not try /dev/urandom?
😹
Ya know, if not for the other person’s comment, I might have been gullible enough to try this…
That’s silly. You should compress it before uploading.
I’m guessing this is a joke, right?
/dev/random and other “files” in /dev are not really files; they are interfaces which can be used to interact with virtual or hardware devices. /dev/random spits out cryptographically secure random data. Another example is /dev/zero, which spits out only zero bytes.
Both are infinite.
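Easy to see for yourself with standard tools (a quick sketch, nothing special about the byte counts):
# grab 16 random bytes and show them as hex
head -c 16 /dev/random | xxd
# read 1 MiB of zeros and throw it away; you could keep reading forever
dd if=/dev/zero of=/dev/null bs=1M count=1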
Not all “files” in /dev are infinite; for example, hard drives can (depending on which technology they use) be accessed under /dev/sda, /dev/sdb, and so on.
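Those ones have a fixed size you can check, e.g. (sketch; assumes a disk actually exists at /dev/sda):
# list block devices and their sizes
lsblk
# print the size of one disk in bytes
sudo blockdev --getsize64 /dev/sda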
I’m aware of that. I was quite sure the author was joking, with just the slightest bit of concern that they might actually make the mistake.