I generalize this to saying "have N+1 replicas, where N is the number of simultaneous failures you want to survive". I'm perfectly happy having my .emacs backed up in 1 place, GitHub. If all my computers and GitHub lose data at the exact same time, I will just make new key bindings. The same might not hold true for your family heirlooms or company's invoices or whatever. Just make the right decision for your application, and don't trust any one system too much.
I also feel like adding that replication is less likely to work correctly the more abstract it tries to be. Compare hardware RAID (pretends to the rest of the computer that there is one disk instead of two) to application-level replication (don't return success until the document has been written successfully to 3 nodes). The first one never works correctly, because there's so much that can go wrong and so little you can do to fix it. The on-disk format depends on some random firmware that is treated as disposable by the manufacturer of your motherboard, with no documentation or source code to be had. The second one pretty much always works because it's simple, and when it breaks, you can just examine the content of each node and figure out what went wrong, because you designed it and it's as simple as it can possibly be. (If you don't want to store 3 full copies of your data, you add your own Reed-Solomon coding to the data you write, and pick the ratio of data size to tolerated lost chunks to suit your needs. That's what RAID-5 is, just abstracted over an entire POSIX filesystem and hundreds of thousands of lines of code with no unit tests.)
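To make the parity idea concrete, here's a minimal sketch of single-parity coding at the application level, i.e. what RAID-5 does per stripe: split data into chunks, store one XOR parity chunk, and rebuild any single lost chunk. Real Reed-Solomon generalizes this to tolerate more than one loss; the chunk count and sample data below are illustrative.

```python
def make_chunks(data: bytes, k: int = 3) -> list[bytes]:
    """Split data into k equal-length chunks (zero-padded at the end)."""
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(size * k, b"\x00")
    return [padded[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(chunks: list[bytes]) -> bytes:
    """XOR all chunks together into one parity chunk."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing chunk from the survivors plus parity."""
    return xor_parity(surviving + [parity])

data = b"family photos, invoices, .emacs"
chunks = make_chunks(data, 3)
parity = xor_parity(chunks)

# Lose chunk 1; rebuild it from the other two chunks and the parity.
rebuilt = recover([chunks[0], chunks[2]], parity)
assert rebuilt == chunks[1]
```

The advantage over a black-box RAID controller is exactly the one argued above: every chunk is inspectable bytes on disk, so when recovery fails you can see why.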
I would never use RAID again, as I consider the complexity too high for the benefits. Treat filesystems and disks as disposable and transient; don't try to build a filesystem that's durable.
I treat my .emacs as being as valuable as family photos etc.
I've found ZFS software RAID to be very good (RAID isn't a backup, of course). I think data integrity is another important component: earlier I had multiple versioned backups in different places and still suffered data loss due to bitrot, which my backups faithfully propagated.
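The propagated-bitrot problem can be caught with application-level checksums, in the spirit of ZFS's per-block checksums (this is a sketch, not ZFS's actual mechanism): record a SHA-256 digest per file, and verify against the manifest before each backup run, so silently flipped bits are flagged instead of copied. The manifest filename is illustrative.

```python
import hashlib
import json
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Hash a file in 1 MiB blocks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def verify(root: pathlib.Path, manifest_file: str = "manifest.json") -> list[str]:
    """Return the files whose current hash differs from the recorded one."""
    manifest = json.loads((root / manifest_file).read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(root / name) != digest]
```

Run `verify()` before the backup job; a non-empty list means restore the damaged files from a good copy first, rather than overwriting your backups with rot.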
If one of the copies is local and another is through an online backup, it's going to be different in almost every way from the local copy, so you get the "2" by any definition.
If all your backups are to a set of tapes or, say, 3 hard drives, and you have a rotation to keep one tape offsite, every copy shares too many traits in common. A fair number of Mac people use Time Machine to do versioned backups to an external drive (or drive array) and use separate software to simply mirror their drive to external drives, periodically swapping the onsite and offsite mirrored drives. To me, physically moving drives around is hardcore; I don't trust anything that isn't automated, but it seems to be not that rare. I'm fairly lazy, but my machine is a work laptop, so I have a backup drive in the office, one at home, and a cloud backup service: 3-2-1 without much thought or effort.
If backups are solely on hard drives, it's probably best to not use the same make/model purchased at the same time, for fear that they'll all fail within the same time frame.
The idea behind '2' is to avoid having all your backups dependent on a single point of failure.
For example, you religiously keep three copies of your data, but they all came from the same tape drive that silently went bad months ago and has been writing garbage. Or you received a batch of bad tapes, etc.
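The silently-failing-drive scenario is exactly what read-back verification is for. A minimal sketch (function name and paths are illustrative): after writing each backup copy, re-read it from the destination and compare hashes, so a device that writes garbage is caught on day one rather than months later.

```python
import hashlib
import pathlib
import shutil

def backup_verified(src: pathlib.Path, dst: pathlib.Path) -> bool:
    """Copy src to dst, then re-read dst and confirm it matches the source."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    shutil.copyfile(src, dst)
    return hashlib.sha256(dst.read_bytes()).hexdigest() == digest
```

For tapes or drives with caches, the re-read ideally happens after the cache is flushed (or on a later pass), otherwise you may just be reading the cache back.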
3 copies, on 2 different media, and 1 offsite.