There are also distributed file systems (MooseFS, Gluster, Ceph) that can replicate data across physical locations, protect it with checksums, scrub in a distributed manner, and auto-heal, all of it transparently.
About as hard as setting up any UNIX daemon on a few computers. Reliability is a complex topic and depends on what you need. As anecdotal evidence: I have maintained a microscopic MooseFS deployment (10-20 TB across 4-6 desktop hosts) for a distributed home-directory use case for over 5 years without any data loss, despite two full computer losses, one disk malfunction, and numerous network and power outages (it has never detected a bit-rot event, though). There are many serious success and horror stories one Google query away.
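For a rough sense of what "setting up any UNIX daemon on a few computers" means here, a minimal MooseFS deployment might look like the sketch below. This is a hedged outline, not a definitive recipe: package names, config paths, and the master hostname `mfsmaster` are assumptions based on a typical install and will vary by distribution — check the MooseFS documentation before copying any of it.

```shell
# On the master host: install and start the metadata server
# ("moosefs-master" package name is an assumption; varies by distro)
apt-get install moosefs-master
mfsmaster start

# On each storage host: tell the chunkserver which local directory
# to use for chunk storage, then start it (it looks up the master
# host, by default named "mfsmaster", via its config/DNS)
apt-get install moosefs-chunkserver
echo "/srv/mfs-chunks" >> /etc/mfs/mfshdd.cfg   # chunk storage path (assumed)
mfschunkserver start

# On clients: mount the filesystem and request 2 copies of every file
apt-get install moosefs-client
mfsmount /mnt/mfs -H mfsmaster        # -H: master host to connect to
mfssetgoal -r 2 /mnt/mfs              # replication goal = 2 copies, recursive
```

With a goal of 2, losing an entire chunkserver still leaves one copy of each chunk, and the cluster re-replicates ("auto-heals") onto the remaining hosts — which is roughly how a deployment like the one above survives full host losses.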
scrupulusalbion|10 years ago
mbq|10 years ago