Abstract:
HDFS (Hadoop Distributed File System) is designed to store big data reliably and has become a widely used distributed file system. However, it does not provide replacement for failed nodes. Replication is a key technology in every computing enterprise, as it is essential to applications such as backup, file distribution, file synchronization, and collaborative file sharing. The replacement concept is simple, but its algorithmic optimization and system implementation are challenging. A Data Replacement Algorithm was implemented so that replicas of the data held by failed nodes are placed on replacement nodes. Because this system reduces data loss without affecting other DataNodes, it is more reliable and usable.
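
To illustrate the replacement idea, the following is a minimal sketch in Java, the language HDFS is written in. The DataNode record, the chooseReplacement method, and the most-free-space heuristic are hypothetical illustrations under assumed cluster metadata; they are not the paper's algorithm and not actual HDFS APIs.

import java.util.Comparator;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical sketch: when a DataNode fails, pick a healthy node to
 * host the re-created replica. All names here are illustrative
 * assumptions, not HDFS classes.
 */
public final class ReplacementPlacer {

    /** Minimal stand-in for a DataNode descriptor; not an HDFS class. */
    record DataNode(String id, long freeBytes, boolean alive) {}

    /**
     * Choose the live node with the most free space that does not already
     * hold a replica of the block. Returns null if no candidate exists.
     */
    static DataNode chooseReplacement(List<DataNode> cluster,
                                      Set<String> currentReplicaHolders) {
        return cluster.stream()
                .filter(DataNode::alive)                              // skip failed nodes
                .filter(n -> !currentReplicaHolders.contains(n.id())) // avoid duplicate replicas
                .max(Comparator.comparingLong(DataNode::freeBytes))   // balance by free space
                .orElse(null);
    }

    public static void main(String[] args) {
        List<DataNode> cluster = List.of(
                new DataNode("dn1", 100L, false), // failed node that held the replica
                new DataNode("dn2", 500L, true),
                new DataNode("dn3", 800L, true));
        // dn1 failed; its replica must be re-placed on a healthy node.
        DataNode target = chooseReplacement(cluster, Set.of("dn1"));
        System.out.println("Re-place replica on: "
                + (target == null ? "none" : target.id()));
    }
}

Because the selection only reads cluster metadata and writes to the chosen target, other DataNodes are untouched, which matches the reliability claim above.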