DC Field | Value | Language
dc.contributor.author | Zaw, Myint |
dc.contributor.author | Oo, Khine Khine |
dc.date.accessioned | 2019-07-03T04:21:09Z |
dc.date.available | 2019-07-03T04:21:09Z |
dc.date.issued | 2016-02-25 |
dc.identifier.uri | http://onlineresource.ucsy.edu.mm/handle/123456789/193 |
dc.description.abstract | HDFS (Hadoop Distributed File System) is a distributed file system designed to store big data reliably. However, it does not by itself re-place the replicas that are lost when nodes fail. Replication technology is a key component in every computing enterprise, as it is essential to applications such as backup, file distribution, file synchronization, and file-sharing collaboration. The replacement concept is simple, but its algorithmic optimization and system implementation are challenging. A Data Replacement Algorithm was implemented to re-place the replicas held by failed nodes onto other nodes. Because this system reduces data loss and does not affect the other DataNodes, it is more reliable and usable. | en_US
dc.language.iso | en | en_US
dc.publisher | Fourteenth International Conference On Computer Applications (ICCA 2016) | en_US
dc.subject | Data Replacement Algorithm | en_US
dc.subject | Big Data | en_US
dc.subject | Hadoop | en_US
dc.subject | Map Reduce | en_US
dc.subject | HDFS (Hadoop Distributed File System) | en_US
dc.subject | Replica System | en_US
dc.title | Recovering Nodes Failure in Replica System by Using Data Replacement Algorithm in a HDFS Cluster | en_US
dc.type | Article | en_US
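
The abstract describes re-placing the replicas lost when a DataNode fails, which is essentially HDFS-style re-replication of under-replicated blocks. As an illustrative, non-authoritative sketch only (this is not the paper's actual algorithm, and all names such as MiniCluster, put_block, and fail_node are hypothetical), the following Python simulation shows the core idea: when a node fails, each block it held is copied to a surviving node that does not already hold a replica.

from collections import defaultdict
import random

class MiniCluster:
    """Toy model of HDFS block placement; hypothetical, for illustration only."""

    def __init__(self, nodes, replication=3):
        self.replication = replication
        self.live_nodes = set(nodes)
        # block id -> set of node names currently holding a replica
        self.block_locations = defaultdict(set)

    def put_block(self, block_id):
        # Place `replication` replicas on distinct live nodes.
        targets = random.sample(sorted(self.live_nodes), self.replication)
        self.block_locations[block_id] = set(targets)

    def fail_node(self, node):
        # Mark the node dead, then re-replicate every block it held onto
        # surviving nodes that do not already hold a replica.
        self.live_nodes.discard(node)
        for block_id, holders in self.block_locations.items():
            if node not in holders:
                continue
            holders.discard(node)
            missing = self.replication - len(holders)
            candidates = sorted(self.live_nodes - holders)
            for target in candidates[:missing]:
                holders.add(target)  # replacement replica

if __name__ == "__main__":
    cluster = MiniCluster(["dn1", "dn2", "dn3", "dn4", "dn5"])
    for i in range(3):
        cluster.put_block(f"blk_{i}")
    cluster.fail_node("dn2")
    for block_id, holders in sorted(cluster.block_locations.items()):
        # Every block is back to 3 replicas, none on the failed node.
        assert len(holders) == 3 and "dn2" not in holders
        print(block_id, sorted(holders))

In a real cluster the NameNode performs this loop: it detects a dead DataNode through missed heartbeats and schedules replacement copies of that node's blocks on other DataNodes, which matches the abstract's claim that recovery proceeds without disturbing the remaining DataNodes.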