Abstract

Implementation of DHT on Load Rebalancing In Cloud Computing

G.Naveen, J.Praveen Chander

Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming model. In such file systems, nodes simultaneously serve computing and storage functions, and files can be dynamically created, deleted, and appended. This leads to load imbalance in the distributed file system: file chunks are not distributed as uniformly as possible among the nodes. Existing systems use a round-robin algorithm, which balances the server load only to some extent, because it assumes that all servers respond within the same time to complete a task. If any server is delayed in responding to its assigned task, the CPU computing resource is affected; that server can then become a bottleneck and a single point of failure. Our target is to optimize the computing resources (servers), maximize server throughput, avoid overloading or crashing the computing servers, and improve response time. Our system applies a DHT algorithm to achieve this: the total bytes to be stored are divided across the number of active servers and fed to them accordingly, which makes effective use of the servers. In addition, each file is divided into a number of chunks for easier processing, which improves response time and simplifies error re-transmission if any data is dropped during transmission. Additionally, we aim to reduce the network traffic (movement cost) caused by rebalancing the loads of nodes as much as possible, in order to maximize the network bandwidth available to normal applications.
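As a minimal sketch of the chunk-placement idea described above, the following Python snippet splits a file into fixed-size chunks and assigns each chunk to one of the currently active servers using consistent hashing, a common way to realize DHT-style placement. The class name ChunkPlacer, the 64 MB chunk size, and the node names are illustrative assumptions, not the paper's exact scheme.

```python
import hashlib
from bisect import bisect_right

CHUNK_SIZE = 64 * 1024 * 1024  # assumed chunk size (64 MB), for illustration only

def _hash(key: str) -> int:
    """Map a string key onto the DHT hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ChunkPlacer:
    """Places file chunks on the currently active servers via consistent hashing."""

    def __init__(self, servers):
        # Each active server gets a position on the hash ring.
        self.ring = sorted((_hash(s), s) for s in servers)

    def server_for(self, chunk_id: str) -> str:
        """Return the server responsible for a chunk (first node clockwise on the ring)."""
        keys = [h for h, _ in self.ring]
        idx = bisect_right(keys, _hash(chunk_id)) % len(self.ring)
        return self.ring[idx][1]

    def place_file(self, file_name: str, file_size: int):
        """Split the file into chunks and assign each chunk to an active server."""
        n_chunks = -(-file_size // CHUNK_SIZE)  # ceiling division
        return {f"{file_name}#{i}": self.server_for(f"{file_name}#{i}")
                for i in range(n_chunks)}

if __name__ == "__main__":
    placer = ChunkPlacer(["node-1", "node-2", "node-3"])
    # A 200 MB file is split into 4 chunks spread over the active servers.
    print(placer.place_file("dataset.bin", 200 * 1024 * 1024))
```

Because only the chunks whose ring positions fall between an added or removed server and its neighbor need to move, this kind of placement keeps the movement cost of rebalancing low, which is the bandwidth concern raised in the abstract.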


