Undergraduate Thesis (Skripsi)
PERFORMANCE ANALYSIS OF INFRASTRUCTURE AS A SERVICE IN CLOUD COMPUTING USING A REINFORCEMENT LEARNING APPROACH
Cloud computing, particularly Infrastructure as a Service (IaaS), offers high flexibility, cost efficiency, and scalability in managing computing resources. However, fluctuating demand and dynamic workloads require adaptive management strategies to maintain service quality. This study explores the use of Reinforcement Learning (RL) algorithms, specifically Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), to optimize IaaS performance. RL agents were trained in simulations driven by datasets from IEEE DataPort and learned resource-usage patterns dynamically. The results show that DQN achieved 89% accuracy in predicting system status, while PPO achieved 82%, indicating that DQN performed slightly better; DQN was trained with a batch size of 32, compared with PPO's 64. Both models use the same network architecture ([64, 64]) and similar learning rates (DQN: 0.00025, PPO: 0.0003). In addition, both algorithms reduced resource wastage by improving the efficiency of CPU, memory, and bandwidth usage and by lowering response time. However, challenges remain in handling class imbalance, which degrades detection of the negative class. This research contributes to the optimization of IaaS management using RL, with potential for further development through the integration of other algorithms and application to more complex cloud computing scenarios.
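The thesis's simulation environment and training code are not part of this record. Below is a minimal sketch, assuming the Stable-Baselines3 library and a generic Gymnasium environment as a stand-in for the IaaS simulator, of how the hyperparameters reported in the abstract (network architecture [64, 64], learning rates 0.00025 and 0.0003, batch sizes 32 and 64) could be configured; the environment name and timestep budget are illustrative assumptions, not the study's actual setup.

```python
# Sketch only: placeholder environment in place of the thesis's IaaS simulator,
# which is built on IEEE DataPort workload datasets and is not reproduced here.
import gymnasium as gym
from stable_baselines3 import DQN, PPO

env = gym.make("CartPole-v1")  # assumed stand-in environment

# DQN with the hyperparameters reported in the abstract:
# two hidden layers of 64 units, learning rate 0.00025, batch size 32.
dqn_model = DQN(
    "MlpPolicy",
    env,
    learning_rate=0.00025,
    batch_size=32,
    policy_kwargs=dict(net_arch=[64, 64]),
    verbose=0,
)

# PPO configured analogously: same [64, 64] network,
# learning rate 0.0003, batch size 64.
ppo_model = PPO(
    "MlpPolicy",
    env,
    learning_rate=0.0003,
    batch_size=64,
    policy_kwargs=dict(net_arch=[64, 64]),
    verbose=0,
)

# Short training runs for illustration; the study would train far longer
# on the IaaS workload simulator.
dqn_model.learn(total_timesteps=10_000)
ppo_model.learn(total_timesteps=10_000)
```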
| Inventory Code | Barcode | Call Number | Location | Status |
|---|---|---|---|---|
| 2407007101 | T163153 | T1631532024 | Central Library (Reference) | Available but not for loan |