Scaling Stream Processing in Cloud-Native Environments

MORLIN, GIOVANNI
2023/2024

Abstract

This thesis examines the use of cloud-native autoscalers, including the Horizontal Pod Autoscaler (HPA), KEDA, and the Cluster Autoscaler, in conjunction with stream processing platforms such as KStream and Flink in Kubernetes environments. As Kubernetes has emerged as the de facto standard for container orchestration, the demand for effective autoscaling solutions to manage dynamic workloads has grown significantly. This study evaluates the performance and scalability of selected autoscalers in realistic scenarios generated with a benchmarking framework. Particular attention is paid to complex streaming use cases, such as hierarchical aggregation, which expose the inherent limitations of existing autoscaling strategies. By analyzing different deployment configurations, scaling techniques, and performance metrics, this work seeks to replicate results and claims from the published literature and to consolidate the corresponding insights. The findings aim to provide practical guidance for optimizing the use of autoscalers in cloud-native architectures, enhancing both scalability and operational efficiency.
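To illustrate the kind of autoscaling setup the thesis evaluates, the following is a minimal sketch of a KEDA ScaledObject that scales a stream processing deployment based on Kafka consumer lag. The deployment name, consumer group, topic, and threshold are hypothetical placeholders, not taken from the thesis itself.

```yaml
# Hypothetical example: scale a Kafka Streams deployment with KEDA
# based on consumer-group lag on the input topic.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kstream-app-scaler        # placeholder name
spec:
  scaleTargetRef:
    name: kstream-app             # placeholder Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092      # placeholder broker address
        consumerGroup: kstream-app-group  # placeholder consumer group
        topic: input-topic                # placeholder topic
        lagThreshold: "50"                # add a replica per ~50 messages of lag
```

With a configuration like this, KEDA drives the HPA from queue lag rather than CPU utilization, which is the style of workload-aware scaling the study compares against the built-in autoscalers.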
Keywords: distributed system, kafka, fault-tolerance, kubernetes, kubernetes operator
Files in this item:

Giovanni_Morlin_1104091.pdf (open access)
Description: Scaling Stream Processing in Cloud-Native Environments
Size: 2.82 MB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/103129