Interference Nulling in Millimeter Wave MIMO Systems with RIS
SADRIDDINOV, FATTOKH BAKHTIYOR UGLI
2024/2025
Abstract
The rapid expansion of mobile communication users and smart devices has drawn significant attention from researchers and industry leaders to the underutilized millimeter-wave (mmWave) frequency bands for next-generation 5G and beyond networks. These bands promise up to a hundredfold increase in capacity compared to current 5G networks. Historically, the mmWave spectrum was overlooked because of its high susceptibility to signal blockages, which can cause service interruptions. With modern mobile users demanding reliable, high-speed connectivity, however, overcoming this vulnerability of mmWave signals has become essential. In some scenarios, exploiting these bands also requires deploying new technologies such as reconfigurable intelligent surfaces (RISs). RISs, however, must be reconfigured according to instantaneous channel conditions. Reinforcement learning (RL) has proven effective for optimal decision-making in small state-action spaces in modern networks; for larger and more complex networks, deep reinforcement learning (DRL) excels at deriving optimal policies. This work leverages these techniques to ensure continuous user service while minimizing computational cost. A combination of physics-based and deep learning-based digital twins (DTs) is employed, not only to reduce computational overhead but also to estimate channel properties that are otherwise unattainable through real-world interaction alone. The RIS configuration can thus be optimized to enhance signal strength in non-line-of-sight (NLOS) directions, improving the overall user experience.
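To illustrate the kind of workflow the abstract describes, the following is a minimal, hypothetical sketch of an RL agent that selects a RIS phase configuration from a small codebook to maximize received signal power, querying a toy "digital twin" channel model instead of the real channel. The element count, codebook, channel model, and all parameter values are illustrative assumptions and do not reproduce the thesis implementation.

```python
# Minimal sketch (assumptions throughout): tabular Q-learning over a small
# discrete RIS codebook, with a toy digital-twin channel model standing in for
# the physics-based / deep-learning DTs described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

N_ELEMENTS = 16          # RIS elements (assumed)
CODEBOOK_SIZE = 8        # discrete phase configurations (actions)
N_USER_POSITIONS = 4     # quantized user locations (states)

# Hypothetical DT: per-element cascaded channel phases for each user position.
dt_channel_phase = rng.uniform(0, 2 * np.pi, size=(N_USER_POSITIONS, N_ELEMENTS))
# Codebook of RIS phase profiles (e.g., beams steered toward different NLOS directions).
codebook = rng.uniform(0, 2 * np.pi, size=(CODEBOOK_SIZE, N_ELEMENTS))

def dt_reward(state: int, action: int) -> float:
    """Received power predicted by the toy digital twin (coherent combining gain)."""
    combined = np.exp(1j * (codebook[action] - dt_channel_phase[state])).sum()
    return float(np.abs(combined) ** 2 / N_ELEMENTS ** 2)   # normalized to [0, 1]

# Epsilon-greedy Q-learning over (user position, codebook index) pairs.
Q = np.zeros((N_USER_POSITIONS, CODEBOOK_SIZE))
alpha, eps = 0.1, 0.1    # one-shot reward per configuration step, so no discounting

for episode in range(5000):
    s = rng.integers(N_USER_POSITIONS)                 # user appears at a random position
    a = rng.integers(CODEBOOK_SIZE) if rng.random() < eps else int(Q[s].argmax())
    r = dt_reward(s, a)                                # query the DT instead of the live channel
    Q[s, a] += alpha * (r - Q[s, a])

print("Best codebook entry per user position:", Q.argmax(axis=1))
```

In a large network the table would be replaced by a DRL policy (e.g., a neural Q-function) and the toy reward by predictions from the calibrated digital twins, which is the scaling step the abstract attributes to DRL.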
File | Size | Format
---|---|---
Sadriddinov_Fattokh.pdf (restricted access) | 2.86 MB | Adobe PDF
https://hdl.handle.net/20.500.12608/82334