
Markov chains and Schrödinger bridges

De Battisti, Riccardo
2010/2011

Abstract

The study of sequences of dependent random variables arose at the beginning of the twentieth century. In 1906 the Russian mathematician Andrei Andreyevich Markov (1856-1922), a pupil of Chebyshev, introduced mathematical models for this purpose. His focus was on processes in which the present is a sufficient statistic of the past for predicting the future. Such sequences have since been named Markov chains. Although this was a significant step in the history of probability theory, these models were not immediately taken up by the scientific community; they came to be appreciated only some years later. Indeed, the study of Markov chains spread widely only from the 1950s onward. Nowadays, by contrast, they are used in a variety of applications ranging from biology to psychology and from genetics to electrical engineering. It is interesting to study how such models evolve over time and whether they converge to a stationary situation, that is, whether there is a limiting probability distribution. The convergence of a Markov chain, however, is not always guaranteed, and it is not known a priori how long it takes to converge. These facts make the use of a Markov chain model more complex than its relatively simple theory would suggest. Suppose we only deal with Markov chains whose convergence is guaranteed. In many applications it is desirable to control the chain by changing its transition mechanism so as to achieve minimum cost, minimum queue length, and so on. Another goal may be to drive the chain to a desired distribution at a given final time. This is achieved by the theory of Schrödinger bridges. The purpose of this thesis is to describe the recently developed theory of Schrödinger bridges for Markov chains and to investigate its effectiveness by simulation on various examples. Of particular interest to us are chains that converge slowly to the equilibrium distribution, such as those that arise from random geometric graphs.
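The convergence mentioned above can be illustrated with a minimal sketch (not taken from the thesis; the chain and its transition matrix are our own toy example): repeatedly applying the transition matrix to an initial distribution drives it toward the stationary distribution.

```python
# Illustrative sketch: a two-state Markov chain whose distribution
# mu_t converges to the stationary distribution pi solving pi = pi P.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(mu, P):
    """One time step of the chain: mu_{t+1}[j] = sum_i mu_t[i] * P[i][j]."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

mu = [1.0, 0.0]          # start deterministically in state 0
for _ in range(100):
    mu = step(mu, P)

# For this P the stationary distribution is pi = (5/6, 1/6),
# so mu is now very close to [0.8333..., 0.1666...].
print(mu)
```

The speed of this convergence is governed by the second-largest eigenvalue of P, which is exactly the quantity that is small (slow convergence) for the chains on random geometric graphs studied later.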
Schrödinger bridges allow, in principle, the possibility of steering a chain to its invariant distribution in finite time. The outline of this thesis is as follows. In Chapters 1-3 we collect some basic material on probability, combinatorics and random variables. In Chapters 4-7 we introduce Markov chains, discuss their properties and classify them; we then give some examples, distinguishing Markov chains according to their state space, namely finite or countable, and finally analyze the most important of these examples. In Chapters 8-9 we first deal with general maximum entropy problems and then focus on the theory of Schrödinger bridges. In Chapters 10-11, after introducing the average consensus problem, we discuss the importance of random geometric graphs. Finally, we explain the simulation algorithm for Schrödinger bridges and give a timing analysis of its execution.
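In the finite-state case, the endpoint problem behind a Schrödinger bridge can be sketched via iterative proportional fitting (Sinkhorn scaling) of the prior joint law of the chain's start and end states: one looks for diagonal scalings that impose the prescribed initial and final marginals while staying as close as possible, in relative entropy, to the prior. This is a standard route to the discrete Schrödinger system; the thesis's own algorithm may differ in detail, and all names and numbers below are illustrative.

```python
# Sketch: force a two-state chain to hit a target distribution at T = 2
# by Sinkhorn scaling of the prior joint law Q of (X_0, X_T).

def sinkhorn_bridge(Q, a, b, iters=200):
    """Find scalings u, v so that R[i][j] = u[i] * Q[i][j] * v[j]
    has row marginals a (initial law) and column marginals b (target)."""
    n, m = len(Q), len(Q[0])
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(Q[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(u[i] * Q[i][j] for i in range(n)) for j in range(m)]
    return [[u[i] * Q[i][j] * v[j] for j in range(m)] for i in range(n)]

def matmul(A, B):
    """Plain matrix product, used to build the two-step kernel P^2."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Prior: start uniform and evolve two steps under a slowly mixing chain.
P = [[0.9, 0.1], [0.5, 0.5]]
P2 = matmul(P, P)
mu0 = [0.5, 0.5]
Q = [[mu0[i] * P2[i][j] for j in range(2)] for i in range(2)]

# Demand the invariant distribution (5/6, 1/6) exactly at the final time,
# rather than waiting for the chain to mix on its own.
R = sinkhorn_bridge(Q, mu0, [5/6, 1/6])
```

The scaled joint law R then determines the (time-inhomogeneous) bridged transition probabilities; the point of the construction is that the target is reached at a fixed finite horizon, regardless of how slowly the uncontrolled chain mixes.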
2010-02-18
146
Markov, Schrödinger
Files in this item:
File: Markov_chains_and_Schrodinger_bridges.pdf
Access: open access
Size: 1.76 MB
Format: Adobe PDF

The text of this website is © Università degli Studi di Padova. Full texts are published under a non-exclusive license; metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/12716