Discontinuous Hamiltonian Monte Carlo for discrete and continuous parameters: an efficient version
MANILDO, PAOLO
2023/2024
Abstract
MCMC methods are powerful but fragile instruments. Both their efficacy and their efficiency are problem-specific and depend on a number of degrees of freedom that varies with the type of algorithm employed. One of the most promising classes of MCMC algorithms is Hybrid Monte Carlo (HMC, often referred to as Hamiltonian Monte Carlo) (Duane et al. 1987), which borrows concepts from differential geometry, particle physics, and statistical mechanics to efficiently explore smooth probability densities. Its popularity has increased in recent years thanks to the development of STAN (Gelman, Lee, and Guo 2015), a probabilistic programming language for Bayesian inference that has overtaken competitors relying on Metropolis-within-Gibbs algorithms. The power of Hamiltonian-based algorithms resides in their geometric foundation. The drawback, however, is that they are not applicable to non-smooth surfaces, because of the lack of differentiability. Statistically speaking, this happens when some of the desired objects of inference are not continuous; in a Bayesian framework, this is the case when discrete parameters are used. To overcome this difficulty, Nishimura, Dunson, and Lu (2020) proposed a variant of HMC that handles discrete parameters by embedding them in a continuous space and assigning them a step-function-shaped prior distribution. Although the resulting density is still not differentiable, the properties of Hamiltonian dynamics can nonetheless be exploited to explore the posterior, borrowing from physics a concept related to light refraction. The goal of this thesis is to discuss this method in detail and to propose a computationally more efficient algorithmic variant, similar to the one used in STAN and made available in the form of an R library named XDNUTS.

The variant employs two termination criteria (Betancourt 2016) capable of identifying the optimal length of the approximate trajectories defined by Hamilton's equations; in this way, the geometry of the probability law itself determines how deeply the probability space is explored at each iteration of the algorithm. Furthermore, as in STAN, a stochastic optimization procedure is proposed to adaptively calibrate the method's parameters in an almost automatic manner.
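The embedding idea mentioned above can be illustrated with a minimal sketch: a discrete parameter is represented by a continuous latent variable whose (unnormalized) density is piecewise constant, so each discrete value owns an interval of the real line and the negative log density jumps at the interval boundaries. This is only an illustrative toy, not the actual construction of Nishimura, Dunson, and Lu (2020) nor code from XDNUTS; the unit-interval embedding and all names below are assumptions.

```python
import math

# Probability masses of a hypothetical discrete parameter d in {0, 1, 2}.
masses = [0.2, 0.5, 0.3]

def embed_density(x):
    """Step-function density on [0, 3): the interval [k, k+1)
    carries the probability mass of the discrete value k."""
    if not (0.0 <= x < len(masses)):
        return 0.0
    return masses[int(math.floor(x))]

def to_discrete(x):
    """Map a point of the continuous embedding back to its discrete value."""
    return int(math.floor(x))

def potential(x):
    """Negative log density: the potential energy used by Hamiltonian
    dynamics. It is flat inside each interval and jumps at the integers,
    where a gradient step is unavailable and a refraction/reflection
    rule must take over."""
    return -math.log(embed_density(x))
```

Inside each interval the potential is constant, so the dynamics move freely; all the information about the discrete distribution is concentrated in the jump sizes, which is exactly where the refraction analogy enters.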
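As an illustration of the kind of trajectory-termination criterion meant here, the classic No-U-Turn check stops extending a trajectory once the momentum at either endpoint starts pointing back toward the other endpoint. The sketch below is a generic version of that check, not the two criteria analyzed in the thesis, and its names are hypothetical.

```python
import numpy as np

def u_turn(theta_minus, theta_plus, r_minus, r_plus):
    """No-U-Turn-style termination check: stop extending the trajectory
    when the momentum at either endpoint has a negative projection on
    the displacement between the endpoints, i.e. the ends have begun
    moving toward each other."""
    dtheta = theta_plus - theta_minus
    return bool(dtheta @ r_minus < 0.0 or dtheta @ r_plus < 0.0)

# A segment whose endpoints still move apart does not trigger termination,
# while a momentum pointing back toward the start does:
print(u_turn(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
             np.array([1.0, 1.0]), np.array([1.0, 1.0])))   # False
print(u_turn(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
             np.array([1.0, 1.0]), np.array([-1.0, 0.0])))  # True
```

Letting a check of this kind decide the trajectory length at every iteration is what allows the geometry of the target law, rather than a fixed tuning constant, to govern how deeply the space is explored.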
https://hdl.handle.net/20.500.12608/71210