diff --git a/Website/dp2003-abstracts.htm b/Website/dp2003-abstracts.htm
index fba0760..7a0f722 100644
--- a/Website/dp2003-abstracts.htm
+++ b/Website/dp2003-abstracts.htm
@@ -35,7 +35,7 @@

2003/03: Time-line Hidden Markov Experts and its application in time series prediction

X. Wang, P. Whigham and D. Deng

-A modularised connectionist model, based on the Mixture of Experts (ME) algorithm for time series prediction, is introduced. A set of connectionist modules learn to be local experts over some commonly appearing states of a time series. The dynamics for mixing the experts is a Markov process, in which the states of a time series are regarded as states of a HMM. Hence, there is a Markov chain along the time series and each state associates to a local expert. The state transition on the Markov chain is the process of activating a different local expert or activating some of them simultaneously by different probabilities generated from the HMM. The state transition property in the HMM is designed to be time-variant and conditional on the first order dynamics of the time series. A modified Baum�Welch algorithm is introduced for the training of the time-variant HMM and it has been proved that by EM process the likelihood function will converge to a local minimum. Experiments, with two time series, show this approach achieves significant improvement in the generalisation performance over global models.
+A modularised connectionist model, based on the Mixture of Experts (ME) algorithm, is introduced for time series prediction. A set of connectionist modules learns to be local experts over commonly appearing states of a time series. The dynamics for mixing the experts follow a Markov process in which the states of the time series are regarded as states of an HMM. Hence, there is a Markov chain along the time series, and each state is associated with a local expert. A state transition on the Markov chain activates a different local expert, or activates several experts simultaneously with different probabilities generated from the HMM. The state transition behaviour of the HMM is designed to be time-variant and conditional on the first-order dynamics of the time series. A modified Baum–Welch algorithm is introduced for training the time-variant HMM, and it is proved that under the EM process the likelihood function converges to a local maximum. Experiments with two time series show that this approach achieves a significant improvement in generalisation performance over global models.

Keywords: series prediction, Mixture of Experts,
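
Below is a minimal sketch of the gating idea described in the abstract: an HMM whose states weight a set of local experts, with a transition matrix that varies over time as a function of the first-order dynamics (first difference) of the series. It is not the authors' implementation; the linear experts, Gaussian emissions and softmax modulation of the transition logits are illustrative assumptions, and the modified Baum–Welch (EM) training step is not shown.

# A minimal sketch, not the authors' implementation, of an HMM-gated
# mixture of experts whose transition matrix depends on the first
# difference of the series. All modelling choices here are assumptions
# made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

K = 3          # number of HMM states / local experts
T = 200        # length of a toy series
y = np.sin(np.linspace(0.0, 20.0, T)) + 0.1 * rng.standard_normal(T)

# Toy "local experts": each predicts y[t] from y[t-1] with its own slope/offset.
expert_w = rng.standard_normal((K, 2))

def expert_predictions(y_prev):
    """One-step-ahead prediction of every expert, shape (K,)."""
    return expert_w[:, 0] * y_prev + expert_w[:, 1]

# Base transition logits plus a per-entry sensitivity to the first difference
# of the series; rows are renormalised with a softmax at every time step.
base_logits = rng.standard_normal((K, K))
delta_sens = rng.standard_normal((K, K))

def transition_matrix(delta_y):
    logits = base_logits + delta_sens * delta_y
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)           # each row sums to 1

def emission_likelihood(y_t, y_prev, sigma=0.2):
    """Gaussian likelihood of y_t under each expert's prediction, shape (K,)."""
    mu = expert_predictions(y_prev)
    return np.exp(-0.5 * ((y_t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Forward (filtering) recursion: the normalised forward variables act as the
# mixing weights over the experts, so the model output is their weighted sum.
alpha = np.full(K, 1.0 / K)
preds = np.zeros(T)
for t in range(2, T):
    A_t = transition_matrix(y[t - 1] - y[t - 2])      # time-variant transitions
    prior = alpha @ A_t                               # predicted state weights
    preds[t] = prior @ expert_predictions(y[t - 1])   # gated mixture prediction
    alpha = prior * emission_likelihood(y[t], y[t - 1])
    alpha /= alpha.sum()                              # keep it a distribution

print("RMSE of the untrained sketch:", np.sqrt(np.mean((preds[2:] - y[2:]) ** 2)))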