
Mathematical Programming Models for Estimating the Initial States of Markov Chains by Maximum Likelihood

Journal of Nanjing Normal University (Natural Science Edition) [ISSN:1001-4616/CN:32-1239/N]

Issue:
2019, No. 02
Page:
50-56
Research Field:
Mathematics and Computer Science
Publishing date:

Info

Title:
Mathematical Programming Models for Estimating the Initial States of Markov Chains by Maximum Likelihood
Author(s):
Lou Zhenkai1, Hou Fujun1, Lou Xuming2
(1. School of Management and Economics, Beijing Institute of Technology, Beijing 100081, China) (2. School of Economics and Management, Xi'an University of Posts and Telecommunications, Xi'an 710121, China)
Keywords:
estimation of the initial state; maximum likelihood; mathematical programming models; K-T condition
PACS:
O221
DOI:
10.3969/j.issn.1001-4616.2019.02.008
Abstract:
Parameter estimation is a common issue in Markov models. Owing to the importance of the initial state, in this paper we estimate the initial state for Markov chain models in which the initial state is unknown. According to whether the states are directly observable, we divide the models into general Markov models and hidden Markov models. Taking the number of states or observation symbols into account, we build linear programming models and non-linear programming models based on maximum likelihood, and prove that the state probabilities at each stage satisfy the normalization condition. For the linear programming models, we point out that they can be solved by the simplex method and give an explicit expression for the solution. For the non-linear programming models, we establish the existence of an optimal solution and transform the models into systems of equations via the Kuhn-Tucker (K-T) conditions. In the examples, we apply LINGO to obtain the optimal solution when the equations are hard to solve directly.
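
To make the linear-programming case concrete, the following minimal Python sketch (not taken from the paper; the transition matrix A, emission matrix B, observation sequence O, and the use of SciPy's linprog are illustrative assumptions) shows why the likelihood of a hidden Markov model with known parameters is linear in the initial distribution, so that maximizing it over the probability simplex is a linear program whose optimum lies at a vertex, in the spirit of the simplex-method solution described in the abstract.

import numpy as np
from scipy.optimize import linprog

# Hypothetical HMM parameters (illustrative only, not from the paper).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition probabilities a_ij
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # emission probabilities b_i(k)
O = [0, 1, 1]                # observed symbol indices o_1, ..., o_T

# Backward recursion: beta_T(i) = 1,
# beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j).
n, T = A.shape[0], len(O)
beta = np.ones(n)
for t in range(T - 2, -1, -1):
    beta = A @ (B[:, O[t + 1]] * beta)

# The likelihood is linear in the initial distribution pi:
# P(O | pi) = sum_i pi_i * b_i(o_1) * beta_1(i) = c @ pi.
c = B[:, O[0]] * beta

# LP: maximize c @ pi subject to sum(pi) = 1, pi >= 0.
# linprog minimizes, so pass -c; the optimum is a vertex (one-hot) of the simplex.
res = linprog(-c, A_eq=np.ones((1, n)), b_eq=[1.0],
              bounds=[(0.0, 1.0)] * n, method="highs")
print("ML estimate of the initial distribution:", res.x)
print("maximized likelihood P(O | pi):", c @ res.x)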

References:

[1] NILIM A, EL GHAOUI L. Robust control of Markov decision processes with uncertain transition matrices[J]. Operations research, 2005, 53(5): 780-798.
[2] DELAGE E, MANNOR S. Percentile optimization for Markov decision processes with parameter uncertainty[J]. Operations research, 2009, 58(1): 203-213.
[3] YU S Z, KOBAYASHI H. An efficient forward-backward algorithm for an explicit-duration hidden Markov model[J]. IEEE signal processing letters, 2003, 10: 11-14.
[4] XIE L, UGRINOVSKII V A, PETERSEN I R. Finite horizon robust state estimation for uncertain finite alphabet hidden Markov models with conditional relative entropy constraints[J]. SIAM journal on control and optimization, 2008, 4(1): 4497-4502.
[5] EVANS J S, KRISHNAMURTHY V. Hidden Markov model state estimation with randomly delayed observations[J]. IEEE transactions on signal processing, 1999, 47(8): 2157-2166.
[6] LIANG J L, WANG Z D, LIU X H. Distributed state estimation for uncertain Markov-type sensor networks with mode-dependent distributed delays[J]. International journal of robust and nonlinear control, 2012, 22(3): 331-346.
[7] SHI D W, ELLIOTT R J, CHEN T W. Event-based state estimation of discrete-state hidden Markov models[J]. Automatica, 2016, 65: 12-26.
[8] LEROUX B G. Maximum-likelihood estimation for hidden Markov models[J]. Stochastic processes and their applications, 1992, 40: 127-143.
[9] SUN Q, LIM C C, LIU F. Maximum likelihood state estimation for Markov jump systems with uncertain mode-dependent delays[J]. Journal of the Franklin institute, 2016, 353: 594-614.
[10] ZHANG H, HU S L. An algorithm for maximum likelihood estimation of Markov switching models[J]. Statistics and decision, 2011, 6: 26-27.
[11] ZURAWSKI R, ZHOU M C. Petri nets and industrial applications: a tutorial[J]. IEEE transactions on industrial electronics, 1994, 41(6): 567-583.
[12] DO VAN P, BARROS A, BERENGUER C. Reliability importance analysis of Markovian systems at steady state using perturbation analysis[J]. Reliability engineering and system safety, 2013, 93(11): 1605-1615.
[13] RONG T Z, XIAO Z. Existence and uniqueness of the stationary distribution of higher-order Markov chains[J]. Systems engineering-theory & practice, 2013, 33(8): 2015-2020.
[14] MEDER Z Z, FLESCH J, PEETERS R. Optimal choice for finite and infinite horizons[J]. Operations research letters, 2012, 40(6): 469-474.
[15] MACDONALD I, ZUCCHINI W. Hidden Markov and other models for discrete-valued time series[M]. London: Chapman and Hall, 1997.
[16] CHING W K, NG M K. Markov chains: models, algorithms and applications[M]. New York: Springer, 2006.
[17] LIU K, CAO P. Markov decision processes: theory and applications[M]. Beijing: Science Press, 2015.

Memo

Memo:
-
Last Update: 2019-06-30