Abstract

We consider the efficient solution of the Bellman equation for finite Markov Decision Processes (MDPs) using the Enhanced Subspace Method (ESM). Whereas policy evaluation has traditionally relied on matrix-splitting techniques, as described in Puterman's seminal work, ESM is a dynamic, non-stationary subspace method that is shown to substantially outperform conventional iterative solvers, yielding significant efficiency gains during policy evaluation. This paper presents the motivation, underlying principles, and practical utility of ESM in the context of reinforcement learning, and compares its effectiveness against other established methods through rigorous experimentation.
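
For context, policy evaluation in a finite MDP reduces to solving the linear Bellman system v = r + γ P v for a fixed policy. The abstract does not specify the ESM itself, so the sketch below only illustrates the conventional baseline it is measured against (stationary fixed-point policy evaluation), alongside a generic Krylov-subspace solve of the same system via scipy.sparse.linalg.gmres as a stand-in for subspace-type solvers; the problem size, random data, and tolerances are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Illustrative only: the abstract does not specify the ESM, so this sketch
# shows the standard policy-evaluation problem and two conventional solvers.
rng = np.random.default_rng(0)
n_states, gamma = 50, 0.95

# Transition matrix P (rows sum to 1) and reward vector r for a fixed policy.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)

# Baseline 1: iterative policy evaluation, a stationary fixed-point iteration
# v <- r + gamma * P v (the kind of iterative solver ESM is compared against).
v = np.zeros(n_states)
for _ in range(10_000):
    v_next = r + gamma * P @ v
    if np.max(np.abs(v_next - v)) < 1e-10:
        v = v_next
        break
    v = v_next

# Baseline 2: a generic Krylov-subspace solve of (I - gamma * P) v = r,
# shown only as an example of a subspace-type solver, not as the ESM.
A = np.eye(n_states) - gamma * P
v_subspace, info = gmres(A, r)

print("fixed-point vs. subspace solve, max abs diff:",
      np.max(np.abs(v - v_subspace)))
```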

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
