Consider a finite-state finite-action Markovian decision process with unobservable costs in the sense that the total discounted cost is to be assessed at infinity. It is assumed that the initial probability distribution over the state space is known. A new Markovian decision process is then constructed having the same action space as before, but with the new state space being the set of all probability distributions over the original state space. Two sets of policies are defined and some immediate results are developed. 8 pp. Bibliog.
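The construction sketched in the abstract can be illustrated concretely. In the new process, the "state" is a probability distribution (belief) over the original states; choosing an action pushes that distribution through the action's transition matrix, and the unobserved per-stage cost is replaced by its expectation under the current belief. The following is a minimal sketch of that idea, not code from the paper; the transition matrices, cost vectors, discount factor, and truncation horizon are all hypothetical.

```python
# Illustrative sketch (not from the paper): the belief-state construction
# for a finite-state, finite-action chain. All numbers are hypothetical.

def belief_update(belief, P):
    """Push a distribution over states through a transition matrix
    P[s][t]: this distribution is the 'state' of the new process."""
    n = len(belief)
    return [sum(belief[s] * P[s][t] for s in range(n)) for t in range(n)]

def expected_cost(belief, c):
    """Expected immediate cost of an action with per-state cost vector c,
    assessed against the current belief (the cost itself is unobserved)."""
    return sum(b * ci for b, ci in zip(belief, c))

def discounted_value(belief, policy, P, c, beta, horizon=200):
    """Total discounted expected cost of a stationary policy that maps
    beliefs to actions, truncated at a finite horizon for illustration."""
    total = 0.0
    for k in range(horizon):
        a = policy(belief)                      # action chosen from the belief
        total += beta ** k * expected_cost(belief, c[a])
        belief = belief_update(belief, P[a])    # new state of the new process
    return total

# Hypothetical two-state, two-action example.
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.5, 0.5]]}
c = {0: [1.0, 2.0],
     1: [1.5, 1.5]}
b0 = [0.6, 0.4]   # known initial distribution over the original states
v = discounted_value(b0, lambda b: 0, P, c, beta=0.9)
```

Since the initial distribution is assumed known, every reachable belief is determined by the action history alone, which is what lets the distribution itself serve as a fully observable state.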
This report is part of the RAND Corporation paper series. Papers were products of the RAND Corporation from 1948 to 2003 that captured speeches, memorials, and derivative research, usually prepared on authors' own time and meant to be the scholarly or scientific contribution of individual authors to their professional fields. Papers were less formal than reports and did not require rigorous peer review.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.