Consider a finite-state, finite-action Markovian decision process with unobservable costs, in the sense that the total discounted cost is to be assessed at infinity. The initial probability distribution over the state space is assumed known. A new Markovian decision process is then constructed with the same action space as before, but with a new state space: the set of all probability distributions over the original state space. Two classes of policies are defined and some immediate results are developed. 8 pp. Bibliog.
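The construction described in the abstract can be sketched as follows. This is an illustrative rendering, not the paper's own formulation: all names (`step`, `discounted_cost`, `P`, `c`) are assumptions. Because costs are unobservable, the decision maker carries forward a probability distribution `b` over the original states; an action `a` moves the distribution through the transition matrix `P[a]` and incurs the expected one-stage cost `b @ c[a]`.

```python
import numpy as np

def step(b, a, P, c):
    """One transition of the constructed process (hypothetical sketch).

    b : belief, shape (n,), a distribution over the n original states
    a : action index
    P : transition matrices, shape (m, n, n); P[a][i][j] = Pr(j | i, a)
    c : one-stage costs, shape (m, n); c[a][i] = cost of action a in state i
    Returns the next distribution over states and the expected one-stage cost.
    """
    expected_cost = float(b @ c[a])   # cost assessed in expectation, not observed
    next_b = b @ P[a]                 # distribution pushed through the chain
    return next_b, expected_cost

def discounted_cost(b0, policy, P, c, beta, horizon):
    """Truncated approximation of the total discounted cost from the
    known initial distribution b0, under a policy mapping beliefs to actions."""
    b, total, discount = b0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(b)
        b, cost = step(b, a, P, c)
        total += discount * cost
        discount *= beta
    return total
```

Note that, with costs unobserved, the new state (the distribution) evolves deterministically given the action, so the constructed process is a deterministic-transition decision problem over the simplex of distributions.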
This report is part of the RAND Corporation Paper series. The Paper series was a product of the RAND Corporation from 1948 to 2003 that captured speeches, memorials, and derivative research, usually prepared on authors' own time and meant to be the scholarly or scientific contribution of individual authors to their professional fields. Papers were less formal than reports and did not require rigorous peer review.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.