An Approximate Dynamic Programming Approach to Decentralized Control of Stochastic Systems

We consider the problem of computing decentralized control policies for stochastic systems with finite state and action spaces. Synthesis of optimal decentralized policies for such problems is known to be NP-hard \cite{Tsitsiklis85}. Here we focus on methods for efficiently computing meaningful suboptimal decentralized control policies. The algorithms we present are based on approximating optimal $Q$-functions. We show that the performance loss incurred by choosing decentralized policies greedily with respect to an approximate $Q$-function can be bounded in terms of the approximation error.
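To make the idea concrete, the following is a minimal sketch of extracting a decentralized greedy policy from an approximate $Q$-function. It assumes (as an illustration, not as the paper's specific construction) that the approximate $Q$-function decomposes into per-agent local terms $\hat{Q}_i(x_i, u_i)$, so each agent can act greedily on its own local state without communicating; all sizes and the random $\hat{Q}_i$ values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: 2 agents, each with a finite local
# state space and action space (illustrative values only).
n_agents, n_states, n_actions = 2, 4, 3

# Illustrative approximate local Q-functions, one table per agent;
# entry Q_hat[i][x, u] approximates the value of agent i taking
# local action u in local state x.
Q_hat = [rng.standard_normal((n_states, n_actions)) for _ in range(n_agents)]

def decentralized_greedy_policy(Q_local):
    """Each agent acts greedily on its own approximate Q-function,
    mapping each local state to a maximizing local action.
    No coordination between agents is required."""
    return [Q.argmax(axis=1) for Q in Q_local]

policies = decentralized_greedy_policy(Q_hat)
for i, pi in enumerate(policies):
    print(f"agent {i} policy (state -> action):", pi)
```

The point of this structure is that once the approximate $Q$-function is in hand, policy extraction is a cheap per-agent table lookup; the analysis in the paper concerns how far such greedy decentralized policies can fall short of optimal as a function of how well $\hat{Q}$ approximates $Q^*$.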