Decentralized Control

Decentralized control has been an active area of open research for over forty years. Covering every aspect of it would require a vast range of applied mathematics and considerable time. Consequently, in this tutorial we restrict our attention to optimal control of systems that are linear with Gaussian random noise and disturbances, where the objective is a quadratic cost function. This encompasses a general and commonly encountered class of systems, and while the results herein are aimed at this class, much of our discussion applies more broadly. Most of the results in this chapter are, of course, not new and can be found in the references.

To begin our discussion we highlight some of the key features of decentralized control with a few motivating examples. From there, we address what will be called static systems, and show that decentralized problems of this nature admit tractable solutions. Our discussion then turns to the class of dynamic problems, which involve feedback. Decentralized feedback problems are, in general, known to be difficult; nevertheless, there exist some problems for which optimal solutions may be obtained. We end our discussion with some methods for solving these types of problems.
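As a concrete reference point for the class of problems just described, one standard way to write such a problem is sketched below. The notation here (matrices $A$, $B$, $C^i$, $Q$, $R$, noise covariances $W$, $V^i$, and control policies $\gamma^i$) is our own choice of symbols, not fixed by the text: the dynamics are linear, the noise is Gaussian, the cost is quadratic, and the decentralized aspect enters through the information constraint that each controller $i$ may use only its own measurement history.

```latex
\begin{aligned}
x_{t+1} &= A x_t + B u_t + w_t,
  && w_t \sim \mathcal{N}(0, W),
  && \text{(linear dynamics, Gaussian process noise)}\\
y_t^i &= C^i x_t + v_t^i,
  && v_t^i \sim \mathcal{N}(0, V^i),
  && \text{(measurement of controller $i$)}\\
\min_{\gamma^1,\dots,\gamma^n} \;
  & \mathbb{E}\!\left[\sum_{t=0}^{T} x_t^\top Q x_t + u_t^\top R u_t\right]
  && && \text{(quadratic cost)}\\
\text{subject to} \;
  & u_t^i = \gamma_t^i\!\left(y_0^i, \dots, y_t^i\right)
  && && \text{(decentralized information constraint)}
\end{aligned}
```

Removing the information constraint, so that every controller sees every measurement, recovers the classical centralized linear-quadratic-Gaussian problem; it is the constraint on who sees what that makes the decentralized versions difficult.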