The traditional approach to solving dynamic decision problems under uncertainty is to approximate the stochastic process of the underlying random parameters by a scenario tree. This method suffers from a curse of dimensionality since most reasonable scenario trees grow exponentially with the number of decision stages. Instead of discretizing the stochastic process, one can alternatively approximate the recourse decisions by a linearly parameterized family of decision rules. This approach is computationally attractive since the resulting approximate problems often scale polynomially with the number of decision stages. While this decision rule approximation provides a feasible solution and an upper bound on the optimal value of the original problem, it can incur a substantial loss of optimality. In this talk we show that applying the decision rule approximation to the primal as well as a dual version of the original problem yields upper and lower bounds on the true optimal value. The gap between the bounds estimates the approximation error. We also show that linear decision rules, which approximate the recourse decisions by linear functions of the random parameters, can solve a benchmark problem from the literature with up to 80 stages to within about 5% of optimality. However, linear decision rules can perform poorly if the true optimal recourse decisions are highly non-linear, as is often the case for stochastic programs with many random parameters per stage. In such cases, we improve the approximation quality by using lifting techniques to optimize efficiently over a class of piecewise linear continuous decision rules. We show that the proposed techniques can also be used to estimate the suboptimality of parametric output feedback policies in robust and stochastic control problems with state constraints.
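To make the linear decision rule idea concrete, the following is a minimal sketch on a toy two-stage robust problem. The instance, cost data, and demand model are illustrative assumptions, not taken from the talk: a here-and-now order quantity x is fixed before the uncertain demand d(xi) = 5 + 3*xi is revealed, and the recourse order y is restricted to the linear rule y(xi) = y0 + y1*xi. Because every constraint is then linear in xi, enforcing the constraints at the endpoints of the interval [-1, 1] is sufficient, which yields an ordinary LP whose optimal value is an upper bound on the true adjustable optimum.

```python
# Linear decision rule (LDR) sketch for a toy two-stage robust problem
# (illustrative instance, not from the talk):
#
#   min_x  c*x + max_{xi in [-1,1]}  q*y(xi)
#   s.t.   x + y(xi) >= d(xi),  y(xi) >= 0  for all xi,  x >= 0,
#
# with demand d(xi) = d0 + d1*xi and recourse rule y(xi) = y0 + y1*xi.
import numpy as np
from scipy.optimize import linprog

c, q = 2.0, 1.0      # here-and-now cost vs. recourse cost (assumed data)
d0, d1 = 5.0, 3.0    # demand model d(xi) = d0 + d1*xi (assumed data)

# Decision vector z = [x, y0, y1, t]; t is an epigraph variable for the
# worst-case recourse cost, i.e. t >= q*y(xi) at each vertex of [-1, 1].
obj = np.array([c, 0.0, 0.0, 1.0])

A_ub, b_ub = [], []
for xi in (-1.0, 1.0):                                   # interval vertices
    A_ub.append([0.0, q, q * xi, -1.0])                  # q*y(xi) - t <= 0
    b_ub.append(0.0)
    A_ub.append([-1.0, -1.0, -xi, 0.0])                  # x + y(xi) >= d(xi)
    b_ub.append(-(d0 + d1 * xi))
    A_ub.append([0.0, -1.0, -xi, 0.0])                   # y(xi) >= 0
    b_ub.append(0.0)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (None, None), (None, None), (None, None)])
x, y0, y1, t = res.x
print(f"upper bound = {res.fun:.2f}, x = {x:.2f}, "
      f"y(xi) = {y0:.2f} + {y1:.2f}*xi")
```

In this toy instance the recourse cost q is below the here-and-now cost c, so the LDR solution postpones all ordering (x = 0) and tracks demand through the rule y(xi). The piecewise linear rules mentioned above would be obtained analogously by lifting xi to nonlinear components such as (max(xi, 0), min(xi, 0)) and applying the same linear construction in the lifted space.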
Workshop IV: Robust Optimization