The traditional methods of numerical linear algebra are prohibitively expensive for high-dimensional problems, for which even a single matrix–vector product with a dense vector may be too costly. In this talk I will discuss a general framework for reducing the cost of classical iterative schemes such as Jacobi iteration by randomly sparsifying the approximate solution at each iteration. I will provide a thorough characterization of the error properties of Jacobi iteration with repeated random sparsification and present results of numerical tests applying the scheme to coupled cluster calculations.
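The basic idea can be sketched in a few lines of Python. This is a minimal illustration under assumptions of my own, not the talk's exact algorithm: I use a simple unbiased sparsification rule (keep entry $i$ with probability $p_i \propto |x_i|$, capped at 1, and rescale by $1/p_i$), and the parameters `m`, `n_iter`, and `burn`, as well as the averaging of late iterates to tame the sampling noise, are illustrative choices.

```python
import numpy as np

def sparsify(x, m, rng):
    """Unbiased random sparsification.

    Entry i is kept with probability p_i = min(1, m * |x_i| / ||x||_1)
    and rescaled by 1/p_i, so that E[sparsify(x)] = x while the
    expected number of nonzeros is at most m.
    """
    p = np.minimum(1.0, m * np.abs(x) / np.abs(x).sum())
    keep = rng.random(x.size) < p
    out = np.zeros_like(x)
    out[keep] = x[keep] / p[keep]
    return out

def sparse_jacobi(A, b, m, n_iter=500, burn=250, seed=0):
    """Jacobi iteration with repeated random sparsification (sketch).

    Each step applies the standard Jacobi update and then sparsifies
    the iterate; averaging the post-burn-in iterates reduces the
    variance introduced by the sparsification.
    """
    rng = np.random.default_rng(seed)
    d = np.diag(A)
    x = np.zeros_like(b)
    avg = np.zeros_like(b)
    for k in range(n_iter):
        x = (b - (A @ x - d * x)) / d  # standard Jacobi update
        x = sparsify(x, m, rng)        # randomly sparsify the iterate
        if k >= burn:
            avg += x
    return avg / (n_iter - burn)
```

Because each sparsified iterate remains an unbiased (if noisy) estimate of the deterministic Jacobi iterate, every matrix–vector product touches only the surviving entries of `x`, which is where the cost savings for high-dimensional problems come from.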