Combinatorial Optimization is fundamentally different from recognizing cats and dogs: in the limit of infinite computation, the task is trivial. Therefore, the goal of learning-based approaches is to reduce total computation, possibly at the expense of solution quality, by learning from past experience.
A well-trained model can reduce the effective size of the (relevant) search space, but computation spent on model inference can no longer be spent on search, so this is not a guaranteed improvement. Therefore, one should carefully choose how and where to apply learned models: count your flops and make your flops count!
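The trade-off above can be made concrete with a toy sketch: solve a small TSP instance once by exhaustive search and once by searching only each city's k nearest-neighbour successors, which here stands in for a learned pruning model. All names, the instance, and the flat "inference cost" charged for building the neighbour lists are illustrative assumptions, not part of the talk; the point is only that pruned evaluations plus inference cost must come in under the exhaustive count for the model to pay off, and that the pruned optimum may be worse.

```python
import random
from itertools import permutations

# Toy TSP instance (illustrative, not from the talk): 8 cities, random distances.
random.seed(0)
n = 8
dist = [[0 if i == j else random.randint(1, 100) for j in range(n)]
        for i in range(n)]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Exhaustive search: evaluates all (n-1)! tours starting from city 0.
exhaustive_evals = 0
best = None
for perm in permutations(range(1, n)):
    exhaustive_evals += 1
    length = tour_length((0,) + perm)
    if best is None or length < best:
        best = length

# Stand-in for a learned model: restrict each city's successors to its
# k nearest neighbours, shrinking the search tree.
k = 3
neighbours = {i: sorted((j for j in range(n) if j != i),
                        key=lambda j: dist[i][j])[:k] for i in range(n)}

pruned_evals = 0
best_pruned = None

def dfs(tour, visited):
    global pruned_evals, best_pruned
    if len(tour) == n:
        pruned_evals += 1
        length = tour_length(tuple(tour))
        if best_pruned is None or length < best_pruned:
            best_pruned = length
        return
    candidates = [j for j in neighbours[tour[-1]] if j not in visited]
    if not candidates:  # fall back so a complete tour always exists
        candidates = [j for j in range(n) if j not in visited]
    for j in candidates:
        dfs(tour + [j], visited | {j})

dfs([0], {0})

# Charge the "model" a flat inference cost, in the same unit (evaluations).
inference_cost = n * n
print("exhaustive:", exhaustive_evals, "-> optimum", best)
print("pruned + inference:", pruned_evals + inference_cost, "-> optimum", best_pruned)
```

The pruned search is only a win when `pruned_evals + inference_cost` stays below `exhaustive_evals`, and `best_pruned` can exceed `best`: exactly the computation-versus-quality trade at the heart of the abstract.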
In this talk I will go into more depth and discuss a number of practical examples in the context of this trade-off. I will present several challenges and provide some guidelines that may help define future research directions for effectively applying Deep Learning to Combinatorial Optimization.