Reasoning on Natural Inputs

Petar Veličković
DeepMind Technologies

Classical algorithms are designed with abstraction in mind, requiring their inputs to conform to stringent preconditions. This is done for a clear reason: keeping the inputs constrained enables an uninterrupted focus on "reasoning" and makes it far easier to certify the resulting procedure's correctness, i.e., to guarantee stringent postconditions. However, we must never forget why we design algorithms: to apply them to real-world problems.
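
As a toy illustration of this contract (the choice of algorithm and all names below are my own, not from the talk): a classical routine such as Euclid's algorithm states stringent preconditions on its inputs and, in exchange, can certify a stringent postcondition on its output.

    def integer_gcd(a: int, b: int) -> int:
        # Preconditions: the inputs must already be abstract, well-formed
        # objects -- non-negative integers, not raw real-world data.
        assert isinstance(a, int) and isinstance(b, int), "inputs must be integers"
        assert a >= 0 and b >= 0, "inputs must be non-negative"
        while b:
            a, b = b, a % b
        # Postcondition: by Euclid's argument, the result divides both
        # original inputs, and is the greatest integer that does so.
        return a

Nothing about this routine tells us how to obtain well-formed integers from messy, natural observations in the first place; that step is left entirely to the user.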

For an example of why this goal is persistently at odds with the way such algorithms are designed, we will look back to a 1955 study by Harris and Ross, which is among the first to introduce the maximum flow problem. One issue discovered in this study remains relevant today: abstracting raw inputs often leads to a loss of information or an incorrect model of the problem, meaning that the problem we solve algorithmically may no longer correspond to the problem we truly wish to solve.

Mindful of the above, I will argue that the latest advances in neural algorithmic reasoning offer a remarkably elegant pipeline for reasoning on natural inputs, carefully leveraging the tried-and-tested power of deep neural networks as feature extractors. Specifically, I will present early work suggesting that this pipeline can usefully support classical algorithmic reasoning in generic reinforcement learning environments (such as Atari).
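
A minimal sketch of what such a pipeline could look like, assuming an encode-process-decode design in which the processor has been pre-trained on abstract algorithmic inputs and is then kept frozen; all module names and sizes here are illustrative assumptions, not the talk's actual architecture.

    import torch
    import torch.nn as nn

    class NaturalInputReasoner(nn.Module):
        """Encode natural inputs into an abstract latent space, 'execute'
        a pre-trained algorithmic processor there, then decode the answer."""
        def __init__(self, nat_dim, hid_dim, out_dim, pretrained_processor):
            super().__init__()
            self.encoder = nn.Linear(nat_dim, hid_dim)  # natural -> abstract
            self.processor = pretrained_processor       # trained on abstract data
            for p in self.processor.parameters():       # freeze: reuse its "reasoning"
                p.requires_grad = False
            self.decoder = nn.Linear(hid_dim, out_dim)  # abstract -> task output

        def forward(self, x):
            z = self.encoder(x)    # e.g. features extracted from Atari frames
            z = self.processor(z)  # latent-space analogue of running the algorithm
            return self.decoder(z)

    # Stand-in processor; in practice this would be, say, a graph neural
    # network trained to imitate a classical algorithm step by step.
    processor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
    model = NaturalInputReasoner(128, 64, 1, processor)
    y = model(torch.randn(32, 128))  # a batch of 32 natural inputs

Only the encoder and decoder need to learn anything about the natural domain; the hard-won algorithmic "reasoning" lives in the frozen processor.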

