We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on L_p perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the L_∞ perturbations case is provably computationally harder than the case 2 ≤ p < ∞.
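For context, a minimal sketch of the standard notion of robust risk assumed here (the distribution D, perturbation radius γ, and weight vector w are illustrative symbols, not taken from the abstract): a halfspace h_w(x) = sign(<w, x>) suffers robust error on (x, y) if some L_p-bounded perturbation of x flips its prediction away from y,

% Hedged sketch of the standard L_p robust risk; D, γ, w, δ are illustrative.
\[
  \mathrm{R}^{p,\gamma}_{D}(h_w)
  \;=\;
  \Pr_{(x,y)\sim D}\Big[\, \exists\, \delta \in \mathbb{R}^d,\ \|\delta\|_p \le \gamma
  \ \text{such that}\ \operatorname{sign}(\langle w, x+\delta \rangle) \neq y \,\Big].
\]

Adversarially robust agnostic learning asks for a halfspace whose robust risk is competitive with the best halfspace, with no assumptions on D.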
Joint work with Ilias Diakonikolas and Daniel M. Kane.