Abstract:
We consider the task of learning the so-called reversed-wedge problem, using a multi-interacting perceptron with first- and third-order synapses, where the third-order synaptic couplings are expressed as products of the first-order synapses associated with the neurons involved in the corresponding multi-interaction. This correlation condition allows the training of the multi-interacting perceptron to be achieved by adjusting only the set of first-order weights, so that the learning rate scales with the dimensionality of a simple perceptron. Remarkably, if the width of the “reversed” inner region (wedge) is smaller than , the high-temperature approach predicts a transition from a poor generalization regime to a state with good performance, where the generalization error is identical to that of a simple perceptron learning a linearly separable rule. The simulation results are in excellent agreement with the analytical predictions.
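The correlation condition described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes the standard reversed-wedge rule σ = sign(λ(λ − γ)(λ + γ)) on the normalized field λ = B·ξ/√N, and a student whose third-order couplings are products of its first-order weights, J_ijk ∝ J_i J_j J_k, so that (up to O(1/N) diagonal corrections) its post-synaptic field reduces to a·λ + c·λ³. The parameter names (`gamma`, `a`, `c`) and the specific value γ = 0.5 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200        # input dimension (illustrative)
gamma = 0.5    # wedge half-width (hypothetical value)

# Teacher vector B, normalized so that |B|^2 = N
B = rng.standard_normal(N)
B *= np.sqrt(N) / np.linalg.norm(B)

def reversed_wedge(lam, gamma):
    # Reversed-wedge rule: the output flips sign inside the
    # inner region |lam| < gamma.
    return np.sign(lam * (lam - gamma) * (lam + gamma))

def student_output(J, xi, a, c):
    # With correlated third-order couplings J_ijk ~ J_i J_j J_k,
    # the field is effectively a*lam + c*lam^3 in lam = J.xi/sqrt(N).
    lam = J @ xi / np.sqrt(len(xi))
    return np.sign(a * lam + c * lam**3)

# Choosing a = -gamma^2, c = 1 gives the field lam*(lam^2 - gamma^2),
# i.e. exactly the reversed-wedge nonlinearity: once J aligns with B,
# the student reproduces the teacher on every pattern.
xi = rng.choice([-1.0, 1.0], size=N)
lam = B @ xi / np.sqrt(N)
assert student_output(B, xi, -gamma**2, 1.0) == reversed_wedge(lam, gamma)
```

Because only the N first-order weights J_i enter, training this correlated machine involves the same number of free parameters as a simple perceptron, consistent with the scaling claim in the abstract.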