Improving the Generality of Neural Network Potentials with Constant Input Scaling

Mark DelloStritto and Michael Klein, submitted to Journal of Chemical Physics


The use of neural network potentials (NNPs) to fit ab initio potential energy surfaces is one of the most accurate and versatile methods to expand the utility of high-quality, small-scale quantum calculations, allowing for ab initio accuracy in simulations of large systems with an increase in efficiency of several orders of magnitude. As with all neural network applications, however, NNPs present challenges with respect to overfitting and generalization. In particular, choosing and normalizing the training data can be a complex task, as biases or gaps in the data can negatively impact NNP performance. Here, it is shown that normalizing the inputs to the NNP by a constant related to the interaction cutoff volume leads to a significant improvement in the generalization of NNPs trained for Ar, Si, and NaCl compared to the standard approach, in which each input is normalized by subtracting its average and dividing by its standard deviation. Specifically, NNPs trained using inputs scaled by a constant yield more accurate energies for defect structures not in the training set, and an NNP trained on isolated Ar clusters yields an accurate potential for solid and liquid Ar. The standard approach to NNP input scaling generally yields worse defect formation energies and cannot be transferred from isolated cluster calculations to periodic systems. We note that, when using volume scaling, the first layer of the neural network effectively renormalizes the inputs, reversing trends in the input data and narrowing the distribution of input values. The first layer thus appears to learn the distribution of inputs and renormalize it for processing by subsequent layers, leading to improved performance by removing a priori assumptions about the input distributions.
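The two scaling schemes contrasted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the exact constant used in the paper is only described as being related to the interaction cutoff volume, so here it is assumed to be the spherical cutoff volume (4/3)πr³ as a stand-in, and the inputs `G` stand for a generic matrix of per-atom descriptor values (rows are atomic environments, columns are descriptor components).

```python
import numpy as np

def zscore_scale(G):
    """Standard per-input normalization: subtract the mean of each input
    component over the training set and divide by its standard deviation."""
    mean = G.mean(axis=0)
    std = G.std(axis=0)
    std[std == 0.0] = 1.0  # guard against constant input components
    return (G - mean) / std

def volume_scale(G, r_cut):
    """Constant input scaling (illustrative): divide every input by a single
    constant related to the interaction cutoff volume. Here the spherical
    cutoff volume (4/3) * pi * r_cut**3 is assumed; the paper's exact
    constant may differ."""
    v_cut = (4.0 / 3.0) * np.pi * r_cut ** 3
    return G / v_cut

# Toy data: 100 atomic environments, 8 descriptor components.
rng = np.random.default_rng(0)
G = rng.normal(loc=5.0, scale=2.0, size=(100, 8))

Gz = zscore_scale(G)          # each column has zero mean, unit variance
Gv = volume_scale(G, r_cut=6.0)  # all components share one scale factor
```

The key difference is that z-score scaling erases per-component statistics of the training set (baking its distribution into the preprocessing), whereas the constant scaling preserves relative magnitudes across components and leaves any distributional adaptation to the network's first layer.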