Resolving the Mechanisms of Ca and Mg Carbonate Ion-Pair Formation with Multi-Level Molecular Dynamics/Quantum Mechanics Simulations
J.-N. Boyn and E. A. Carter, submitted to J. Phys. Chem. B (2023)
Abstract: coming soon!
A Neural Network Water Model based on the MB-pol Many-Body Potential
Maria Muniz, Roberto Car, Athanassios Panagiotopoulos, accepted in J. Phys. Chem. B (2023)
Critical behavior in a chiral molecular model
Pablo M. Piaggi, Roberto Car, Frank H. Stillinger, Pablo G. Debenedetti, submitted to J. Chem. Phys. (2023)
Ab initio Generalized Langevin Equation
Pinchen Xie, Roberto Car, Weinan E, submitted to Proc. Natl. Acad. Sci. (2023); arXiv preprint arXiv:2211.06558
Effect of Phonons and Impurities on the Quantum Transport in XXZ Spin-Chains
Amartya Bose, preprint https://arxiv.org/abs/2206.11156 (2022)
Zero-Cost Corrections to Influence Functional Coefficients from Bath Response Functions
Amartya Bose, preprint https://arxiv.org/abs/2205.15072 (2022)
Improving the Generality of Neural Network Potentials with Constant Input Scaling
Mark DelloStritto and Michael Klein, submitted to J. Chem. Phys.
The use of neural network potentials (NNPs) to fit ab initio potential energy surfaces is one of the most accurate and versatile ways to extend the utility of high-quality, small-scale quantum calculations, enabling ab initio accuracy in simulations of large systems at a computational cost several orders of magnitude lower. As with all neural network applications, however, NNPs present challenges with respect to over-fitting and generalization. In particular, choosing and normalizing the training data can be a complex task, as biases or gaps in the data can degrade NNP performance. Here, it is shown that normalizing the inputs to the NNP by a constant related to the interaction cutoff volume leads to a significant improvement in the generalization of NNPs trained for Ar, Si, and NaCl compared to the standard approach, in which each input is normalized by subtracting its average and scaling by its standard deviation. Specifically, NNPs trained using inputs scaled by a constant yield more accurate energies for defect structures not in the training set, and an NNP trained on isolated Ar clusters yields an accurate potential for solid and liquid Ar. The standard approach to NNP input scaling generally yields worse defect formation energies and cannot be transferred from isolated cluster calculations to periodic systems. We note that, when using volume scaling, the first layer of the neural network effectively renormalizes the inputs, reversing trends in the input data and narrowing the distribution of input values. The first layer thus appears to learn the distribution of inputs and renormalize it for processing by subsequent layers, improving performance by removing a priori assumptions about the input distributions.
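To illustrate the two input-scaling schemes contrasted in this abstract, the Python sketch below compares standard per-input z-score normalization with scaling by a single constant related to the interaction cutoff volume. It is a minimal sketch only, not the authors' implementation: the descriptor matrix, cutoff radius, and the choice of a spherical cutoff volume as the constant are illustrative assumptions.

import numpy as np

def zscore_scale(inputs):
    """Standard approach: subtract the training-set mean and divide by the
    standard deviation, independently for each descriptor input."""
    mean = inputs.mean(axis=0)
    std = inputs.std(axis=0)
    return (inputs - mean) / np.where(std > 0.0, std, 1.0)

def constant_volume_scale(inputs, r_cut):
    """Constant scaling: divide every input by one constant related to the
    interaction cutoff volume (here assumed to be the spherical cutoff
    volume), independent of the particular training set."""
    v_cut = 4.0 / 3.0 * np.pi * r_cut**3
    return inputs / v_cut

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical descriptor matrix: n_environments x n_symmetry_functions
    descriptors = rng.uniform(0.0, 50.0, size=(1000, 32))
    r_cut = 6.0  # assumed interaction cutoff in Angstrom

    x_std = zscore_scale(descriptors)
    x_const = constant_volume_scale(descriptors, r_cut)
    print("z-score scaled range:  ", x_std.min(), x_std.max())
    print("constant-scaled range: ", x_const.min(), x_const.max())

Because the constant scaling does not depend on training-set statistics, the same transformation applies unchanged to configurations outside the training distribution (e.g., defects or periodic systems), which is the transferability advantage the abstract describes.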
Using differentiable programming to obtain an energy and density-optimized exchange-correlation functional
Sebastian Dick and Marivi Fernandez-Serra, submitted to Phys. Rev. Lett. (2021)
Automatic machine-learning potential generation scheme and simulation protocol for the LiGePS-type superionic conductors
Jianxing Huang, Linfeng Zhang, Han Wang, Jinbao Zhao, Jun Cheng, Weinan E, J. Phys. Chem. Lett.; arXiv preprint arXiv:2006.03320