Lanczos method

Although there are several packages, such as LAPACK, for full diagonalization, it is practically impossible to fully diagonalize large-scale matrices whose dimension exceeds about one million. In condensed matter physics, one wants the lowest (ground-state) eigenvalue and the corresponding eigenvector in order to characterize the nature of the target quantum many-body system. For this purpose, the Lanczos method is commonly used to obtain the ground-state eigenvalue and eigenvector.

In the Lanczos method, the Hamiltonian is applied successively to an initial vector (typically a random vector), from which the lowest eigenvalue and the corresponding eigenvector are obtained. Since only two vectors need to be kept in memory, the ground state of very large matrices, with dimensions up to tens of billions, can be obtained.
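
As an illustration, the following is a minimal sketch of the Lanczos iteration in Python with NumPy (the function name and parameters are illustrative and not taken from any of the packages mentioned below): the Hamiltonian is applied repeatedly to a random starting vector, a small tridiagonal matrix is built from the generated Krylov basis, and its lowest eigenvalue approximates the ground-state energy.

import numpy as np

def lanczos_ground_energy(H, n_iter=100, seed=0):
    # start from a normalized random vector
    rng = np.random.default_rng(seed)
    v = rng.normal(size=H.shape[0])
    v /= np.linalg.norm(v)
    v_prev = np.zeros_like(v)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(n_iter):
        w = H @ v                      # apply the Hamiltonian repeatedly
        alpha = v @ w
        alphas.append(alpha)
        w = w - alpha * v - beta * v_prev
        beta = np.linalg.norm(w)
        if beta < 1e-12:               # Krylov space exhausted
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    k = len(alphas)
    # the lowest eigenvalue of the small tridiagonal matrix approximates E_0
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# check against full diagonalization on a small random symmetric matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 500))
H = (A + A.T) / 2.0
print(lanczos_ground_energy(H), np.linalg.eigvalsh(H)[0])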

The Lanczos method is implemented in several exact diagonalization packages such as
TITPACK, KobePACK, SpinPACK, ALPS, and HΦ. In particular, HΦ implements a recently developed modern algorithm for obtaining several low-energy excited states (the LOBPCG method). By using the LOBPCG method, several excited states can be obtained in a single calculation.

Markov-chain Monte Carlo method (MCMC)

An efficient way of computing statistical averages of physical quantities at equilibrium by replacing the statistical summation over all microscopic states with stochastic sampling. It is often called simply the “Monte Carlo method”. For example, in the case of the Ising model, the total number of microscopic states increases exponentially with the number of spins. Therefore, it is practically impossible to compute expectation values by strictly following the definition. In the Markov-chain Monte Carlo method, a stochastic process is defined so that it satisfies the ergodicity condition and the balance condition. Temporal averages over the microscopic states generated in this way should equal the thermal averages at equilibrium. Slow relaxation is often problematic for systems near criticality or with frustration. There are a number of techniques designed to deal with this problem, such as extended ensemble methods and the variational Monte Carlo method.
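
As a concrete illustration, the following is a minimal sketch of a single-spin-flip Metropolis update for the two-dimensional Ising model, one common choice of Markov chain satisfying the balance condition; the lattice size, temperature, and numbers of sweeps are arbitrary example values.

import numpy as np

def metropolis_sweep(spins, beta, rng):
    # one sweep = L*L attempted single-spin flips with the Metropolis rule
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn    # energy change of the flip for J = 1
        if rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(0)
L, beta = 16, 0.5
spins = rng.choice([-1, 1], size=(L, L))
samples = []
for sweep in range(2000):
    metropolis_sweep(spins, beta, rng)
    if sweep >= 500:                   # discard the thermalization steps
        samples.append(abs(spins.mean()))
print("<|m|> =", np.mean(samples))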

Molecular Dynamics (MD)

Methods of simulating many-particle systems by solving equations of motion such as Newton's equations. Mathematically, in a molecular dynamics simulation one solves a system of simultaneous ordinary differential equations using, e.g., the Runge-Kutta method or the velocity Verlet method. While a simple implementation allows simulation at fixed energy and fixed volume, introducing the Nose-Hoover thermostat makes simulation at fixed temperature possible as well. Similarly, simulation at fixed pressure or at fixed chemical potential is possible. There are various choices of the force field, i.e., the interaction energy between particles, ranging from simple short-ranged forces such as the hard-sphere potential and the Lennard-Jones potential to long-ranged ones such as the Coulomb potential, or to more realistic and more complicated ones, depending on the purpose of the simulation.
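
The following is a minimal sketch of the velocity Verlet integrator for a small Lennard-Jones cluster (no periodic boundaries, cutoffs, neighbor lists, or thermostat; all parameter values are illustrative).

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # pairwise Lennard-Jones forces, summed over all pairs
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            sr6 = (sigma * sigma / r2) ** 3
            fij = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return forces

def velocity_verlet_step(pos, vel, dt, mass=1.0):
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f / mass        # half kick
    pos_new = pos + dt * vel_half               # drift
    vel_new = vel_half + 0.5 * dt * lj_forces(pos_new) / mass  # second half kick
    return pos_new, vel_new

# eight particles on a small cubic cluster, spacing close to the LJ minimum
pos = 1.2 * np.array([[x, y, z] for x in range(2) for y in range(2) for z in range(2)], float)
vel = np.zeros_like(pos)
for step in range(1000):
    pos, vel = velocity_verlet_step(pos, vel, dt=0.005)
print(pos[0])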

Monte Carlo

A simulation method is called a Monte Carlo method if sampling with pseudo-random numbers is used. The simplest example is random sampling with a weight that is uniform in configuration space. An important category is that of importance sampling methods, e.g., Markov-chain Monte Carlo. The method is also used for solving optimization problems via simulated annealing.
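
A textbook example of such simple (non-importance) sampling is the estimation of pi from uniformly distributed random points, sketched below.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
xy = rng.random((n, 2))                         # uniform points in the unit square
inside = np.count_nonzero(xy[:, 0]**2 + xy[:, 1]**2 < 1.0)
print("pi is roughly", 4.0 * inside / n)        # fraction inside the quarter circle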

Neural network

An (artificial) neural network is a machine learning method that imitates the neural structure of the animal brain. A neural network has a structure in which many nodes (neurons) are connected. There are various types of neural networks; typical examples are feed-forward neural networks (also called perceptrons) used for supervised learning and restricted Boltzmann machines used for unsupervised learning. In recent years, it has become possible to dramatically improve the learning ability by introducing structures composed of many layers (deep neural networks). Neural networks are widely used in various fields such as image recognition, speech recognition, language analysis, generative modeling, and classification. In the field of materials science as well, applications to machine-learning force fields, variational wave functions, the exploration of new materials (materials informatics), and so on are advancing.
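
As a minimal structural sketch (not a complete learning example), the following shows the forward pass of a small two-layer feed-forward network; the layer sizes and the ReLU activation are arbitrary illustrative choices.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    hidden = relu(W1 @ x + b1)    # hidden layer: linear map followed by a nonlinearity
    return W2 @ hidden + b2       # output layer

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # 4 inputs -> 16 hidden units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # 16 hidden units -> 1 output
print(forward(rng.normal(size=4), W1, b1, W2, b2))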

Nonequilibrium Green’s function method

A method for calculating quantum transport properties of a nanostructure coupled to two or more leads under bias. The electron density and conductance of the system under bias can be obtained by calculating the Green’s function of the nanostructure using self energies that account for the effect of the leads.
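
In the simplest two-terminal, non-interacting setting, the central quantities are the retarded Green's function of the nanostructure and the resulting Landauer-type transmission, sketched below; the two-site Hamiltonian and the wide-band-limit self-energies are made-up example inputs.

import numpy as np

def transmission(E, H_C, sigma_L, sigma_R, eta=1e-6):
    # retarded Green's function of the central region including lead self-energies
    n = H_C.shape[0]
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H_C - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # level broadening due to the left lead
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    # transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger]
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

# two-site chain with wide-band-limit self-energies (made-up numbers)
H_C = np.array([[0.0, -1.0], [-1.0, 0.0]])
sigma_L = np.diag([-0.25j, 0.0])
sigma_R = np.diag([0.0, -0.25j])
print(transmission(0.0, H_C, sigma_L, sigma_R))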

Path-integral Monte Carlo

A Markov-chain Monte Carlo method for quantum many-body systems. Simulation by this method is based on a Markov process in the space of (d+1)-dimensional configurations obtained via the path-integral representation of a d-dimensional quantum many-body system. The method is also called world-line Monte Carlo. While the weight of a given state can easily be computed for classical systems, its computational cost for quantum systems is exponentially high. Therefore, the mapping onto a (d+1)-dimensional classical problem through the Suzuki-Trotter decomposition or a high-temperature series expansion is necessary for reducing the cost to a manageable level. The method is applied to various strongly correlated lattice models such as the transverse-field Ising model, the Heisenberg model, and the Hubbard model, as well as to bosonic systems such as 4He. However, in applications to frustrated Heisenberg models and fermionic systems, the weight can be negative in general, making the method impractical. This problem, called the negative sign problem, is the most severe limitation of the method in studying quantum many-body systems.
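
As a simple illustration of the mapping, the following sketch evaluates the anisotropic couplings of the classical two-dimensional Ising model obtained from the one-dimensional transverse-field Ising chain by the Suzuki-Trotter decomposition (a standard textbook result; the parameter values are arbitrary).

import numpy as np

def effective_couplings(J, Gamma, beta, n_trotter):
    d_tau = beta / n_trotter                       # width of an imaginary-time slice
    K_space = d_tau * J                            # coupling within a time slice
    K_tau = -0.5 * np.log(np.tanh(d_tau * Gamma))  # coupling between adjacent slices
    return K_space, K_tau

print(effective_couplings(J=1.0, Gamma=0.5, beta=4.0, n_trotter=64))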

Phase field method

A method of handling a continuum model of an inhomogeneous system using field variables called phase fields. By introducing, in addition to the density and temperature fields, a continuous field variable that describes the state of the phase, it can be applied to the simulation of many physical phenomena accompanied by phase transitions (solidification, phase transformation, etc.). The evolution of the continuous field is described by a Ginzburg-Landau-type equation, and the parameters in the model are determined from the free energies of the phases. Since all physical quantities are written in terms of continuous fields, calculation codes are easy to write, and public or commercially available programs can also be used.
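
As a minimal illustration, the following sketch time-steps a time-dependent Ginzburg-Landau (Allen-Cahn) equation with a double-well free energy on a periodic two-dimensional grid using an explicit Euler scheme; all parameter values are arbitrary illustrative choices.

import numpy as np

def laplacian(phi, dx):
    # five-point Laplacian on a periodic grid
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
            + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2

rng = np.random.default_rng(0)
phi = 0.01 * rng.normal(size=(128, 128))   # small fluctuations around phi = 0
dx, dt, M, eps2, W = 1.0, 0.1, 1.0, 1.0, 1.0
for step in range(2000):
    dF_dphi = W * phi * (phi**2 - 1.0) - eps2 * laplacian(phi, dx)
    phi -= dt * M * dF_dphi                # relax toward the free-energy minima phi = +/- 1
print(phi.min(), phi.max())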

Plane wave basis

In electronic structure calculations, the wave function is often expanded as a linear combination of plane waves. Plane waves form an orthonormal basis set, so increasing the basis size (using more plane waves with shorter wavelengths) leads to a monotonic improvement in the reproduction of the wave function. However, a plane-wave basis is unsuitable for describing steeply varying wave functions near the atomic core, since disproportionately many plane waves are needed to expand steep functions. The (L)APW and pseudopotential methods were developed to circumvent this difficulty.
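
The monotonic improvement can be illustrated with a one-dimensional toy example: expanding a smooth periodic function in plane waves (Fourier components) and measuring the error as the number of retained plane waves grows. The target function below is made up for illustration.

import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = np.exp(np.cos(x))                     # a smooth periodic target function
c = np.fft.fft(f) / N                     # plane-wave (Fourier) coefficients
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers

for n_pw in (4, 8, 16, 32):
    c_cut = np.where(np.abs(k) <= n_pw, c, 0.0)   # keep only the lowest plane waves
    f_approx = np.fft.ifft(N * c_cut).real
    print(n_pw, np.max(np.abs(f - f_approx)))     # error shrinks as the basis grows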

Pseudopotential method

Core electrons play a very small part in chemical bond formation, so the computational load can be decreased, with little loss in accuracy, by replacing the core electrons with a pseudopotential that acts on the valence electrons. In this manner, only the relatively slowly varying valence wave functions need to be considered explicitly, which allows a smaller basis set when a plane-wave basis is used.