Neural Network Quantum States


Pure State Networks

Neural Quantum State

This is an implementation of a Neural Quantum State, a simple RBM ansatz as proposed by Carleo and Troyer (Science, 2017). Please note that you can choose the activation function (the third argument, f) to be either a softplus function af_softplus or a logcosh function af_logcosh. The best performance is found by combining logcosh with a spin-like Hilbert space.

NeuralQuantum.RBMType
RBM([T=Complex{STD_REAL_PREC}], N, α, f=af_softplus, [initW, initb])

Constructs a Restricted Boltzmann Machine to encode a wavefunction, with weights of type T, N input neurons and α⋅N hidden neurons. This is the Neural Quantum State (NQS) ansatz of [1].

N must match the size of the lattice.

By default the activation function is a softplus. You can also use logcosh by passing af_logcosh as an additional parameter.

The initial parameters of the neurons are initialized with a rescaled normal distribution of width 0.01 for the coupling matrix and 0.05 for the local biases. The default initializers can be overridden by specifying

initW=(dims...)->rescaled_normal(T, 0.01, dims...)
initb=(dims...)->rescaled_normal(T, 0.05, dims...)
inita=(dims...)->rescaled_normal(T, 0.01, dims...)

Refs: https://arxiv.org/abs/1606.02318

source
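The RBM ansatz above can be sketched numerically. The following NumPy toy (an illustration of the underlying formula, not the package's actual implementation) evaluates the log-amplitude log ψ(σ) = a·σ + Σⱼ log cosh(bⱼ + Wⱼ·σ) for a spin configuration, with weight widths matching the default initializers quoted above:

```python
import numpy as np

def log_psi(sigma, W, a, b):
    # log-amplitude of an RBM ansatz with logcosh activation:
    # log psi(sigma) = a.sigma + sum_j log cosh(b_j + sum_i W_ji sigma_i)
    theta = b + W @ sigma
    return a @ sigma + np.sum(np.log(np.cosh(theta)))

rng = np.random.default_rng(0)
N, alpha = 4, 2  # N spins, alpha*N hidden units

# small complex parameters, widths as in the default initializers above
W = 0.01 * (rng.standard_normal((alpha * N, N)) + 1j * rng.standard_normal((alpha * N, N)))
a = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
b = 0.05 * (rng.standard_normal(alpha * N) + 1j * rng.standard_normal(alpha * N))

sigma = np.array([1, -1, 1, -1])  # one spin configuration
amplitude = np.exp(log_psi(sigma, W, a, b))
```

With zero couplings and hidden biases the hidden layer contributes nothing (log cosh 0 = 0) and the amplitude reduces to exp(a·σ), which is a quick sanity check on the formula.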

Mixed State Networks

In general, given a Hilbert space $\mathcal{H}$ with basis $\vec\sigma\in\mathcal{H}$, a density matrix defined on this space lives in the space of bounded operators $\mathcal{B}(\mathcal{H})$, with (overcomplete) basis $(\sigma, \tilde{\sigma}) \in \mathcal{H}\otimes\mathcal{H}$. A network is a (high-dimensional, non-linear) function

\[\rho(\sigma, \tilde\sigma, W)\]

depending on the variational parameters $W$, and on the entries of the density matrix labelled by $(\sigma, \tilde\sigma)$.

Neural Density Matrix

Torlai and Melko, PRL 2018

A real-valued neural network to describe a positive-semidefinite matrix. Complex numbers are generated by duplicating the structure of the network, and using one to generate the modulus and the other to generate the phase. See the article for details.

NeuralQuantum.NDMType
NDM([T=STD_REAL_PREC], N, αₕ, αₐ, f=af_softplus, [initW, initb, inita])

Constructs a Neural Density Matrix with numerical precision T (defaults to Float32), N input neurons, N⋅αₕ hidden neurons and N⋅αₐ ancillary neurons. This network ensures that the density matrix is always positive semidefinite.

The number of input neurons N must match the size of the lattice.

By default the activation function is a softplus. You can also use logcosh by passing af_logcosh as an additional parameter.

The initial parameters of the neurons are initialized with a rescaled normal distribution of width 0.01 for the coupling matrix and 0.005 for the local biases. The default initializers can be overridden by specifying

initW=(dims...)->rescaled_normal(T, 0.01, dims...)
initb=(dims...)->rescaled_normal(T, 0.005, dims...)
inita=(dims...)->rescaled_normal(T, 0.005, dims...)

Refs: https://arxiv.org/abs/1801.09684 https://arxiv.org/abs/1902.10104

source
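The role of the ancillary neurons in guaranteeing positivity can be understood as a purification: tracing a joint system–ancilla amplitude over the ancilla always yields a positive-semidefinite, hermitian matrix. A minimal NumPy sketch with a random stand-in for the network output (the real NDM parametrizes modulus and phase with two separate real networks, which this toy does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_anc = 3, 2              # physical spins and ancillary units
dim, dim_a = 2**N, 2**n_anc

# psi(sigma, a): joint amplitudes over system x ancilla
# (random stand-in for the network output)
psi = rng.standard_normal((dim, dim_a)) + 1j * rng.standard_normal((dim, dim_a))

# tracing out the ancilla gives rho(sigma, sigma~) = sum_a psi(sigma,a) psi*(sigma~,a),
# which is hermitian and positive semidefinite by construction
rho = psi @ psi.conj().T
rho /= np.trace(rho)

eigvals = np.linalg.eigvalsh(rho)  # all >= 0 up to rounding
```

Whatever the joint amplitudes are, rho here is a Gram matrix, so its spectrum is non-negative; this is the structural reason the ancilla-based ansatz cannot produce an unphysical density matrix.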

RBM Density Matrix

A simple ansatz that does not preserve positivity, but which is hermitian.

NeuralQuantum.RBMSplitType
RBMSplit([T=Complex{STD_REAL_PREC}], N, α, [initW, initb, inita])

Constructs a Restricted Boltzmann Machine to encode a vectorised density matrix, with weights of type T (defaults to ComplexF32), 2N input neurons and 2N⋅α hidden neurons. This network does not ensure positive-definiteness of the density matrix.

N must match the size of the lattice.

The initial parameters of the neurons are initialized with a rescaled normal distribution of width 0.01 for the coupling matrix and 0.05 for the local biases. The default initializers can be overridden by specifying

initW=(dims...)->rescaled_normal(T, 0.01, dims...)
initb=(dims...)->rescaled_normal(T, 0.05, dims...)
inita=(dims...)->rescaled_normal(T, 0.01, dims...)

Refs: https://arxiv.org/abs/1902.07006

source
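Hermiticity without positivity can be illustrated with a toy vectorised RBM in which the row index σ and the column index σ̃ couple to the hidden layer through conjugate-paired weights, with real hidden biases. This is a hedged sketch of the idea only, not the package's actual RBMSplit parametrization:

```python
import numpy as np

def log_rho(sigma, sigma_t, W, b):
    # log rho(sigma, sigma~) = sum_j log cosh(b_j + W_j.sigma + conj(W_j).sigma~)
    # With real b and conjugate-paired couplings, swapping the two indices
    # conjugates every term, so rho(sigma~, sigma) = rho(sigma, sigma~)*.
    theta = b + W @ sigma + W.conj() @ sigma_t
    return np.sum(np.log(np.cosh(theta)))

rng = np.random.default_rng(2)
N = 3
M = 2 * N  # hidden units (2N·α with α = 1, as in the docstring above)
W = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
b = 0.05 * rng.standard_normal(M)  # real hidden biases

# build the full matrix over all 2^N spin configurations
configs = [np.array([1 if (k >> i) & 1 else -1 for i in range(N)])
           for k in range(2**N)]
rho = np.array([[np.exp(log_rho(s, st, W, b)) for st in configs]
                for s in configs])
```

The resulting matrix is hermitian by construction, but nothing constrains its eigenvalues to be non-negative, which is exactly the trade-off this ansatz makes relative to the NDM.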