Algorithms

An algorithm specifies how the loss function is minimised.

We support two types of algorithms: a trivial Gradient Descent and the more sophisticated Stochastic Reconfiguration method.

NeuralQuantum.SRType
SR([use_iterative=true, ϵ=0.001, λ0=100, b=0.95, λmin=1e-4, [precondition_type=sr_shift, algorithm=sr_qlp, precision=1e-4]])

Stochastic Reconfiguration preconditioner, which replaces the plain gradient with the natural gradient S^-1 ∇C. Using this algorithm leads to the computation of the S matrix alongside the gradient of the cost function ∇C. To compute the natural gradient S^-1 ∇C, either an iterative scheme (MINRES-QLP) or a direct inversion is used.
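As a usage sketch, the preconditioner is built with the constructor shown above; treating the parameters as keyword arguments is an assumption of this example and may differ between package versions.

```julia
using NeuralQuantum

# Hedged sketch: construct the SR preconditioner with the defaults listed in
# the signature above (keyword-argument form assumed for illustration).
algo = SR(use_iterative = true,    # solve S⁻¹∇C iteratively instead of inverting S
          ϵ = 0.001,               # diagonal shift used when precondition_type = sr_shift
          λ0 = 100, b = 0.95, λmin = 1e-4)  # schedule for the multiplicative shift
```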

The linear system S x = ∇C is by default solved with the MINRES-QLP iterative solver (sr_qlp). Alternatively, you can pass sr_minres, sr_cg or sr_lsq to use, respectively, the MINRES, conjugate-gradient or least-squares solvers from IterativeSolvers.jl. For small systems you can also solve it directly, by computing the pseudo-inverse (sr_diag), the Cholesky factorisation (sr_cholesky), the pivoted-Cholesky factorisation (sr_pivcholesky), or by using the automatic Julia solver, which usually involves a QR decomposition (sr_div). These non-iterative methods all come from the LinearAlgebra standard library.
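The sketch below is not NeuralQuantum code; it only illustrates, on a toy S and ∇C with LinearAlgebra and IterativeSolvers.jl, the kind of solve each option corresponds to.

```julia
using LinearAlgebra, IterativeSolvers

S  = [2.0 0.5; 0.5 1.0]   # toy symmetric positive-definite S matrix
∇C = [1.0, -0.5]          # toy gradient of the cost function

x_div  = S \ ∇C                       # automatic Julia solver, analogous to sr_div
x_diag = pinv(S) * ∇C                 # pseudo-inverse, analogous to sr_diag
x_chol = cholesky(Symmetric(S)) \ ∇C  # Cholesky factorisation, analogous to sr_cholesky
x_cg   = cg(S, ∇C)                    # conjugate gradient, analogous to sr_cg
x_mr   = minres(S, ∇C)                # MINRES, analogous to sr_minres
x_lsq  = lsqr(S, ∇C)                  # least-squares solver, analogous to sr_lsq
```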

If use_iterative=true, the inverse matrix S^-1 is never computed explicitly; instead, the iterative MINRES-QLP algorithm is used to compute the product S^-1 ∇C.
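The point of the iterative route is that S only ever needs to be applied to vectors, so neither its inverse nor a factorisation has to be formed. A rough analogue of this idea, built with LinearMaps.jl and the MINRES solver from IterativeSolvers.jl rather than the MINRES-QLP routine shipped with the package, is sketched below.

```julia
using LinearAlgebra, LinearMaps, IterativeSolvers

S = [2.0 0.5; 0.5 1.0]   # toy S matrix
F = [1.0, -0.5]          # toy gradient (force) vector

# Wrap the action v ↦ S*v as a matrix-free operator: only matrix-vector
# products are required, and S⁻¹ is never computed explicitly.
Sop = LinearMap(v -> S * v, length(F); issymmetric = true)

x = minres(Sop, F)   # iterative solve of S*x = F, analogous to use_iterative = true
```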

If precondition_type=sr_shift, then a uniform diagonal shift is added to S: S → S + ϵ*Identity.

If precondition_type=sr_multiplicative, then a multiplicative diagonal shift is added to S: S → S + max(λ0*b^n, λmin)*Diagonal(diag(S)), where n is the iteration number.
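For concreteness, the two regularisations amount to the following (illustrative Julia, not the package's internal code; the parameter values are the defaults from the signature above and n is an arbitrary iteration count):

```julia
using LinearAlgebra

S = [2.0 0.5; 0.5 1.0]                            # toy S matrix
ϵ, λ0, b, λmin, n = 0.001, 100.0, 0.95, 1e-4, 10  # defaults from the signature, n = iteration

# sr_shift: uniform diagonal shift
S_shift = S + ϵ * I

# sr_multiplicative: scale each diagonal entry, with a strength that decays
# geometrically in the iteration number n, down to the floor λmin
S_mult = S + max(λ0 * b^n, λmin) * Diagonal(diag(S))
```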
