# Samplers
In v0.2 there are two types of Monte Carlo samplers implemented:
## Exact Sampler

`ExactSampler`, to be used only for relatively small systems, constructs the full probability distribution and samples it directly. This has exponential memory cost.

The only required parameter for this sampler is the number of desired samples; an optional starting seed can also be specified.
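A minimal sketch of constructing this sampler, using the signature given in the reference below (the sample count and seed value are purely illustrative, and it is assumed an integer seed is accepted):

```julia
using NeuralQuantum

# Exact sampler drawing 1_000 configurations directly from the full pdf.
# Fixing the seed makes the run reproducible; omitting it uses a random seed.
sampler = ExactSampler(1_000; seed=1234)
```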
## Metropolis Sampler

`MetropolisSampler` can be used on arbitrarily large systems. It samples from a Markov chain whose states are updated according to the Metropolis-Hastings rule. In a nutshell, the Metropolis sampler starts from a state sampled from the system's Hilbert space, then proposes a new state according to some rule, and accepts or rejects this new state with a certain probability.

`MetropolisSampler` takes 3 parameters: `rule`, `chain_length` and `passes`. The second, `chain_length`, specifies how many samples the chain should contain, while `passes` specifies how many times `rule` must be applied between two returned samples. The effective chain length will actually be `chain_length * passes`, but only a fraction of the samples is returned, in order to reduce the correlation among them.

For ergodicity, `passes` should always be even.
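As a minimal sketch, a sampler could be constructed as follows, using the `LocalRule` transition rule described in the next section (the numeric values are only illustrative):

```julia
using NeuralQuantum

# Chains of 1_000 stored samples, applying the rule 4 times (passes)
# between two stored samples to reduce auto-correlation.
rule    = LocalRule()
sampler = MetropolisSampler(rule, 1_000, 4)
```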
## Metropolis Rules

### Local

`LocalRule` is a transition rule for Metropolis-Hastings sampling where at every step a random site is switched to another random state.

This rule does not preserve any particular property or symmetry of the Hilbert space or of the operator.
### Exchange

`ExchangeRule` is a transition rule for Metropolis-Hastings sampling where at every step a random pair of sites i, j is selected and their states are exchanged. The pairs of sites i, j considered are those coupled by the Hamiltonian, for example through tight-binding terms.

This rule preserves the total magnetization or particle number, and the related symmetries of the Hamiltonian.
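A sketch of constructing this rule; here `lattice` is a placeholder for the graph (or Hamiltonian) whose connected sites define the allowed exchanges, and how that object is built is not covered on this page:

```julia
# `lattice`: a graph or Hamiltonian describing which sites are connected;
# exchanges are proposed only between connected (coupled) sites.
rule    = ExchangeRule(lattice)
sampler = MetropolisSampler(rule, 1_000, 4)
```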
### Operator

`OperatorRule` is a transition rule for Metropolis-Hastings sampling where at every step the state is changed to any other state to which it is coupled by an operator (usually the Hamiltonian).

This rule preserves the symmetries of the operator.
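A sketch, assuming `H` is an operator (typically the Hamiltonian of the problem) constructed elsewhere:

```julia
# Moves are drawn among the states coupled to the current one by H.
rule    = OperatorRule(H)
sampler = MetropolisSampler(rule, 1_000, 4)
```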
## Reference

### Samplers
`NeuralQuantum.ExactSampler` — Type

`ExactSampler(n_samples; seed=rand)`

Constructs an exact sampler, which builds the full pdf of the quantum state and samples it exactly.

This sampler can only be used on `indexable` spaces, and should be used only for somewhat small systems (closed N<14, open N<7), as the computational cost increases exponentially with the number of sites.

The initial seed can be set by specifying `seed`.
`NeuralQuantum.MetropolisSampler` — Type

`MetropolisSampler(rule, chain_length, passes; burn=0, seed=rand)`

Constructs a Metropolis-Hastings sampler which samples Markov chains of length `chain_length + burn`, ignoring the first `burn` samples. Transition rules are specified by `rule`. To reduce auto-correlation, `passes` Metropolis-Hastings steps are performed for each returned sample (minimum 1).

Effectively, this means that the chain is actually `passes * (chain_length + burn)` long, but only one in every `passes` elements is stored and used to compute expectation values.

The initial seed can be set by specifying `seed`.
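A hypothetical usage sketch showing the burn-in keyword (all values are illustrative):

```julia
# Discard the first 100 samples as burn-in, then store 500 samples,
# performing 3 Metropolis-Hastings steps between stored samples.
sampler = MetropolisSampler(LocalRule(), 500, 3; burn=100, seed=1234)
```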
### Metropolis Rules
`NeuralQuantum.LocalRule` — Type

`LocalRule()`

Transition rule for Metropolis-Hastings sampling where at every step a random site is switched to another random state.
`NeuralQuantum.ExchangeRule` — Type

`ExchangeRule(graph)`

Transition rule for Metropolis-Hastings sampling where at every step a random pair of sites i, j is selected and their states are exchanged.

Pairs of sites are generated from `graph` (or operator), where all sites that are connected (or coupled by a 2-body term) are considered for switches.
`NeuralQuantum.OperatorRule` — Type

`OperatorRule(Ô)`

Transition rule for Metropolis-Hastings sampling where at every step a move is drawn from those allowed by the operator `Ô`.