Final final fixes (#234)
* fix DDMRG test

* Fix typo

* tweak test to stabilize
lkdvos authored Jan 22, 2025
1 parent c419625 commit 7df0cba
Showing 4 changed files with 11 additions and 18 deletions.
2 changes: 1 addition & 1 deletion docs/src/examples/classic2d/1.hard-hexagon/index.md
@@ -64,7 +64,7 @@ F = 0.8839037051703857 S = 0.546862287635581 ξ = 13.8496825856899

## The scaling hypothesis

-The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a [pollmann2009](@cite), which in turn allows to extract the central charge.
+The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a scaling hypothesis [pollmann2009](@cite), which in turn allows to extract the central charge.

First we need to know the entropy and correlation length at a bunch of different bond dimensions. Our approach will be to re-use the previous approximated dominant eigenvector, and then expanding its bond dimension and re-running VUMPS.
According to the scaling hypothesis we should have ``S \propto \frac{c}{6} log(ξ)``. Therefore we should find ``c`` using
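
For context (not part of the commit itself): the scaling relation above says the entanglement entropy grows as S ≈ (c/6) log ξ, so fitting S against log ξ over several bond dimensions gives c/6 as the slope. A minimal Julia sketch of that fit, with placeholder data and illustrative variable names (none of these appear in the example):

# Hypothetical entropies S and correlation lengths ξ, collected at a few
# increasing bond dimensions (the numbers below are placeholders).
S_values = [0.42, 0.51, 0.58, 0.64]
ξ_values = [5.3, 9.1, 14.7, 22.0]

# Scaling hypothesis: S ≈ (c / 6) * log(ξ), i.e. a straight line in log(ξ)
# with slope c / 6. Solve the least-squares fit with the backslash operator.
x = log.(ξ_values)
A = [x ones(length(x))]           # design matrix: [slope, intercept]
slope, intercept = A \ S_values   # least-squares solution
c = 6 * slope                     # estimated central charge

println("estimated central charge c ≈ $c")
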
2 changes: 1 addition & 1 deletion docs/src/examples/classic2d/1.hard-hexagon/main.ipynb
@@ -86,7 +86,7 @@
"source": [
"## The scaling hypothesis\n",
"\n",
"The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a [pollmann2009](@cite), which in turn allows to extract the central charge.\n",
"The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a scaling hypothesis [pollmann2009](@cite), which in turn allows to extract the central charge.\n",
"\n",
"First we need to know the entropy and correlation length at a bunch of different bond dimensions. Our approach will be to re-use the previous approximated dominant eigenvector, and then expanding its bond dimension and re-running VUMPS.\n",
"According to the scaling hypothesis we should have $S \\propto \\frac{c}{6} log(ξ)$. Therefore we should find $c$ using"
2 changes: 1 addition & 1 deletion examples/classic2d/1.hard-hexagon/main.jl
@@ -51,7 +51,7 @@ println("F = $F\tS = $S\tξ = $ξ")
md"""
## The scaling hypothesis
-The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a [pollmann2009](@cite), which in turn allows to extract the central charge.
+The dominant eigenvector is of course only an approximation. The finite bond dimension enforces a finite correlation length, which effectively introduces a length scale in the system. This can be exploited to formulate a scaling hypothesis [pollmann2009](@cite), which in turn allows to extract the central charge.
First we need to know the entropy and correlation length at a bunch of different bond dimensions. Our approach will be to re-use the previous approximated dominant eigenvector, and then expanding its bond dimension and re-running VUMPS.
According to the scaling hypothesis we should have ``S \propto \frac{c}{6} log(ξ)``. Therefore we should find ``c`` using
23 changes: 8 additions & 15 deletions test/algorithms.jl
@@ -596,27 +596,20 @@ end
end

@testset "Dynamical DMRG" verbose = true begin
-ham = force_planar(-1.0 * transverse_field_ising(; g=-4.0))
-gs, = find_groundstate(InfiniteMPS([ℙ^2], [ℙ^10]), ham, VUMPS(; verbosity=0))
-window = WindowMPS(gs, copy.([gs.AC[1]; [gs.AR[i] for i in 2:10]]), gs)
-
-szd = force_planar(S_z())
-@test [expectation_value(gs, i => szd) for i in 1:length(window)] ≈
-      [expectation_value(window, i => szd) for i in 1:length(window)] atol = 1e-10
-
-openham = open_boundary_conditions(ham, length(window.window))
-polepos = expectation_value(window.window, openham,
-                            environments(window.window, openham))
+L = 10
+H = force_planar(-transverse_field_ising(; L, g=-4))
+gs, = find_groundstate(FiniteMPS(L, ℙ^2, ℙ^10), H; verbosity=verbosity_conv)
+E₀ = expectation_value(gs, H)

-vals = (-0.5:0.2:0.5) .+ polepos
+vals = (-0.5:0.2:0.5) .+ E₀
eta = 0.3im

-predicted = [1 / (v + eta - polepos) for v in vals]
+predicted = [1 / (v + eta - E₀) for v in vals]

@testset "Flavour $f" for f in (Jeckelmann(), NaiveInvert())
alg = DynamicalDMRG(; flavour=f, verbosity=0, tol=1e-8)
data = map(vals) do v
-result, = propagator(window.window, v + eta, openham, alg)
+result, = propagator(gs, v + eta, H, alg)
return result
end
@test data ≈ predicted atol = 1e-8
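
(For context, not part of the diff: because gs is an eigenstate of H with energy E₀, the propagator ⟨gs| (v + eta − H)⁻¹ |gs⟩ computed by DynamicalDMRG reduces analytically to 1/(v + eta − E₀), which is exactly the predicted reference the results are compared against.)
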
@@ -696,7 +689,7 @@ end
H = force_planar(repeat(transverse_field_ising(; g=4), 2))

dt = 1e-3
-sW1 = make_time_mpo(H, dt, TaylorCluster(; N=3))
+sW1 = make_time_mpo(H, dt, TaylorCluster(; N=3, compression=true, extension=true))
sW2 = make_time_mpo(H, dt, WII())
W1 = MPSKit.DenseMPO(sW1)
W2 = MPSKit.DenseMPO(sW2)

2 comments on commit 7df0cba


@lkdvos (Member, Author) commented on 7df0cba, Jan 22, 2025


@JuliaRegistrator register

Release notes:

v0.12 of MPSKit brings a wide variety of changes, most notably related to the use of BlockTensorKit for the MPOs and environments. This enables a range of simplifications behind the scenes, which should both simplify maintenance and lower the bar for contributions and future developments.
Various things have been refactored to make use of this new framework, and the documentation pages have seen some improvements (but are still WIP).
As such, the main breaking changes concern the internals of the operator structs (FiniteMPO, InfiniteMPO, FiniteMPOHamiltonian and InfiniteMPOHamiltonian), as well as the internal structure of the environments. Furthermore, the fields of some algorithms have been updated to streamline the interface a bit.
Additionally, this release ensures compatibility with the new versions of TensorKit and TensorOperations, so some performance gains can be expected as well.
Finally, the parallelization interface has been reworked to use OhMyThreads.jl, which should make it easier to use and a bit more flexible.

As a warning: the automatic recomputation of infinite environments by the environment managers has been disabled, so users who relied on this feature now have to call recalculate! manually.
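
A minimal sketch of what that manual call might look like in v0.12 (the model constructor comes from MPSKitModels, and the exact recalculate! signature is an assumption here, so please check it against the docs; all names are illustrative):

using MPSKit, MPSKitModels, TensorKit

# Placeholder model and state; any InfiniteMPS with a matching Hamiltonian works.
H = transverse_field_ising(; g=4.0)
ψ = InfiniteMPS([ℂ^2], [ℂ^10])

envs = environments(ψ, H)   # build the (infinite) environments once

# After ψ (or H) is modified, the environments are no longer refreshed
# automatically in v0.12, so recompute them explicitly:
recalculate!(envs, ψ, H)    # assumed call signature -- consult the docs/source
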

@JuliaRegistrator commented

Registration pull request created: JuliaRegistries/General/123479

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.12.0 -m "<description of version>" 7df0cbaf21aa5e157e546dc1c3365dbc35f502c1
git push origin v0.12.0
