NDTensors.jl is the more traditional tensor algebra package underlying ITensors.jl. It defines an n-dimensional Tensor type that can have a variety of storage data types for various sparse and constrained tensors, such as dense, block sparse, and diagonal, with more planned, such as tensors with isometric/unitary constraints. It implements high-performance operations between mixtures of different tensor types, such as addition, permutation, matrix factorization, and contraction. Additionally, it supports multithreaded block sparse contraction.
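As an illustration of the kind of operation NDTensors.jl accelerates, here is a minimal sketch in plain Julia (not NDTensors.jl's actual internals) of how a tensor contraction reduces to a permutation, a reshape, and a single matrix multiplication, the standard strategy for reaching BLAS-level performance:

```julia
using LinearAlgebra

# Contract C[i,k] = Σ_{j,a} A[i,j,a] * B[a,j,k] by permuting and reshaping
# so the contraction becomes one matrix multiplication (an illustration of
# the common strategy in tensor libraries, not NDTensors.jl code).
A = rand(2, 3, 4)                                 # dimensions (i, j, a)
B = rand(4, 3, 5)                                 # dimensions (a, j, k)

Amat = reshape(A, 2, 12)                          # (i) × (j*a); j, a already adjacent
Bmat = reshape(permutedims(B, (2, 1, 3)), 12, 5)  # (j*a) × (k), matching j-fastest order
C = Amat * Bmat                                   # C[i, k]

# Check against an explicit loop contraction.
Cref = zeros(2, 5)
for i in 1:2, j in 1:3, a in 1:4, k in 1:5
    Cref[i, k] += A[i, j, a] * B[a, j, k]
end
@assert C ≈ Cref
```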
Tensors with isometric/unitary constraints: A special tensor storage type representing a tensor with isometric/unitary constraints would be useful in a variety of applications, such as isometrically constrained gradient optimization and automated simplification of tensor network contractions involving isometric tensors. Please reach out if you are interested in helping us implement this feature.
Lazy complex conjugation: Adding support for lazy complex conjugation would help improve performance and memory usage. For example, tensor contractions involving complex conjugation could be mapped directly to matrix multiplication calls to BLAS without allocating temporary complex conjugated tensors.
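The idea can be sketched in plain Julia (an illustration of the concept, not the proposed NDTensors.jl implementation): BLAS can apply a conjugate transpose on the fly via a flag, avoiding a temporary conjugated copy:

```julia
using LinearAlgebra

A = rand(ComplexF64, 3, 3)
B = rand(ComplexF64, 3, 3)

# Eager version: allocates an explicit conjugate-transposed copy of A.
C_eager = conj.(permutedims(A)) * B

# Lazy version: the 'C' flag tells BLAS to apply the conjugate transpose
# during the multiplication, with no temporary tensor allocated.
C_lazy = BLAS.gemm('C', 'N', one(ComplexF64), A, B)

@assert C_eager ≈ C_lazy
```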
ITensorNetworks.jl is the next-generation general tensor network library built on top of ITensors.jl. It will generalize the MPS solvers like DMRG, TDVP, and linear solving, as well as the tools for gate evolution, that are available in ITensors.jl and ITensorTDVP.jl to tree tensor networks (TTN) and even more general tensor networks. Stay tuned for more developments!
ITensorGLMakie.jl is a package I wrote for easily making interactive visualizations of tensor networks written with ITensors.jl, based on GraphMakie.jl and Makie.jl. It supports clicking and dragging the nodes/tensors of the tensor network.
More interactive customization: Currently, ITensorGLMakie.jl only supports simple interactivity, such as clicking and dragging the nodes/tensors of the tensor network diagram. We would like to add more interactivity, such as interactively selecting the color, shape, and labels of the nodes/tensors.
Multigraph visualization: ITensorGLMakie.jl currently visualizes tensors with multiple shared indices using a single edge and a label with information about the multiple edges. It would be helpful to directly visualize the multiple edges/indices. GraphMakie.jl, the package we use as a backend for ITensorGLMakie.jl, implicitly supports visualizing multigraphs, so support for this should be straightforward to add.
ITensorUnicodePlots.jl is an alternative backend for visualizing networks of ITensors as text output, based on UnicodePlots.jl.
ITensorGaussianMPS.jl is a Julia package I wrote for transforming free fermion states into tensor network states, based on an algorithm I developed during my Ph.D. with Steven White.
ITensorInfiniteMPS.jl is a Julia package I wrote for extending the functionality of ITensors.jl to infinite MPS.
Many packages extending the functionality of ITensors.jl are in development, such as packages for performing network-level contractions and gradient optimizations of tensor networks, packages for interfacing with quantum chemistry libraries like PySCF, and more. Stay tuned and keep an eye on my Github page, the ITensor Github organization, and the ITensor website!
PastaQ.jl is a package I co-develop with Giacomo Torlai for simulating and analyzing quantum computers. It includes noisy state and process simulation with customizable noise models, state-of-the-art algorithms for tomography, and ongoing work using automatic differentiation to optimize quantum circuits for implementing algorithms like the variational quantum eigensolver (VQE) and optimal control.
ITensor is a C++ library for developing and performing tensor network calculations. I was the lead developer of C++ ITensor Version 3, the latest major release of the library, which brought many improvements to the interface and to the performance of block sparse calculations, including the introduction of block sparse multithreading with OpenMP.
I co-developed Observers.jl with Giacomo Torlai. It is a package for conveniently specifying a set of measurements you want to make inside of an iterative method. It is currently being used in PastaQ.jl inside iterative optimization methods like quantum state and process tomography as well as quantum circuit evolution, and we plan to make use of it in ITensors.jl.
Using Observers.jl in ITensors.jl: We are interested in using Observers.jl inside iterative methods in ITensors.jl, like the density matrix renormalization group (DMRG) eigensolver as well as our circuit simulation functionality (apply). Please reach out to me if you are interested in helping out with this! It would be a good project for a new user trying to learn about DMRG, Julia, and ITensors.jl.
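The observer pattern is easy to sketch in plain Julia (a hypothetical toy version, not the actual Observers.jl API): measurement functions are registered up front and called once per iteration of an iterative method, accumulating results as it runs:

```julia
# Toy observer (hypothetical names; not the Observers.jl API): each
# registered function is called with the iteration's state as keyword
# arguments, and its results are accumulated by name.
struct SimpleObserver
    measurements::Dict{String,Function}
    results::Dict{String,Vector{Any}}
end
SimpleObserver(pairs::Pair...) =
    SimpleObserver(Dict(pairs...), Dict(name => Any[] for (name, _) in pairs))

function update!(obs::SimpleObserver; kwargs...)
    for (name, f) in obs.measurements
        push!(obs.results[name], f(; kwargs...))
    end
end

# Measurement functions pick out the state they care about and ignore the rest.
measure_energy(; energy, kwargs...) = energy
measure_sweep(; sweep, kwargs...) = sweep

# Stand-in for an iterative method like DMRG sweeps or circuit evolution.
obs = SimpleObserver("energy" => measure_energy, "sweep" => measure_sweep)
for sweep in 1:3
    energy = -1.0 / sweep                 # fake convergence data
    update!(obs; sweep = sweep, energy = energy)
end

@assert obs.results["sweep"] == [1, 2, 3]
@assert obs.results["energy"] == [-1.0, -0.5, -1.0 / 3]
```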
SerializedElementArrays.jl is a package I wrote that provides a new Julia Array type (a SerializedElementArray) whose elements are saved to disk. This can help in cases where you have collections of large contiguous data (like an Array of very large Arrays) whose elements individually fit in memory but collectively do not. This is used for the write-to-disk feature in ITensors.jl.
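The core idea can be sketched with the Serialization standard library (a toy illustration, not the actual SerializedElementArrays.jl implementation): each element lives in its own file on disk and is deserialized on access, so only one element needs to be in memory at a time:

```julia
using Serialization

# Toy disk-backed vector (illustrative only): elements are serialized to
# individual files and loaded lazily on indexing.
struct DiskVector
    dir::String
    len::Int
end

function DiskVector(elements::Vector)
    dir = mktempdir()
    for (n, el) in enumerate(elements)
        serialize(joinpath(dir, "$n.bin"), el)
    end
    return DiskVector(dir, length(elements))
end

Base.length(v::DiskVector) = v.len
Base.getindex(v::DiskVector, n::Int) = deserialize(joinpath(v.dir, "$n.bin"))

# Each matrix fits in memory on its own; the DiskVector avoids holding
# all of them in memory at once.
v = DiskVector([rand(2, 2) for _ in 1:3])
@assert length(v) == 3
@assert size(v[1]) == (2, 2)
```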
We recently developed a new method for gauging tensor networks based on belief propagation, and applied it to simulate the kicked transverse field Ising model on a heavy-hex lattice, a model that was recently emulated on IBM's Eagle quantum processor.
Steven White and I developed an algorithm for obtaining a compact quantum circuit of local gates for a free fermion state. This leads to a straightforward way to construct tensor network states like matrix product states (MPS), tree tensor networks (TTN), and the multi-scale entanglement renormalization ansatz (MERA) for free fermion states.
We have recently applied this method to develop next-generation impurity solvers, both by disentangling the non-interacting bath and by representing the influence matrix of the non-interacting bath as a matrix product state (MPS).
In collaboration with colleagues at the University of Ghent and the University of Vienna, I helped develop a new algorithm for finding ground states of quasi-1D quantum systems directly in the thermodynamic limit, which is faster than state-of-the-art alternatives. The algorithm is called the variational uniform matrix product state (VUMPS) algorithm.
In collaboration with colleagues at the CCQ, I worked on extending the VUMPS algorithm to solve for ground states of infinite tree tensor network states, such as states on the Bethe lattice, in an algorithm we called the variational uniform tree state (VUTS) algorithm.
In collaboration with colleagues at the University of Ghent and the University of Vienna, I worked on extending the VUMPS algorithm to the problem of contracting infinite 2D tensor networks and showed that in many cases it outperforms the standard method, the corner transfer matrix renormalization group (CTMRG) algorithm. In addition, I worked on a fixed point formulation of CTMRG, which we called the fixed point corner method (FPCM) and also showed to be faster than the original CTMRG algorithm.
With colleagues from CCQ and other institutions, I helped develop a method for decreasing the average sign of a wavefunction by optimizing a quantum circuit ansatz with automatic differentiation. This could have implications for improving the performance of Monte Carlo algorithms.
Tindall, Fishman. Gauging tensor networks with belief propagation, 2023.
Tindall, Fishman, Stoudenmire, Sels. Efficient tensor network simulation of IBM's Eagle kicked Ising experiment, 2023.
Kloss, Thoenniss, Sonner, Lerose, Fishman, Stoudenmire, Parcollet, Georges, Abanin. Equilibrium Quantum Impurity Problems via Matrix Product State Encoding of the Retarded Action, 2023.
Wu, Fishman, Pixley, Stoudenmire. Disentangling Interacting Systems with Fermionic Gaussian Circuits: Application to the Single Impurity Anderson Model, 2022.
Lunts, George, Stoudenmire, Fishman. The Hubbard model on the Bethe lattice via variational uniform tree states: metal-insulator transition and a Fermi liquid, 2020.
Fishman, White, Stoudenmire. The ITensor Software Library for Tensor Network Calculations, 2020.
Torlai, Carrasquilla, Fishman, Melko, Fisher. Wavefunction positivization via automatic differentiation, 2019.
Fishman, Vanderstraeten, Zauner-Stauber, Haegeman, Verstraete. Faster Methods for Contracting Infinite 2D Tensor Networks, 2017.
Zauner-Stauber, Vanderstraeten, Fishman, Verstraete, Haegeman. Variational optimization algorithms for uniform matrix product states, 2017.
Fishman, White. Compression of Correlation Matrices and an Efficient Method for Forming Matrix Product States of Fermionic Gaussian States, 2015.