Matthew Fishman

Projects

  1. ITensors.jl
    1. NDTensors.jl
    2. ITensorNetworks.jl
    3. ITensorGLMakie.jl
    4. ITensorUnicodePlots.jl
    5. ITensorGaussianMPS.jl
    6. ITensorInfiniteMPS.jl
    7. Other extensions to ITensors.jl
  2. PastaQ.jl
  3. ITensor (C++)
  4. Miscellaneous Julia Packages
    1. Observers.jl
    2. SerializedElementArrays.jl
  5. Tensor network algorithm development
    1. Gauging tensor networks with belief propagation and applications to circuit simulation
    2. Gaussian circuits and impurity solvers
    3. Variational uniform matrix product states and tree tensor networks
    4. Improved contraction methods for infinite 2D tensor networks
    5. Easing the sign problem with variational circuits and automatic differentiation
  6. References

Coding Projects

ITensors.jl

I am the lead developer of ITensors.jl, a full port of the C++ ITensor library to the Julia language. I co-develop it with Miles Stoudenmire; it is a library for easily developing and running high performance tensor network algorithms, with applications to quantum physics, quantum computing, chemistry, and data science/machine learning.
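
As a quick taste of the interface, here is a minimal example of constructing and contracting ITensors (the index names and dimensions are arbitrary choices for illustration):

```julia
using ITensors

# Define indices with dimensions and tags
i = Index(2, "i")
j = Index(3, "j")
k = Index(4, "k")

# Construct random tensors that share the index j
A = randomITensor(i, j)
B = randomITensor(j, k)

# `*` automatically contracts over the shared index j,
# returning an ITensor with indices (i, k)
C = A * B
```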

itensor.org

Source code

Fishman et al. (2020)

ITensors.jl news

Automatic differentiation: Derivatives of basic tensor network operations like tensor contraction and addition, as well as many MPO/MPS operations like gate evolution, are now supported through automatic differentiation. Reverse mode automatic differentiation (AD) primitives are defined using ChainRulesCore.jl, and derivatives can be computed with AD libraries like Zygote.jl (and eventually next-generation AD libraries like Diffractor.jl).
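
For example, here is a minimal sketch of differentiating through a tensor contraction with Zygote.jl (the tensors and function are made up for illustration):

```julia
using ITensors
using Zygote

i = Index(2, "i")
A = randomITensor(i, i')
B = randomITensor(i', i)

# Scalar-valued function of A: contract with B and extract the scalar
f(x) = (x * B)[]

# Reverse mode gradient of f with respect to the ITensor A
dA = gradient(f, A)[1]
```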

Lazy Operator Algebra (Ops) system: ITensors.jl now supports a general lazy operator algebra system called Ops. This system can represent arbitrary sums, products, and other algebraic manipulations of quantum operators (or more generally, linear transformations in tensor product spaces). It can be used to represent local Hamiltonians and quantum circuits, and supports useful algebraic operations like expanding products of sums of operators and representing Trotter-Suzuki decompositions, which transform exponentials of sums of operators into products of exponentials of operators. Additionally, operator expressions can be converted into explicit tensor representations for use in tensor network algorithms or to perform diagonalizations.
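
Here is a small sketch of the kind of expressions Ops supports (the operator names and sites are illustrative, and the conversion interface may evolve):

```julia
using ITensors
using ITensors.Ops: Op

# Lazy algebraic expression: an Ising interaction plus a transverse field,
# stored symbolically rather than as an explicit matrix
H = Op("Z", 1) * Op("Z", 2) + Op("X", 1)

# Convert the lazy expression into an explicit ITensor on qubit sites
s = siteinds("Qubit", 2)
T = ITensor(H, s)
```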

The Ops system needs testing, documentation, and automatic differentiation support. Please reach out if you are interested in helping out!

ITensors.jl wish list

Automatic differentiation: Our goal is to make the entire ITensors.jl package differentiable, but currently only basic operations like tensor contraction, addition, and index manipulation are differentiable. Making this happen will require more reverse mode automatic differentiation rules (ChainRulesCore.jl rrules) and/or modifications of internal ITensors.jl functions to avoid non-differentiable code patterns like mutation.

More block sparse multithreading: We currently support multithreaded block sparse tensor contractions. We would like to add multithreading support to other block sparse operations, such as addition, permutation, and decomposition. Please reach out if you are interested in contributing code for that!

More block sparse factorizations: We currently support a core set of block sparse matrix factorizations. We would like to extend support to a wider variety of factorizations and improve the performance of the existing ones. Please reach out if you are interested in contributing code for that!

ITensor slicing: We have good support for slicing dense tensors at the level of NDTensors.Tensor, as well as slicing a single block of a block sparse NDTensors.Tensor. However, it would be helpful to have high level support for slicing at the level of ITensors, which requires an interface for slicing Index objects; a slice could be represented by an object like i => 2:3 of type Pair{Index{Int64},UnitRange{Int64}}. Additionally, it would be helpful to have more robust support for general slices of block sparse tensors, including slices across multiple blocks and within blocks. A hypothetical sketch of the proposed syntax is shown below.
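
A hypothetical sketch of what that interface could look like (none of this is implemented yet; the syntax is the proposal described above):

```julia
using ITensors

i = Index(4, "i")
j = Index(4, "j")
A = randomITensor(i, j)

# Proposed (hypothetical) slicing syntax: pair each Index with a range,
# i.e. objects like i => 2:3 of type Pair{Index{Int64},UnitRange{Int64}}
B = A[i => 2:3, j => 1:2]
```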


NDTensors.jl

NDTensors.jl is the more traditional tensor algebra package underlying ITensors.jl. It defines an n-dimensional tensor type, Tensor, which can have a variety of storage types for various sparse and constrained tensors, such as dense, block sparse, and diagonal, with more planned, such as tensors with isometric/unitary constraints. It implements high performance operations between mixtures of different tensor types, such as addition, permutation, matrix factorization, and contraction, and additionally supports multithreaded block sparse contraction.
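
For example, a dense Tensor can be built from a storage object and dimensions; a minimal sketch (constructor signatures may differ slightly between NDTensors.jl versions):

```julia
using NDTensors

# A 2x3 dense tensor wrapping a flat data vector
T = Tensor(Dense(randn(6)), (2, 3))

# Tensors support familiar array-like operations
x = T[1, 2]
```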

Source code

NDTensors.jl wish list

Tensors with isometric/unitary constraints: A special tensor storage type representing a tensor with isometric/unitary constraints would be useful in a variety of applications, such as isometrically constrained gradient optimization and automated simplification of tensor network contractions involving isometric tensors. Please reach out if you are interested in helping us implement this feature.

Lazy complex conjugation: Adding support for lazy complex conjugation would improve performance and memory usage. For example, tensor contractions involving complex conjugation could be mapped directly to BLAS matrix multiplication calls without allocating temporary complex conjugated tensors.
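
A toy sketch of the wrapper pattern this would use (this type is hypothetical and not part of NDTensors.jl):

```julia
# Hypothetical lazy conjugation wrapper: `lazy_conj` copies no data, and the
# conjugation flag could be forwarded to BLAS at contraction time (e.g. as a
# 'C' transpose code) instead of allocating a conjugated copy up front.
struct LazyConj{T<:AbstractArray}
  parent::T
end

lazy_conj(A::AbstractArray) = LazyConj(A)
lazy_conj(A::LazyConj) = A.parent  # conj(conj(A)) == A, still no copy

# Materialize an explicit conjugated array only when requested
materialize(A::LazyConj) = conj(A.parent)
```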


ITensorNetworks.jl

ITensorNetworks.jl is the next-generation general tensor network library built on top of ITensors.jl. It will generalize the MPS solvers available in ITensors.jl and ITensorTDVP.jl, like DMRG, TDVP, and linear solving, as well as tools for gate evolution, to tree tensor networks (TTN) and even more general tensor networks. Stay tuned for more developments!

Source code


ITensorGLMakie.jl

ITensorGLMakie.jl is a package I wrote for easily making interactive visualizations of tensor networks written with ITensors.jl, based on GraphMakie.jl and Makie.jl. It supports clicking and dragging the nodes/tensors of the tensor network.
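
For example, a minimal sketch using the @visualize macro (index names and dimensions are arbitrary):

```julia
using ITensors
using ITensorGLMakie

i = Index(2, "i")
j = Index(2, "j")
k = Index(2, "k")
A = randomITensor(i, j)
B = randomITensor(j, k)

# Opens an interactive window visualizing the network for the contraction A * B
@visualize A * B
```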

Source code


ITensorGLMakie.jl wish list

More interactive customization: Currently, ITensorGLMakie.jl only supports simple interactivity, such as clicking and dragging the nodes/tensors of the tensor network diagram. We would like to add more interactivity, such as interactively selecting the color, shape, and labels of the nodes/tensors.

Multigraph visualization: ITensorGLMakie.jl currently visualizes tensors with multiple shared indices using a single edge, with a label carrying information about the combined edges. It would be helpful to directly visualize the multiple edges/indices. GraphMakie.jl, the package we use as a backend for ITensorGLMakie.jl, implicitly supports visualizing multigraphs, so support for this should be straightforward to add.

ITensorUnicodePlots.jl

ITensorUnicodePlots.jl is an alternative backend for visualizing networks of ITensors as text output, based on UnicodePlots.jl.

Source code

ITensorGaussianMPS.jl

ITensorGaussianMPS.jl is a Julia package I wrote for transforming free fermion states into tensor network states, based on an algorithm I developed during my Ph.D. with Steven White.
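
Here is a minimal sketch of the workflow, assuming the correlation_matrix_to_mps interface (the tight-binding model is just an example input):

```julia
using ITensors
using ITensorGaussianMPS
using LinearAlgebra

N = 10
# Free fermion (tight-binding) Hamiltonian with nearest-neighbor hopping
h = SymTridiagonal(zeros(N), fill(-1.0, N - 1))
_, U = eigen(h)
Φ = U[:, 1:(N ÷ 2)]  # occupy the lowest N/2 orbitals (half filling)
Λ = Φ * Φ'           # ground state correlation matrix ⟨c†_i c_j⟩

# Compress the free fermion state into an MPS
s = siteinds("Fermion", N; conserve_qns=true)
ψ = correlation_matrix_to_mps(s, Λ)
```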

Source code

Fishman and White (2015)

ITensorInfiniteMPS.jl

ITensorInfiniteMPS.jl is a Julia package I wrote for extending the functionality of ITensors.jl to infinite MPS.

Source code

Zauner-Stauber et al. (2017)

Other extensions to ITensors.jl

Many packages are in development that extend the functionality of ITensors.jl, such as packages for performing network level contractions and gradient optimizations of tensor networks, packages for interfacing with quantum chemistry libraries like PySCF, and more. Stay tuned and keep an eye on my GitHub page, the ITensor GitHub organization, and the ITensor website!

PastaQ.jl

PastaQ.jl is a package I co-develop with Giacomo Torlai for simulating and analyzing quantum computers. It includes noisy state and process simulation with customizable noise models and state-of-the-art algorithms for tomography, as well as ongoing work using automatic differentiation to optimize quantum circuits for algorithms like the variational quantum eigensolver (VQE) and optimal control.
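
For example, preparing and simulating a small circuit (a minimal sketch; the gate names follow PastaQ's conventions):

```julia
using PastaQ

# A 2-qubit circuit preparing a Bell state
circuit = [("H", 1), ("CX", (1, 2))]

# Simulate the circuit, returning the output state as an MPS
ψ = runcircuit(circuit)
```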

pastaq.org

Source code

ITensor (C++)

ITensor is a C++ library for developing and performing tensor network calculations. I was the lead developer of C++ ITensor Version 3, the latest major release of the library, which included many improvements to the interface and to the performance of block sparse calculations, including the introduction of block sparse multithreading with OpenMP.

itensor.org

Source code

Miscellaneous Julia Packages

Observers.jl

I co-developed Observers.jl with Giacomo Torlai. It is a package for conveniently specifying a set of measurements you want to make inside of an iterative method. It is currently used in PastaQ.jl inside iterative optimization methods like quantum state and process tomography, as well as in quantum circuit evolution, and we plan to make use of it in ITensors.jl.
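
Here is a minimal sketch of the interface (the measurement functions and keyword names are made up for illustration):

```julia
using Observers

# Measurement functions are matched by keyword to the arguments of `update!`
iter(; iteration, x) = iteration
val(; iteration, x) = x

obs = observer("iteration" => iter, "x" => val)

let x = 1.0
  for n in 1:5
    x /= 2
    update!(obs; iteration=n, x=x)  # records one row of results per call
  end
end
```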

Using Observers.jl in ITensors.jl: We are interested in using Observers.jl inside iterative methods in ITensors.jl like the density matrix renormalization group (DMRG) eigensolver as well as our circuit simulation functionality (apply). Please reach out to me if you are interested in helping out with this! It would be a good project for a new user trying to learn about DMRG, Julia, and ITensors.jl.

Source code

SerializedElementArrays.jl

SerializedElementArrays.jl is a package I wrote that provides a new Julia Array type (a SerializedElementArray) whose elements are saved to disk. This can help in cases where you have collections of large contiguous data (like an Array of very large Arrays) which individually fit in memory but collectively do not. This is used for the write-to-disk feature in ITensors.jl.
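
A minimal sketch of the interface (using the disk function to convert a standard Array into one whose elements are serialized to disk):

```julia
using SerializedElementArrays

# A collection of large matrices that may not all fit in memory at once
x = [randn(100, 100) for _ in 1:10]

# Convert to an array whose elements are stored on disk
d = SerializedElementArrays.disk(x)

# Elements are deserialized from disk on access
d[1]
```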

Tensor network algorithm development

Gauging tensor networks with belief propagation and applications to circuit simulation

We recently developed a new method for gauging tensor networks based on belief propagation, and applied it to simulate the kicked transverse field Ising model on a heavy-hex lattice, a model that was recently emulated on IBM's Eagle quantum processor.

Tindall and Fishman (2023)

Tindall et al. (2023)

Gaussian circuits and impurity solvers

Steven White and I developed an algorithm for obtaining a compact quantum circuit of local gates for a free fermion state. This leads to a straightforward way to construct tensor network states like matrix product states (MPS), tree tensor networks (TTN), and the multi-scale entanglement renormalization ansatz (MERA) for free fermion states.

Fishman and White (2015)

We have recently applied this method to develop next-generation impurity solvers, both by disentangling the non-interacting bath and by representing the influence matrix of the non-interacting bath as a matrix product state (MPS).

Wu et al. (2022)

Kloss et al. (2023)

Variational uniform matrix product states and tree tensor networks

In collaboration with colleagues at Ghent University and the University of Vienna, I helped develop the variational uniform matrix product state (VUMPS) algorithm, a method for finding ground states of quasi-1D quantum systems directly in the thermodynamic limit that is faster than state-of-the-art alternatives.

Zauner-Stauber et al. (2017)

In collaboration with colleagues at the CCQ, I worked on extending the VUMPS algorithm to solve for ground states in the form of infinite tree tensor network states, such as states on Bethe lattices, an algorithm we called the variational uniform tree state (VUTS) algorithm.

Lunts et al. (2020)

Improved contraction methods for infinite 2D tensor networks

In collaboration with colleagues at Ghent University and the University of Vienna, I worked on extending the VUMPS algorithm to the problem of contracting infinite 2D tensor networks and showed that in many cases it outperforms the standard method, the corner transfer matrix renormalization group (CTMRG) algorithm. In addition, I worked on a fixed point formulation of CTMRG, which we called the fixed point corner method (FPCM) and also showed to be faster than the original CTMRG algorithm.

Fishman et al. (2017)

Easing the sign problem with variational circuits and automatic differentiation

With colleagues from the CCQ and other institutions, I helped develop a method for decreasing the average sign of a wavefunction by optimizing a quantum circuit ansatz with automatic differentiation. This could have implications for improving the performance of Monte Carlo algorithms.

Torlai et al. (2019)

References

Tindall, Fishman. Gauging tensor networks with belief propagation, 2023.

Tindall, Fishman, Stoudenmire, Sels. Efficient tensor network simulation of IBM's Eagle kicked Ising experiment, 2023.

Kloss, Thoenniss, Sonner, Lerose, Fishman, Stoudenmire, Parcollet, Georges, Abanin. Equilibrium Quantum Impurity Problems via Matrix Product State Encoding of the Retarded Action, 2023.

Wu, Fishman, Pixley, Stoudenmire. Disentangling Interacting Systems with Fermionic Gaussian Circuits: Application to the Single Impurity Anderson Model, 2022.

Lunts, George, Stoudenmire, Fishman. The Hubbard model on the Bethe lattice via variational uniform tree states: metal-insulator transition and a Fermi liquid, 2020.

Fishman, White, Stoudenmire. The ITensor Software Library for Tensor Network Calculations, 2020.

Torlai, Carrasquilla, Fishman, Melko, Fisher. Wavefunction positivization via automatic differentiation, 2019.

Fishman, Vanderstraeten, Zauner-Stauber, Haegeman, Verstraete. Faster Methods for Contracting Infinite 2D Tensor Networks, 2017.

Zauner-Stauber, Vanderstraeten, Fishman, Verstraete, Haegeman. Variational optimization algorithms for uniform matrix product states, 2017.

Fishman, White. Compression of Correlation Matrices and an Efficient Method for Forming Matrix Product States of Fermionic Gaussian States, 2015.