# RTNI - A symbolic integrator for Haar-random tensor networks

Together with Motohisa Fukuda and Robert König, we have a new paper (and some code!): RTNI - A symbolic integrator for Haar-random tensor networks. This is a very unusual paper for me: the main goal here is not to prove theorems, but to introduce a piece of software we developed, RTNI. RTNI is a software package, available for Mathematica and for Python, which computes the expectation value of **random tensors** defined by **Haar-distributed** random **unitary operators**.

Let us unpack all these technical terms. First, unitary operators: these are linear transformations (or matrices) which are *rotation-like*, meaning that they are invertible and they do not modify the magnitude of the vectors they act on. Here is a counterclockwise 45° (or $\pi/4$) rotation:

$$R = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},$$

and below is a unitary matrix realizing the reflection with respect to the first coordinate axis in the plane:

$$S = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

In the general case, unitary matrices satisfy the condition $UU^* = U^*U = I$, where the star operation corresponds to taking the (Hermitian) adjoint (complex conjugation, followed by transposition). The set of unitary matrices forms a compact group under multiplication, and this group admits a unique probability measure which is invariant under left/right translations by group elements: the Haar measure. As it turns out, in the case of the unitary group (and even more so for its relatives, the orthogonal and symplectic groups), the Haar measure is complicated, and computing integrals with respect to it is a daunting task.

But before we get into computing integrals with respect to the Haar measure, let me address an even more fundamental question regarding it: sampling. Producing a Haar-distributed random unitary matrix on a computer is quite easy: construct a random matrix with enough symmetry (say, a Ginibre or a GUE matrix), compute its singular value/eigenvalue decomposition, and just take one of the unitary matrices appearing in that decomposition. The initial symmetry of your random matrix will translate into the invariance property that uniquely identifies the Haar measure (although there are some minor subtleties, see here or here). Numerous software implementations of this procedure exist; here is one example in a MATLAB library popular with the Quantum Information Theory crowd.
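The recipe above can be sketched in a few lines of Python with numpy (an illustration of the general procedure, not the MATLAB implementation mentioned; the function name `haar_unitary` is ours):

```python
import numpy as np

def haar_unitary(m, rng):
    """Sample an m x m Haar-distributed unitary matrix."""
    # Ginibre ensemble: i.i.d. complex Gaussian entries.
    g = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    # QR decomposition; Q alone is NOT Haar-distributed because of the
    # phase ambiguity in the factorization -- one of the "minor subtleties".
    q, r = np.linalg.qr(g)
    # Fixing the phases of the diagonal of R makes the decomposition
    # unique, and Q then inherits the invariance of the Ginibre ensemble.
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
U = haar_unitary(4, rng)
assert np.allclose(U @ U.conj().T, np.eye(4))   # U is indeed unitary
```

The phase correction in the last line is the standard fix: without it, the distribution of `q` depends on the QR implementation and is not translation-invariant.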

Once we have properly defined the Haar measure and know how to produce samples from it, the next question that comes to mind is how to compute integrals with respect to it. This problem was first considered in the physics literature by Weingarten, in the asymptotic limit of large dimension. The mathematical theory was developed, rigorously and in full generality, by Benoit Collins (here, and later with Piotr Śniady), who also coined the name *Weingarten formula* for the following result.

**Theorem** (Weingarten formula)

*Let $d, p$ be positive integers and $i = (i_1, \ldots, i_p)$, $i' = (i'_1, \ldots, i'_p)$, $j = (j_1, \ldots, j_p)$, $j' = (j'_1, \ldots, j'_p)$ be $p$-tuples of positive integers from $\{1, \ldots, d\}$. Then*

$$\int_{\mathcal U(d)} U_{i_1 j_1} \cdots U_{i_p j_p} \overline{U_{i'_1 j'_1}} \cdots \overline{U_{i'_p j'_p}} \, \mathrm dU = \sum_{\alpha, \beta \in S_p} \delta_{i_1 i'_{\alpha(1)}} \cdots \delta_{i_p i'_{\alpha(p)}} \delta_{j_1 j'_{\beta(1)}} \cdots \delta_{j_p j'_{\beta(p)}} \operatorname{Wg}(d, \alpha \beta^{-1}).$$

*If the number of $U$ factors differs from the number of $\overline U$ factors ($p \neq p'$), then*

$$\int_{\mathcal U(d)} U_{i_1 j_1} \cdots U_{i_p j_p} \overline{U_{i'_1 j'_1}} \cdots \overline{U_{i'_{p'} j'_{p'}}} \, \mathrm dU = 0.$$

Here we denoted by $\mathcal U(d)$ the unitary group acting on a $d$-dimensional Hilbert space, and by $S_p$ the symmetric group on $p$ elements. The integrals are taken with respect to the normalized Haar measure on $\mathcal U(d)$. The function $\operatorname{Wg}$ is called the unitary Weingarten function.

The above formula allows one to compute the expectation of any monomial (and thus, by linearity, any polynomial) in the entries of a Haar-distributed random unitary matrix. But there is one caveat: the Weingarten function appearing in the formula above is an object of combinatorial nature, which has no explicit formula for general $p$. Nonetheless, for fixed monomial order $p$, there exist algorithmic procedures to compute the values $\operatorname{Wg}(d, \sigma)$ for all permutations $\sigma \in S_p$, and thus one can compute averages of monomials such as

$$\int_{\mathcal U(d)} |U_{11}|^2 \, \mathrm dU = \operatorname{Wg}(d, (1)) = \frac{1}{d}.$$
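Two standard low-order values can be cross-checked by Monte Carlo (a numpy sketch; the sampler is the QR recipe described earlier, and the closed forms below are the classical order-1 and order-2 Weingarten values $\operatorname{Wg}(d, (1)) = 1/d$, $\operatorname{Wg}(d, (1)(2)) = \frac{1}{d^2-1}$, $\operatorname{Wg}(d, (12)) = \frac{-1}{d(d^2-1)}$):

```python
import numpy as np

def haar_unitary(m, rng):
    # QR of a Ginibre matrix, with the phases of R fixed.
    g = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, N = 3, 20000
rng = np.random.default_rng(0)
# Collect N samples of the (1,1) entry of a Haar unitary.
samples = np.array([haar_unitary(d, rng)[0, 0] for _ in range(N)])

# p = 1: E|U_11|^2 = Wg(d, (1)) = 1/d
assert abs(np.mean(np.abs(samples) ** 2) - 1 / d) < 0.02
# p = 2: E|U_11|^4 = 2 Wg(d, (1)(2)) + 2 Wg(d, (12)) = 2/(d(d+1))
assert abs(np.mean(np.abs(samples) ** 4) - 2 / (d * (d + 1))) < 0.02
```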

As it turns out, in practical applications one does not often encounter polynomials in the entries of a random unitary matrix $U$. It is more likely to be faced with expressions which involve (traces of) polynomials in the matrix $U$ itself, together with some other given matrices. For instance, the following type of expression is quite common (in, e.g., Quantum Information Theory):

$$\int_{\mathcal U(d)} \operatorname{Tr}(U X U^* Y) \, \mathrm dU,$$

where $X, Y$ are two given, fixed matrices. One could, of course, write the trace in coordinates, and compute the expectation as a sum where the individual terms are calculated with the help of the Weingarten formula. Here's where **RTNI** comes to the rescue, saving you all that trouble (we use here the Mathematica implementation of RTNI):

`In[1]:= MultinomialexpectationvalueHaar[d, {1, 2}, {X, Y}, True]`

`Out[1]= (Tr[X] Tr[Y])/d`

Above, the *MultinomialexpectationvalueHaar* function computes $\int_{\mathcal U(d)} \operatorname{Tr}\big(U^{(1)} X \, U^{(2)} Y\big) \, \mathrm dU$, where $U^{(1)} = U$ (the symbol 1 meaning we simply take $U$) and $U^{(2)} = U^*$ (the symbol 2 meaning we take the adjoint $U^*$). Using this function, one can compute (the trace of) any *linear expression* involving Haar-distributed random unitary matrices and deterministic matrices.
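The symbolic answer $\operatorname{Tr}(X)\operatorname{Tr}(Y)/d$ can be verified numerically; here is a Monte Carlo sketch (assuming numpy, with `haar_unitary` the QR-based sampler described earlier):

```python
import numpy as np

def haar_unitary(m, rng):
    # QR of a Ginibre matrix, with the phases of R fixed.
    g = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, N = 3, 20000
rng = np.random.default_rng(0)
X = rng.standard_normal((d, d))   # two arbitrary fixed matrices
Y = rng.standard_normal((d, d))

vals = []
for _ in range(N):
    U = haar_unitary(d, rng)
    vals.append(np.trace(U @ X @ U.conj().T @ Y))
est = np.mean(vals)

# RTNI's symbolic answer: E Tr(U X U* Y) = Tr(X) Tr(Y) / d
exact = np.trace(X) * np.trace(Y) / d
assert abs(est - exact) < 0.1
```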

But RTNI can do much more than that! It can compute expectation values of (non-linear) graphs of matrices (i.e. tensors) involving random unitary matrices which can act on tensor products of vector spaces. Such situations are very common in quantum information theory and in theoretical physics in general, where networks of tensors are becoming very popular. One starts from the description of a tensor in Penrose graphical notation: this is a graph where vertices correspond to tensors and edges correspond to tensor contractions. Importantly, the tensor product operation corresponds to the disjoint union of graphs. Here is an example coming from the theory of random quantum channels. Consider a random quantum channel $\Phi(X) = \operatorname{Tr}_n(V X V^*)$ given by a Haar-distributed random isometry $V : \mathbb C^d \to \mathbb C^n \otimes \mathbb C^k$ ($V$ is just the truncation to the first $d$ columns of an $nk \times nk$ random unitary matrix $U$).
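To fix ideas, here is a minimal numerical sketch of this construction (assuming numpy; the helper names are ours, and we use the convention that the $n$-dimensional tensor factor is the one traced out):

```python
import numpy as np

def haar_unitary(m, rng):
    # QR of a Ginibre matrix, with the phases of R fixed.
    g = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n, k, d = 3, 2, 4                   # small dimensions, with d <= n * k
rng = np.random.default_rng(0)
U = haar_unitary(n * k, rng)
V = U[:, :d]                        # truncation to the first d columns: an isometry

def channel(X):
    """Phi(X) = Tr_n[V X V*]: apply the isometry, trace out the n-dim factor."""
    Y = (V @ X @ V.conj().T).reshape(n, k, n, k)
    return np.trace(Y, axis1=0, axis2=2)   # partial trace over the n-dim factor

# V is an isometry (V* V = I_d), hence Phi is trace-preserving.
assert np.allclose(V.conj().T @ V, np.eye(d))
rho = np.eye(d) / d                 # the maximally mixed input state
sigma = channel(rho)                # a k x k output state
assert np.isclose(np.trace(sigma), 1.0)
```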

Such random quantum channels were first considered by Hayden and Winter in relation to the additivity conjecture in quantum information theory, solved in the negative by Hastings. In such considerations, one bounds the minimum output entropy of the tensor product channel $\Phi \otimes \bar\Phi$ by using the overlap

$$\operatorname{Tr}\big[\omega_k \, (\Phi \otimes \bar\Phi)(\omega_d)\big]$$

between the output of the channels acting on a maximally entangled state $\omega_d$ and another maximally entangled state (on the output spaces) $\omega_k$. Recall that the maximally entangled state is defined, in general, by $\omega_d = \Omega_d \Omega_d^*$, where the vector $\Omega_d$ reads

$$\Omega_d = \frac{1}{\sqrt d} \sum_{i=1}^d e_i \otimes e_i,$$

with $(e_i)_{i=1}^d$ being an orthonormal basis of $\mathbb C^d$. Here $\bar\Phi$ is the complex conjugate channel, obtained by taking the entry-wise complex conjugate of $V$. The diagram for the scalar overlap is given below.
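As a quick sanity check of this definition (a numpy sketch; the vector $\Omega_d$ is just the flattened identity matrix, normalized):

```python
import numpy as np

d = 4
# Omega_d = (1/sqrt(d)) sum_i e_i (x) e_i: the flattened identity, normalized.
Omega = np.eye(d).reshape(-1) / np.sqrt(d)
omega = np.outer(Omega, Omega.conj())   # the state omega_d = Omega Omega*

assert np.isclose(np.linalg.norm(Omega), 1.0)   # unit vector
assert np.isclose(np.trace(omega), 1.0)         # unit trace
assert np.allclose(omega @ omega, omega)        # rank-one projection
```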

One can input the graph above into RTNI using the following Mathematica commands:

`In[1]:= e1 = {{"U*", 2, "out", 1}, {"U", 1, "in", 1}};`

`In[2]:= e2 = {{"U*", 1, "out", 1}, {"U", 2, "in", 1}};`

`In[3]:= e3 = {{"U", 1, "out", 1}, {"U*", 1, "in", 1}};`

`In[4]:= e4 = {{"U", 2, "out", 1}, {"U*", 2, "in", 1}};`

`In[5]:= e5 = {{"U", 1, "out", 2}, {"U*", 2, "in", 2}};`

`In[6]:= e6 = {{"U", 2, "out", 2}, {"U*", 1, "in", 2}};`

`In[7]:= g = {e1, e2, e3, e4, e5, e6};`

`In[8]:= listg = {{g, 1/(d k)}};`

Note that the final command assigns a weight $1/(dk)$ to the graph $g$; this corresponds to the normalization of the two maximally entangled states $\omega_d$ and $\omega_k$. The edge numbering in the code matches the numbering in the picture, as do the numberings of the different boxes and of the tensor factors. One then has to call the *integrateHaarUnitary* function of the RTNI package to compute the expected value of the scalar defined above with respect to the Haar measure on $\mathcal U(nk)$. The routine *integrateHaarUnitary* implements the **graphical Weingarten calculus** which I developed together with Benoit in this paper. It is basically the Weingarten formula, read in the Penrose graphical calculus. RTNI gives the following output, which we further manipulate in order to obtain the asymptotic overlap in the regime where $d \approx tnk$ and $n \to \infty$:

`In[9]:= Eg = integrateHaarUnitary[listg, "U", {d}, {n, k}, n k]`

`In[10]:= overlap = Eg[[1, 2]];`

`In[11]:= overlapt = overlap /. {d -> t n k};`

`In[12]:= Assuming[t > 0 && k > 1, Limit[overlapt, n -> Infinity]] // Simplify`

`Out[9]= {{{}, -((d^2 k^2 n)/(d k - d k^3 n^2)) - (d k n^2)/(d k - d k^3 n^2) + (d k^2 n)/(d k^2 n - d k^4 n^3) + (d^2 k n^2)/(d k^2 n - d k^4 n^3)}}`

`Out[12]= (1 - t)/k^2 + t`
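As an independent check, one can also estimate the same expected overlap by brute-force Monte Carlo and compare it with the symbolic RTNI output above (a numpy sketch with small dimensions; the reduction of the overlap to Frobenius norms of $\Phi(E_{ij})$ is a short computation from the definitions, and all helper names are ours):

```python
import numpy as np

def haar_unitary(m, rng):
    # QR of a Ginibre matrix, with the phases of R fixed.
    g = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n, k, d, N = 3, 2, 4, 2000   # small dimensions, chosen for speed
rng = np.random.default_rng(0)

def overlap_sample():
    V = haar_unitary(n * k, rng)[:, :d]        # random isometry
    def channel(X):                             # Phi(X) = Tr_n[V X V*]
        Y = (V @ X @ V.conj().T).reshape(n, k, n, k)
        return np.trace(Y, axis1=0, axis2=2)
    # Expanding omega_d = (1/d) sum_{ij} E_ij (x) E_ij, the overlap
    # Tr[omega_k (Phi (x) conj(Phi))(omega_d)] reduces to
    # (1/(dk)) sum_{ij} ||Phi(E_ij)||_F^2.
    total = 0.0
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d))
            E[i, j] = 1.0
            total += np.linalg.norm(channel(E)) ** 2
    return total / (d * k)

est = np.mean([overlap_sample() for _ in range(N)])

# The symbolic RTNI output above, evaluated at these dimensions:
exact = (-(d**2 * k**2 * n) / (d*k - d*k**3 * n**2)
         - (d * k * n**2) / (d*k - d*k**3 * n**2)
         + (d * k**2 * n) / (d * k**2 * n - d * k**4 * n**3)
         + (d**2 * k * n**2) / (d * k**2 * n - d * k**4 * n**3))
assert abs(est - exact) < 0.03
```

For these dimensions the symbolic expression evaluates to $26/35 \approx 0.743$, and the Monte Carlo estimate lands within sampling error of it.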

This case study highlights the strengths of the RTNI package: computing the expected overlap using the algebraic Weingarten formula would have required tedious summations and case analysis for index collisions. The RTNI package provides a simple, conceptual way of computing such expectations. RTNI can also display the different tensor networks one inputs; here are some examples:

If you are interested in using RTNI for your research, we would be happy to hear about it. Also, if you want to implement the killer feature of RTNI which is missing in the current release, please contact us!