Operations#
Edge Operations#
connect#
- tensorkrowch.connect(edge1, edge2)[source]#
- Connects two dangling edges. It is necessary that both edges have the same size so that contractions along that edge can be computed.
- Note that this connects edges from leaf (or data, virtual) nodes, but never from resultant nodes. If one tries to connect one of the inherited edges of a resultant node, the new connected edge will be attached to the original leaf nodes from which the resultant node inherited its edges. Hence, the resultant node will not “see” the connection until the TensorNetwork is reset().
- If the nodes that are being connected come from different networks, node2 (and its connected component) will be moved to node1’s network. See also move_to_network().
- This operation is the same as Edge.connect().
- Parameters:
- edge1 (Edge) – First dangling edge that is to be connected.
- edge2 (Edge) – Second dangling edge that is to be connected.
- Return type:
- Edge
- Examples

>>> nodeA = tk.Node(shape=(2, 3),
...                 name='nodeA',
...                 axes_names=('left', 'right'))
>>> nodeB = tk.Node(shape=(3, 4),
...                 name='nodeB',
...                 axes_names=('left', 'right'))
>>> new_edge = tk.connect(nodeA['right'], nodeB['left'])
>>> print(new_edge.name)
nodeA[right] <-> nodeB[left]
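If the nodes live in different networks, connecting them merges the networks, as described above. A minimal doctest-style sketch of this behaviour (the assertion reflects the documented move of node2’s connected component into node1’s network):

>>> netA = tk.TensorNetwork()
>>> netB = tk.TensorNetwork()
>>> nodeA = tk.randn(shape=(2, 3),
...                  axes_names=('left', 'right'),
...                  network=netA)
>>> nodeB = tk.randn(shape=(3, 4),
...                  axes_names=('left', 'right'),
...                  network=netB)
>>> _ = tk.connect(nodeA['right'], nodeB['left'])
>>> assert nodeB.network is netA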
connect_stack#
- tensorkrowch.connect_stack(edge1, edge2)[source]#
- Same as connect() but it is first verified that all stacked edges corresponding to both StackEdges are the same. That is, this is a redundant operation to re-connect a list of edges that should already be connected. However, it is mandatory, since when stacking two sequences of nodes independently it cannot be inferred that the resultant StackNodes had to be connected.
- This operation is the same as StackEdge.connect().
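A minimal sketch of the re-connection step, mirroring the unbind() example later on this page (using ^ on the stack edges should be equivalent to calling connect_stack() explicitly):

>>> net = tk.TensorNetwork()
>>> nodes = [tk.randn(shape=(2, 4, 2),
...                   axes_names=('left', 'input', 'right'),
...                   network=net)
...          for _ in range(10)]
>>> data = [tk.randn(shape=(4,),
...                  axes_names=('feature',),
...                  network=net)
...         for _ in range(10)]
>>> for i in range(10):
...     _ = nodes[i]['input'] ^ data[i]['feature']
...
>>> stack_nodes = tk.stack(nodes)
>>> stack_data = tk.stack(data)
>>> # The stacks must be re-connected before contracting
>>> _ = tk.connect_stack(stack_nodes['input'], stack_data['feature'])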
disconnect#
- tensorkrowch.disconnect(edge)[source]#
- Disconnects a connected edge; that is, the connected edge is split into two dangling edges, one for each node.
- This operation is the same as Edge.disconnect().
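A minimal sketch, reusing the connect() example above (assuming disconnect() returns the two resulting dangling edges and that Edge.is_dangling() is available):

>>> nodeA = tk.Node(shape=(2, 3), axes_names=('left', 'right'))
>>> nodeB = tk.Node(shape=(3, 4), axes_names=('left', 'right'))
>>> new_edge = tk.connect(nodeA['right'], nodeB['left'])
>>> edgeA, edgeB = tk.disconnect(new_edge)
>>> assert edgeA.is_dangling()
>>> assert edgeB.is_dangling()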
svd#
- tensorkrowch.svd(edge, side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- Contracts an edge via contract() and splits it via split() using mode = "svd". See split() for a more complete explanation.
- This only works if the nodes connected through the edge are leaf nodes. Otherwise, this will perform the contraction between the leaf nodes that were connected through this edge.
- This operation is the same as Edge.svd().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- side (str, optional) – Indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively. 
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> new_nodeA, new_nodeB = tk.svd(new_edge, rank=7)
...
>>> new_nodeA.shape
torch.Size([10, 7, 100])

>>> new_nodeB.shape
torch.Size([7, 20, 100])

>>> print(new_nodeA.axes_names)
['left', 'right', 'batch']

>>> print(new_nodeB.axes_names)
['left', 'right', 'batch']

Original nodes still exist in the network:

>>> assert nodeA.network == new_nodeA.network
>>> assert nodeB.network == new_nodeB.network
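To make the cum_percentage criterion concrete: with singular values (5, 3, 1, 0.5, 0.1), whose total sum is 9.6, cum_percentage=0.9 keeps the smallest number of leading values whose partial sum reaches 90% of the total, here three, since (5 + 3 + 1)/9.6 ≈ 0.94. A plain-PyTorch sketch of the criterion (illustrative only, not tensorkrowch API):

>>> import torch
>>> s = torch.tensor([5., 3., 1., 0.5, 0.1])
>>> cum = torch.cumsum(s, dim=0) / s.sum()
>>> rank = int((cum < 0.9).sum().item()) + 1
>>> rank
3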
svd_#
- tensorkrowch.svd_(edge, side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- In-place version of svd().
- Contracts an edge in-place via contract_() and splits it in-place via split_() using mode = "svd". See split() for a more complete explanation.
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation use the same names as the original nodes connected by edge.
- This operation is the same as Edge.svd_().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- side (str, optional) – Indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively. 
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> nodeA, nodeB = tk.svd_(new_edge, rank=7)
...
>>> nodeA.shape
torch.Size([10, 7, 100])

>>> nodeB.shape
torch.Size([7, 20, 100])

>>> print(nodeA.axes_names)
['left', 'right', 'batch']

>>> print(nodeB.axes_names)
['left', 'right', 'batch']
svdr#
- tensorkrowch.svdr(edge, side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- Contracts an edge via contract() and splits it via split() using mode = "svdr". See split() for a more complete explanation.
- This only works if the nodes connected through the edge are leaf nodes. Otherwise, this will perform the contraction between the leaf nodes that were connected through this edge.
- This operation is the same as Edge.svdr().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- side (str, optional) – Indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively. 
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> new_nodeA, new_nodeB = tk.svdr(new_edge, rank=7)
...
>>> new_nodeA.shape
torch.Size([10, 7, 100])

>>> new_nodeB.shape
torch.Size([7, 20, 100])

>>> print(new_nodeA.axes_names)
['left', 'right', 'batch']

>>> print(new_nodeB.axes_names)
['left', 'right', 'batch']

Original nodes still exist in the network:

>>> assert nodeA.network == new_nodeA.network
>>> assert nodeB.network == new_nodeB.network
svdr_#
- tensorkrowch.svdr_(edge, side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- In-place version of svdr().
- Contracts an edge in-place via contract_() and splits it in-place via split_() using mode = "svdr". See split() for a more complete explanation.
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation use the same names as the original nodes connected by edge.
- This operation is the same as Edge.svdr_().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- side (str, optional) – Indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively. 
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> nodeA, nodeB = tk.svdr_(new_edge, rank=7)
...
>>> nodeA.shape
torch.Size([10, 7, 100])

>>> nodeB.shape
torch.Size([7, 20, 100])

>>> print(nodeA.axes_names)
['left', 'right', 'batch']

>>> print(nodeB.axes_names)
['left', 'right', 'batch']
qr#
- tensorkrowch.qr(edge)[source]#
- Contracts an edge via contract() and splits it via split() using mode = "qr". See split() for a more complete explanation.
- This only works if the nodes connected through the edge are leaf nodes. Otherwise, this will perform the contraction between the leaf nodes that were connected through this edge.
- This operation is the same as Edge.qr().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> new_nodeA, new_nodeB = tk.qr(new_edge)
...
>>> new_nodeA.shape
torch.Size([10, 10, 100])

>>> new_nodeB.shape
torch.Size([10, 20, 100])

>>> print(new_nodeA.axes_names)
['left', 'right', 'batch']

>>> print(new_nodeB.axes_names)
['left', 'right', 'batch']

Original nodes still exist in the network:

>>> assert nodeA.network == new_nodeA.network
>>> assert nodeB.network == new_nodeB.network
qr_#
- tensorkrowch.qr_(edge)[source]#
- In-place version of qr().
- Contracts an edge in-place via contract_() and splits it in-place via split_() using mode = "qr". See split() for a more complete explanation.
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation use the same names as the original nodes connected by edge.
- This operation is the same as Edge.qr_().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> nodeA, nodeB = tk.qr_(new_edge)
...
>>> nodeA.shape
torch.Size([10, 10, 100])

>>> nodeB.shape
torch.Size([10, 20, 100])

>>> print(nodeA.axes_names)
['left', 'right', 'batch']

>>> print(nodeB.axes_names)
['left', 'right', 'batch']
rq#
- tensorkrowch.rq(edge)[source]#
- Contracts an edge via contract() and splits it via split() using mode = "rq". See split() for a more complete explanation.
- This only works if the nodes connected through the edge are leaf nodes. Otherwise, this will perform the contraction between the leaf nodes that were connected through this edge.
- This operation is the same as Edge.rq().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> new_nodeA, new_nodeB = tk.rq(new_edge)
...
>>> new_nodeA.shape
torch.Size([10, 10, 100])

>>> new_nodeB.shape
torch.Size([10, 20, 100])

>>> print(new_nodeA.axes_names)
['left', 'right', 'batch']

>>> print(new_nodeB.axes_names)
['left', 'right', 'batch']

Original nodes still exist in the network:

>>> assert nodeA.network == new_nodeA.network
>>> assert nodeB.network == new_nodeB.network
rq_#
- tensorkrowch.rq_(edge)[source]#
- In-place version of rq().
- Contracts an edge in-place via contract_() and splits it in-place via split_() using mode = "rq". See split() for a more complete explanation.
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation use the same names as the original nodes connected by edge.
- This operation is the same as Edge.rq_().
- Parameters:
- edge (Edge) – Edge whose nodes are to be contracted and split. 
- Return type:
- tuple[Node, Node]
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> new_edge = nodeA['right'] ^ nodeB['left']
>>> nodeA, nodeB = tk.rq_(new_edge)
...
>>> nodeA.shape
torch.Size([10, 10, 100])

>>> nodeB.shape
torch.Size([10, 20, 100])

>>> print(nodeA.axes_names)
['left', 'right', 'batch']

>>> print(nodeB.axes_names)
['left', 'right', 'batch']
contract#
- tensorkrowch.contract(edge)[source]#
- Contracts the nodes that are connected through the edge.
- This only works if the nodes connected through the edge are leaf nodes. Otherwise, this will perform the contraction between the leaf nodes that were connected through this edge.
- Nodes resultant from this operation are called "contract_edges". The node that keeps information about the Successor is edge.node1.
- This operation is the same as Edge.contract().
- Parameters:
- edge (Edge) – Edge that is to be contracted. Batch contraction is automatically performed when both nodes have batch edges with the same names. 
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeB')
...
>>> _ = nodeA['one'] ^ nodeB['one']
>>> _ = nodeA['two'] ^ nodeB['two']
>>> _ = nodeA['three'] ^ nodeB['three']
>>> result = tk.contract(nodeA['one'])
>>> result.shape
torch.Size([15, 20, 15, 20])
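Batch contraction, mentioned above, kicks in when both nodes carry batch edges with the same name; the batch axis then appears first in the result. A sketch mirroring the contract_between() example later on this page:

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 20, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
>>> _ = nodeA['right'] ^ nodeB['left']
>>> result = tk.contract(nodeA['right'])
>>> result.shape
torch.Size([100, 10, 20])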
contract_#
- tensorkrowch.contract_(edge)[source]#
- In-place version of contract().
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation are called "contract_edges_ip".
- This operation is the same as Edge.contract_().
- Parameters:
- edge (Edge) – Edge that is to be contracted. Batch contraction is automatically performed when both nodes have batch edges with the same names.
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeB')
...
>>> _ = nodeA['one'] ^ nodeB['one']
>>> _ = nodeA['two'] ^ nodeB['two']
>>> _ = nodeA['three'] ^ nodeB['three']
>>> result = tk.contract_(nodeA['one'])
>>> result.shape
torch.Size([15, 20, 15, 20])

nodeA and nodeB have been removed from the network:

>>> nodeA.network is None
True

>>> nodeB.network is None
True

>>> del nodeA
>>> del nodeB
Operation Class#
- class tensorkrowch.Operation(name, check_first, fn_first, fn_next)[source]#
- Class for node operations.
- A node operation is made up of two functions: the one that is executed the first time the operation is called, and the one that is executed in every other call (with the same arguments). Both functions are usually similar, though the former computes extra things regarding the creation of the resultant nodes and some auxiliary operations whose result will be the same in every call (e.g. when contracting two nodes, maybe a permutation of the tensors should be performed first; how this permutation is carried out is always the same, though the tensors themselves are different).
- Parameters:
- name (str) – Name of the operation. It cannot coincide with another operation’s name. Operation names can be checked via net.operations.
- check_first (callable) – Function that checks if the operation has been called at least one time. 
- fn_first (callable) – Function that is called the first time the operation is performed. 
- fn_next (callable) – Function that is called the next times the operation is performed. 
 
 
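The first-call/next-call split can be pictured with a plain Python sketch of the dispatch pattern. This is illustrative only, not tensorkrowch’s actual implementation; in particular, the assumption that check_first returns a cached successor (or None on the first call) is ours:

>>> class SketchOperation:
...     def __init__(self, name, check_first, fn_first, fn_next):
...         self.name = name
...         self.check_first = check_first
...         self.fn_first = fn_first
...         self.fn_next = fn_next
...     def __call__(self, *args, **kwargs):
...         successor = self.check_first(*args, **kwargs)
...         if successor is None:
...             # First call: create resultant nodes and cache auxiliary info
...             return self.fn_first(*args, **kwargs)
...         # Later calls: reuse the cached info, only the tensors change
...         return self.fn_next(successor, *args, **kwargs)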
Tensor-like Operations#
permute#
- tensorkrowch.permute(node, axes)[source]#
- Permutes the node’s tensor, as well as its axes and edges to match the new shape.
- See permute in the PyTorch documentation.
- Nodes resultant from this operation are called "permute". The node that keeps information about the Successor is node.
- This operation is the same as AbstractNode.permute().
- Parameters:
- node (AbstractNode) – Node whose tensor is to be permuted.
- axes (list[int, str or Axis]) – List of axes in the permuted order.
- Return type:
- Node
- Examples

>>> node = tk.randn((2, 5, 7))
>>> result = tk.permute(node, (2, 0, 1))
>>> result.shape
torch.Size([7, 2, 5])
permute_#
- tensorkrowch.permute_(node, axes)[source]#
- Permutes the node’s tensor, as well as its axes and edges to match the new shape (in-place).
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- See permute.
- Nodes resultant from this operation use the same name as node.
- This operation is the same as AbstractNode.permute_().
- Parameters:
- node (AbstractNode) – Node whose tensor is to be permuted.
- axes (list[int, str or Axis]) – List of axes in the permuted order.
- Return type:
- Node
- Examples

>>> node = tk.randn((2, 5, 7))
>>> node = tk.permute_(node, (2, 0, 1))
>>> node.shape
torch.Size([7, 2, 5])
tprod#
- tensorkrowch.tprod(node1, node2)[source]#
- Tensor product between two nodes. It can also be performed using the operator %.
- Nodes resultant from this operation are called "tprod". The node that keeps information about the Successor is node1.
- Parameters:
- node1 (AbstractNode) – First node.
- node2 (AbstractNode) – Second node.
- Return type:
- Node
- Examples

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> nodeB = tk.randn((4, 5), network=net)
>>> result = nodeA % nodeB
>>> result.shape
torch.Size([2, 3, 4, 5])
mul#
- tensorkrowch.mul(node1, node2)[source]#
- Element-wise product between two nodes. It can also be performed using the operator *.
- node2 can also be a number or a tensor, which will be multiplied by the node1 tensor as node1.tensor * node2. If this is used inside the contract() method of a TensorNetwork, contract() will have to be called explicitly to contract the network, rather than relying on its internal call via forward().
- Nodes resultant from this operation are called "mul". The node that keeps information about the Successor is node1.
- Parameters:
- node1 (AbstractNode) – First node.
- node2 (AbstractNode, torch.Tensor or number) – Second node. It can also be a number or a tensor.
- Return type:
- Node
- Examples

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> nodeB = tk.randn((2, 3), network=net)
>>> result = nodeA * nodeB
>>> result.shape
torch.Size([2, 3])

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> tensorB = torch.randn(2, 3)
>>> result = nodeA * tensorB
>>> result.shape
torch.Size([2, 3])
div#
- tensorkrowch.div(node1, node2)[source]#
- Element-wise division between two nodes. It can also be performed using the operator /.
- node2 can also be a number or a tensor, which will divide the node1 tensor as node1.tensor / node2. If this is used inside the contract() method of a TensorNetwork, contract() will have to be called explicitly to contract the network, rather than relying on its internal call via forward().
- Nodes resultant from this operation are called "div". The node that keeps information about the Successor is node1.
- Parameters:
- node1 (AbstractNode) – First node.
- node2 (AbstractNode, torch.Tensor or number) – Second node. It can also be a number or a tensor.
- Return type:
- Node
- Examples

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> nodeB = tk.randn((2, 3), network=net)
>>> result = nodeA / nodeB
>>> result.shape
torch.Size([2, 3])

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> tensorB = torch.randn(2, 3)
>>> result = nodeA / tensorB
>>> result.shape
torch.Size([2, 3])
add#
- tensorkrowch.add(node1, node2)[source]#
- Element-wise addition between two nodes. It can also be performed using the operator +.
- node2 can also be a number or a tensor, which will be added to the node1 tensor as node1.tensor + node2. If this is used inside the contract() method of a TensorNetwork, contract() will have to be called explicitly to contract the network, rather than relying on its internal call via forward().
- Nodes resultant from this operation are called "add". The node that keeps information about the Successor is node1.
- Parameters:
- node1 (AbstractNode) – First node.
- node2 (AbstractNode, torch.Tensor or number) – Second node. It can also be a number or a tensor.
- Return type:
- Node
- Examples

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> nodeB = tk.randn((2, 3), network=net)
>>> result = nodeA + nodeB
>>> result.shape
torch.Size([2, 3])

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> tensorB = torch.randn(2, 3)
>>> result = nodeA + tensorB
>>> result.shape
torch.Size([2, 3])
sub#
- tensorkrowch.sub(node1, node2)[source]#
- Element-wise subtraction between two nodes. It can also be performed using the operator -.
- node2 can also be a number or a tensor, which will be subtracted from the node1 tensor as node1.tensor - node2. If this is used inside the contract() method of a TensorNetwork, contract() will have to be called explicitly to contract the network, rather than relying on its internal call via forward().
- Nodes resultant from this operation are called "sub". The node that keeps information about the Successor is node1.
- Parameters:
- node1 (AbstractNode) – First node.
- node2 (AbstractNode, torch.Tensor or number) – Second node. It can also be a number or a tensor.
- Return type:
- Node
- Examples

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> nodeB = tk.randn((2, 3), network=net)
>>> result = nodeA - nodeB
>>> result.shape
torch.Size([2, 3])

>>> net = tk.TensorNetwork()
>>> nodeA = tk.randn((2, 3), network=net)
>>> tensorB = torch.randn(2, 3)
>>> result = nodeA - tensorB
>>> result.shape
torch.Size([2, 3])
renormalize#
- tensorkrowch.renormalize(node, p=2, axis=None)[source]#
- Normalizes the node with the specified norm. That is, the tensor of node is divided by its norm.
- Different norms can be taken by specifying the argument p, and across different dimensions, or node axes, by specifying the argument axis.
- See also torch.norm().
- Parameters:
- node (AbstractNode) – Node that is to be renormalized.
- p (int, float) – The order of the norm.
- axis (int, str, Axis or list, optional) – Axis or axes over which the norm is computed. If None, the norm is computed over the whole tensor.
- Return type:
- Node
- Examples

>>> nodeA = tk.randn((3, 3))
>>> renormA = tk.renormalize(nodeA)
>>> renormA.norm()
tensor(1.)
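Normalization along a single axis might look like the following sketch (assuming axis accepts an axis index or name, as the signature suggests; each fiber along the chosen axis should then have unit norm):

>>> nodeA = tk.randn((3, 3))
>>> renormA = tk.renormalize(nodeA, p=2, axis=0)
>>> renormA.tensor.norm(dim=0)
tensor([1., 1., 1.])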
conj#
- tensorkrowch.conj(node)[source]#
- Returns a view of the node’s tensor with a flipped conjugate bit. If the node has a non-complex dtype, this function returns a new node with the same tensor.
- See conj in the PyTorch documentation.
- Examples

>>> nodeA = tk.randn((3, 3), dtype=torch.complex64)
>>> conjA = tk.conj(nodeA)
>>> conjA.is_conj()
True
Node-like Operations#
split#
- tensorkrowch.split(node, node1_axes, node2_axes, mode='svd', side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- Splits one node in two via the decomposition specified in mode. To perform this operation the set of edges has to be split in two sets, corresponding to the edges of the first and second resultant nodes. Batch edges that don’t appear in any of the lists will be repeated in both nodes, and will appear as the first edges of the resultant nodes, in the order they appeared in node.
- Having specified the two sets of edges, the node’s tensor is reshaped as a batch matrix, with batch dimensions first, a single input dimension (adding up all edges in the first set) and a single output dimension (adding up all edges in the second set). With this shape, each matrix in the batch is decomposed according to mode.
- “svd”: Singular Value Decomposition \[M = USV^{\dagger}\] where \(U\) and \(V\) are unitary, and \(S\) is diagonal.
- “svdr”: Singular Value Decomposition adding Random phases (square diagonal matrices with random 1’s and -1’s) \[M = UR_1SR_2V^{\dagger}\]- where \(U\) and \(V\) are unitary, \(S\) is diagonal, and \(R_1\) and \(R_2\) are square diagonal matrices with random 1’s and -1’s. 
- “qr”: QR decomposition \[M = QR\] where \(Q\) is unitary and \(R\) is an upper triangular matrix.
- “rq”: RQ decomposition \[M = RQ\] where \(R\) is a lower triangular matrix and \(Q\) is unitary.
- If mode is “svd” or “svdr”, side must be provided. Besides, at least one of rank, cum_percentage and cutoff is required. If more than one is specified, the resulting rank will be the one that satisfies all conditions.
- Since the node is split in two, a new edge appears connecting both nodes. The axis that corresponds to this edge has the name "split".
- Nodes resultant from this operation are called "split". The node that keeps information about the Successor is node.
- This operation is the same as AbstractNode.split().
- Parameters:
- node (AbstractNode) – Node that is to be split. 
- node1_axes (list[int, str or Axis]) – First set of edges, will appear as the edges of the first (left) resultant node. 
- node2_axes (list[int, str or Axis]) – Second set of edges, will appear as the edges of the second (right) resultant node. 
- mode ({"svd", "svdr", "qr", "rq"}) – Decomposition to be used. 
- side (str, optional) – If mode is “svd” or “svdr”, indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively.
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> node = tk.randn(shape=(10, 15, 100),
...                 axes_names=('left', 'right', 'batch'))
>>> node_left, node_right = tk.split(node,
...                                  ['left'], ['right'],
...                                  mode='svd',
...                                  rank=5)
>>> node_left.shape
torch.Size([100, 10, 5])

>>> node_right.shape
torch.Size([100, 5, 15])

>>> node_left['split']
Edge( split_0[split] <-> split_1[split] )
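For comparison, the QR and RQ modes take no truncation arguments; with a reduced QR decomposition, the shared dimension should be the minimum of the two grouped dimensions, here 10. A minimal sketch with mode='qr':

>>> node = tk.randn(shape=(10, 15, 100),
...                 axes_names=('left', 'right', 'batch'))
>>> node_left, node_right = tk.split(node, ['left'], ['right'], mode='qr')
>>> node_left.shape
torch.Size([100, 10, 10])

>>> node_right.shape
torch.Size([100, 10, 15])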
split_#
- tensorkrowch.split_(node, node1_axes, node2_axes, mode='svd', side='left', rank=None, cum_percentage=None, cutoff=None)[source]#
- In-place version of split().
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Since the node is split in two, a new edge appears connecting both nodes. The axis that corresponds to this edge has the name "split".
- Nodes resultant from this operation are called "split_ip".
- This operation is the same as AbstractNode.split_().
- Parameters:
- node (AbstractNode) – Node that is to be split. 
- node1_axes (list[int, str or Axis]) – First set of edges, will appear as the edges of the first (left) resultant node. 
- node2_axes (list[int, str or Axis]) – Second set of edges, will appear as the edges of the second (right) resultant node. 
- mode ({"svd", "svdr", "qr", "rq"}) – Decomposition to be used. 
- side (str, optional) – If mode is “svd” or “svdr”, indicates the side to which the diagonal matrix \(S\) should be contracted. If “left”, the first resultant node’s tensor will be \(US\), and the other node’s tensor will be \(V^{\dagger}\). If “right”, their tensors will be \(U\) and \(SV^{\dagger}\), respectively.
- rank (int, optional) – Number of singular values to keep. 
- cum_percentage (float, optional) – - Proportion that should be satisfied between the sum of all singular values kept and the total sum of all singular values. \[\frac{\sum_{i \in \{kept\}}{s_i}}{\sum_{i \in \{all\}}{s_i}} \ge cum\_percentage\]
- cutoff (float, optional) – Quantity that lower bounds singular values in order to be kept. 
 
- Return type:
- tuple[Node, Node]
- Examples

>>> node = tk.randn(shape=(10, 15, 100),
...                 axes_names=('left', 'right', 'batch'))
>>> node_left, node_right = tk.split_(node,
...                                   ['left'], ['right'],
...                                   mode='svd',
...                                   rank=5)
>>> node_left.shape
torch.Size([100, 10, 5])

>>> node_right.shape
torch.Size([100, 5, 15])

>>> node_left['split']
Edge( split_ip_0[split] <-> split_ip_1[split] )

node has been removed from the network, but the object still exists until it is deleted:

>>> node.network is None
True

>>> del node
contract_edges#
- tensorkrowch.contract_edges(edges, node1, node2)[source]#
- Contracts all selected edges between two nodes.
- Nodes resultant from this operation are called "contract_edges". The node that keeps information about the Successor is node1.
- Parameters:
- edges (list[Edge]) – List of edges that are to be contracted. They must be edges shared between node1 and node2. Batch contraction is automatically performed when both nodes have batch edges with the same names.
- node1 (AbstractNode) – First node of the contraction. Its non-contracted edges will appear first in the list of inherited edges of the resultant node. 
- node2 (AbstractNode) – Second node of the contraction. Its non-contracted edges will appear last in the list of inherited edges of the resultant node. 
 
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(10, 15, 20),
...                  axes_names=('one', 'two', 'three'),
...                  name='nodeB')
...
>>> _ = nodeA['one'] ^ nodeB['one']
>>> _ = nodeA['two'] ^ nodeB['two']
>>> _ = nodeA['three'] ^ nodeB['three']
>>> result = tk.contract_edges([nodeA['one'], nodeA['three']],
...                            nodeA, nodeB)
>>> result.shape
torch.Size([15, 15])

If node1 and node2 are the same node, the contraction is a trace:

>>> result2 = tk.contract_edges([result['two_0']], result, result)
>>> result2.shape
torch.Size([])
contract_between#
- tensorkrowch.contract_between(node1, node2)[source]#
- Contracts all edges shared between two nodes. Batch contraction is automatically performed when both nodes have batch edges with the same names. It can also be performed using the operator @.
- Nodes resultant from this operation are called "contract_edges". The node that keeps information about the Successor is node1.
- This operation is the same as AbstractNode.contract_between().
- Parameters:
- node1 (AbstractNode) – First node of the contraction. Its non-contracted edges will appear first in the list of inherited edges of the resultant node. 
- node2 (AbstractNode) – Second node of the contraction. Its non-contracted edges will appear last in the list of inherited edges of the resultant node. 
 
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 7, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> _ = nodeA['right'] ^ nodeB['left']
>>> result = tk.contract_between(nodeA, nodeB)
>>> result.shape
torch.Size([100, 10, 7])
contract_between_#
- tensorkrowch.contract_between_(node1, node2)[source]#
- In-place version of contract_between().
- Following the PyTorch convention, names of functions ending with an underscore indicate in-place operations.
- Nodes resultant from this operation are called "contract_edges_ip".
- This operation is the same as AbstractNode.contract_between_().
- Parameters:
- node1 (AbstractNode) – First node of the contraction. Its non-contracted edges will appear first in the list of inherited edges of the resultant node. 
- node2 (AbstractNode) – Second node of the contraction. Its non-contracted edges will appear last in the list of inherited edges of the resultant node. 
 
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 7, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
...
>>> _ = nodeA['right'] ^ nodeB['left']
>>> result = tk.contract_between_(nodeA, nodeB)
>>> result.shape
torch.Size([100, 10, 7])

nodeA and nodeB have been removed from the network:

>>> nodeA.network is None
True

>>> nodeB.network is None
True

>>> del nodeA
>>> del nodeB
stack#
- tensorkrowch.stack(nodes)[source]#
- Creates a StackNode or ParamStackNode by stacking a collection of Nodes or ParamNodes, respectively. Restrictions that are applied to the nodes in order to be stackable are the same as in StackNode.
- The stack dimension will be the first one in the resultant node.
- See ParamStackNode and TensorNetwork to learn how the auto_stack() mode affects the computation of stack().
- If this operation is used several times with the same input nodes, but their dimensions can change from one call to another, this will lead to undesired behaviour. The network should be reset(). This situation should be avoided in the contract() method; otherwise it will fail in subsequent calls to contract() or forward().
- Nodes resultant from this operation are called "stack". If this operation returns a virtual ParamStackNode, it will be called "virtual_result_stack". See AbstractNode to learn about this reserved name. The node that keeps information about the Successor is nodes[0], the first stacked node.
- Parameters:
- nodes (list[AbstractNode] or tuple[AbstractNode]) – Sequence of nodes that are to be stacked. They must be of the same type, have the same rank and axes names, be in the same tensor network, and have edges with the same types. 
- Examples

>>> net = tk.TensorNetwork()
>>> nodes = [tk.randn(shape=(2, 4, 2),
...                   axes_names=('left', 'input', 'right'),
...                   network=net)
...          for _ in range(10)]
>>> stack_node = tk.stack(nodes)
>>> stack_node.shape
torch.Size([10, 2, 4, 2])
unbind#
- tensorkrowch.unbind(node)[source]#
- Unbinds a StackNode or ParamStackNode, where the first dimension is assumed to be the stack dimension.
- If auto_unbind() is set to False, each resultant node will store its own tensor. Otherwise, they will have only a reference to the corresponding slice of the (Param)StackNode.
- See TensorNetwork to learn how the auto_unbind mode affects the computation of unbind().
- Nodes resultant from this operation are called "unbind". The node that keeps information about the Successor is node.
- Parameters:
- node (StackNode or ParamStackNode) – Node that is to be unbound. 
- Return type:
- list[Node] 
- Examples

>>> net = tk.TensorNetwork()
>>> nodes = [tk.randn(shape=(2, 4, 2),
...                   axes_names=('left', 'input', 'right'),
...                   network=net)
...          for _ in range(10)]
>>> data = [tk.randn(shape=(4,),
...                  axes_names=('feature',),
...                  network=net)
...         for _ in range(10)]
...
>>> for i in range(10):
...     _ = nodes[i]['input'] ^ data[i]['feature']
...
>>> stack_nodes = tk.stack(nodes)
>>> stack_data = tk.stack(data)
...
>>> # It is necessary to re-connect stacks
>>> _ = stack_nodes['input'] ^ stack_data['feature']
>>> result = tk.unbind(stack_nodes @ stack_data)
>>> print(result[0].name)
unbind_0

>>> result[0].axes
[Axis( left (0) ), Axis( right (1) )]

>>> result[0].shape
torch.Size([2, 2])
einsum#
- tensorkrowch.einsum(string, *nodes)[source]#
- Performs einsum contraction based on opt_einsum. This operation facilitates contracting several nodes at once, specifying directly the order of appearance of the resultant edges. Without this operation, several contractions and permutations would be needed.
- Since it adapts a tensor operation for nodes, certain node properties are first checked. Thus, it verifies that all edges are correctly connected and all nodes are in the same network. It also performs batch contraction whenever corresponding edges are batch edges.
- Nodes resultant from this operation are called "einsum". The node that keeps information about the Successor is nodes[0], the first node involved in the operation.
- Parameters:
- string (str) – - Einsum-like string indicating how the contraction should be performed. It consists of a comma-separated list of inputs and an output separated by an arrow. For instance, the contraction \[T_{j,l} = \sum_{i,k,m}{A_{i,j,k}B_{k,l,m}C_{i,m}}\]- can be expressed as: - string = 'ijk,klm,im->jl' 
- nodes (AbstractNode...) – Nodes that are involved in the contraction. They should appear in the same order as specified in the string, and should either all be (Param)StackNodes or none of them be a (Param)StackNode.
 
- Return type:
- Node
- Examples

>>> nodeA = tk.randn(shape=(10, 15, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeA')
>>> nodeB = tk.randn(shape=(15, 7, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeB')
>>> nodeC = tk.randn(shape=(7, 10, 100),
...                  axes_names=('left', 'right', 'batch'),
...                  name='nodeC')
...
>>> _ = nodeA['right'] ^ nodeB['left']
>>> _ = nodeB['right'] ^ nodeC['left']
>>> _ = nodeC['right'] ^ nodeA['left']
...
>>> result = tk.einsum('ijb,jkb,kib->b', nodeA, nodeB, nodeC)
>>> result.shape
torch.Size([100])
stacked_einsum#
- tensorkrowch.stacked_einsum(string, *nodes_lists)[source]#
- Applies the same einsum() operation (same string) to a sequence of groups of nodes (all groups having the same amount of nodes, with the same properties, etc.). That is, it stacks these groups of nodes into a single collection of StackNodes that is then contracted via einsum() (using the stack dimensions as batch), and unbound afterwards.
- Parameters:
- string (str) – - Einsum-like string indicating how the contraction should be performed. It consists of a comma-separated list of inputs and an output separated by an arrow. For instance, the contraction \[T_{j,l} = \sum_{i,k,m}{A_{i,j,k}B_{k,l,m}C_{i,m}}\]- can be expressed as: - string = 'ijk,klm,im->jl' 
- nodes_lists (List[Node or ParamNode]...) – Lists of nodes that are involved in the contraction. They should appear in the same order as specified in the string.
 
- Return type:
- list[Node] 
- Examples

>>> net = tk.TensorNetwork()
>>> nodesA = [tk.randn(shape=(10, 15, 100),
...                    axes_names=('left', 'right', 'batch'),
...                    name='nodeA',
...                    network=net)
...           for _ in range(10)]
>>> nodesB = [tk.randn(shape=(15, 7, 100),
...                    axes_names=('left', 'right', 'batch'),
...                    name='nodeB',
...                    network=net)
...           for _ in range(10)]
>>> nodesC = [tk.randn(shape=(7, 10, 100),
...                    axes_names=('left', 'right', 'batch'),
...                    name='nodeC',
...                    network=net)
...           for _ in range(10)]
...
>>> for i in range(10):
...     _ = nodesA[i]['right'] ^ nodesB[i]['left']
...     _ = nodesB[i]['right'] ^ nodesC[i]['left']
...     _ = nodesC[i]['right'] ^ nodesA[i]['left']
...
>>> result = tk.stacked_einsum('ijb,jkb,kib->b', nodesA, nodesB, nodesC)
>>> len(result)
10

>>> result[0].shape
torch.Size([100])