Imagine that a pair of counter-propagating flows (e.g., currents, liquids) move in opposite directions and exchange associated information, such as voltage potential or pressure, via some kind of interaction. In principle, the interaction can lead to a stationary condition in which this information depends only on location and no longer changes with time. This stationary condition is not as trivial as the one for a pair of co-propagating flows, which is simply their effective average. In this post, we will first focus on the interaction itself and lay down some basic algebra to describe this model, then move on to the quantum Hall effect - a rare and well-known macroscopic quantum phenomenon - and the design of algorithms. We will wrap up this post by implementing the algorithms in Python and introducing the general usage of the Python package.


Define an interaction mathematically

To be specific, from here on we will only discuss current flows and their interactions; other types of flows are beyond the scope of this post. At an interaction site (\(t=t_0\)), the flows exchange electrons, immediately reach a stationary condition and update their voltage potentials (\(t=t_0^+\)), and then move on to the next site (\(t=t_1\) for Flow#1 and \(t=t_2\) for Flow#2). Between sites (\(t_0<t<t_1,t_2\)), each flow remains at an unchanged potential.

Illustrative chart for interactions between flows regardless of their propagation directions

Similar to a Markov process, a famous stochastic process, the actual interaction can be represented by an elegant transmission matrix $M$, in which the diagonal entries stand for the probability of remaining in the original state, while the off-diagonal entries stand for the probability of jumping from one state to another:

\[ M = \begin{bmatrix} m_{00} & m_{01} \\ m_{10} & m_{11} \end{bmatrix} \]

Naturally, all entries should fall between 0 and 1. The stronger the interaction, the more likely transitions between states become, i.e., the closer $m_{01}$ and $m_{10}$ are to unity. In addition, the matrix $M$ is required to fulfill one condition: the elements in each row and each column must add up to 1. Note that the matrix dimension is equal to the number of flows. At $t=t_0$ ($t=t_0^+$), the voltage potentials of Flow#1 and Flow#2 are denoted as $\mu_1$ ($\mu_1^\prime$) and $\mu_2$ ($\mu_2^\prime$). Then we arrive at the following equations:

$$ \begin{bmatrix}\mu_1^{\prime}\\ \mu_2^{\prime}\end{bmatrix} =\begin{bmatrix}1-\beta/2&\beta/2\\ \beta/2&1-\beta/2\end{bmatrix} \begin{bmatrix}\mu_1\\ \mu_2\end{bmatrix}=M(\beta)\begin{bmatrix}\mu_1\\ \mu_2\end{bmatrix}, $$ where $$ \beta=\frac{2T_1T_2}{T_1+T_2}\in[0,1]. $$ We have discussed the physical implication of $M(\beta)$ in an academic paper in preparation (L.-X. Wang et al., Selective interaction between counter-propagating edge states in a topological insulator) and will not expand the discussion here.
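To make this concrete, here is a minimal NumPy sketch of a single interaction; the transmission values $T_1$, $T_2$ and the potentials are illustrative, not taken from any particular device:

```python
import numpy as np

def interaction_matrix(beta):
    """Transmission matrix M(beta) for a single two-flow interaction."""
    return np.array([[1 - beta / 2, beta / 2],
                     [beta / 2, 1 - beta / 2]])

beta = 2 * 0.6 * 0.8 / (0.6 + 0.8)   # beta = 2*T1*T2/(T1+T2) with T1=0.6, T2=0.8
M = interaction_matrix(beta)
mu = np.array([1.0, 0.0])            # potentials before the interaction
mu_prime = M @ mu                    # potentials after the interaction

# Rows and columns of M(beta) each sum to 1, so the total potential is conserved
assert np.isclose(M.sum(axis=0), 1.0).all() and np.isclose(M.sum(axis=1), 1.0).all()
assert np.isclose(mu_prime.sum(), mu.sum())
```

The doubly stochastic constraint is exactly what makes the total potential of the pair invariant under one interaction.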


Why Are Counter-propagating Flows So Different?

For a single interaction, it is not practical to distinguish co- from counter-propagating flows, because their propagation directions can only be determined from the spatial position of the second (later) interaction. Therefore, interaction between counter-propagating flows only becomes a meaningful subject when a sequence of interactions is present. For co-propagating flows, modelling a sequence of interactions amounts to repeatedly multiplying the matrix representations of the individual interactions from the head to the end of the sequence. This is because both outgoing flows from a former interaction will definitely meet again at the later interaction.

Mathematical model of co-propagating flows interacting with each other
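This head-to-end composition can be sketched numerically. For the symmetric two-flow matrix $M(\beta)$ introduced above, one can check algebraically that the product of two interactions is again a matrix of the same form with an effective strength $\beta' = \beta_1 + \beta_2 - \beta_1\beta_2$; the strengths below are illustrative:

```python
import numpy as np

def interaction_matrix(beta):
    """Transmission matrix M(beta) for one interaction site."""
    return np.array([[1 - beta / 2, beta / 2],
                     [beta / 2, 1 - beta / 2]])

# Two successive interaction sites; the later matrix is applied last
M_total = interaction_matrix(0.8) @ interaction_matrix(0.4)

# The combined effect is again doubly stochastic and of the same M(beta) form
assert np.allclose(M_total.sum(axis=0), 1.0)
assert np.allclose(M_total.sum(axis=1), 1.0)
assert np.allclose(M_total, interaction_matrix(0.8 + 0.4 - 0.8 * 0.4))
```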

However, this does not apply to the counter-propagating situation, where the outgoing flows move apart and never meet again. In this situation, it becomes challenging to keep track of the voltage potential at each interaction site. As illustrated below, the left-mover ($\mu^{0}_1$) first interacts with the right-mover ($\mu^{3}_2$) at $M(\beta^{0})$, then with $\mu^{2}_2$ at $M(\beta^{1})$, and so on. This behaviour creates a self-referential loop when solving for any voltage potential such as $\mu_1^{1}$, which is one of the major challenges and makes the counter-propagating situation distinct. We will address this issue very soon.

Mathematical model of counter-propagating flows interacting with each other

Model a Sequence of Interactions

For the co-propagating situation, it is not difficult to see that the combined effect is the product of the individual matrices representing the individual interactions. For the counter-propagating situation, a general analytical solution is only possible for a system containing two flows; it does not extend to larger systems. For instance, in a three-flow system, the matrix representation of the combined effect is not necessarily similar to any individual matrix, whereas this is guaranteed for a two-flow system. Accordingly, a general numerical approach is needed to deal with an arbitrary number of flows.

As the number of flows increases, the model quickly becomes complex. First, more flows mean more types of interactions (the combination number $C^2_m$, which grows quadratically with $m$). Second, due to the lack of a general analytical solution, the analysis has to be redone for every added interaction site; it is tedious at best, and beyond a few interactions it is no longer solvable by hand. To overcome this, we introduce a numerical model consisting of specialized algorithms implemented in Python.
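For a sense of scale, the number of distinct pairwise interaction types can be counted with Python's standard library:

```python
from math import comb

# Number of distinct pairwise interaction types among m flows: C(m, 2)
for m in (2, 3, 4, 10):
    print(m, comb(m, 2))   # 2 -> 1, 3 -> 3, 4 -> 6, 10 -> 45
```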


Exploring the Problem Through the Lens of Physics

Before going into the model and algorithms, knowing a bit about the physics background of this problem may help you understand the motivation. You can of course skip this section if you are not interested.

Interacting counter-propagating flows can be physically realized in a quantum Hall system with edge states moving in opposite directions. Here is a short introduction to the quantum Hall effect and its history and significance in the field of condensed matter physics. Notably, the quantum Hall effect is one of the few quantum phenomena detectable at a macroscopic scale.

Quantum Hall effect

When a two-dimensional (extremely thin film), highly conducting (high crystalline quality) semiconductor-like material (e.g., a GaAs heterostructure) is subjected to a vertically aligned magnetic field (roughly 6 orders of magnitude stronger than Earth’s magnetic field), a profound and intriguing phenomenon appears, referred to as the quantum Hall effect, which has been extensively studied since its discovery in 1980. Several Nobel Prizes have been awarded to related discoveries, in 1985, 1998, and 2016. Aside from its deeper physical implications, such as topology in physics, a well-celebrated and trusted “edge-state” picture of this phenomenon is that electrons are forced to move along the edges of the thin film in one direction, forming dissipationless “edge states” (electron flows). Without delving into deeper theory, a “magic” force seems to guide the motion of the electrons. Nevertheless, this simple picture is accurate enough to be used in academic discussions and exciting enough to spur expectations of a revolution in the field of electric power transmission if no magnetic field were needed (see the quantum spin Hall effect).

Landauer-Büttiker Formalism and Interaction between Edge States

Back to “edge states”: this unique “bullet-like” (ballistic) motion of electrons is best described by a model developed by Markus Büttiker in the early 1980s based on Rolf Landauer’s earlier work, later referred to as the Landauer-Büttiker formalism. Oftentimes we assume no interaction between these edge states. In a typical device with several terminals (each a reservoir containing numerous electrons), interactions between edge states are in principle not discernible if all edge states move in the same direction (co-propagating). This is because co-propagating edge states are identical in terms of physical properties such as electrochemical potential, so any exchange between them would not lead to any observable consequence. To detect the interaction, it is necessary to divert co-propagating edge states into different paths, which complicates the fabrication process at the very least. For counter-propagating edge states, however, this interaction is directly measurable even in a standard Hall bar.

Algorithms

This section ‘Algorithms’ is taken from an original academic paper in preparation, ‘L.-X. Wang et al., A Full Model For Interacting Counter-propagating Flows’.

Before discussing the algorithm, we first introduce a general system constituted by an arbitrary number $m$ of states. All possible interactions can be represented in a graph, where the vertices are the states involved and the edges are the interactions between states. Every single interaction involves only two states and can be defined by an $m\times m$ transition matrix. For example, an interaction between states $i$ and $j$ can be described by a transition matrix $\Omega$, where $\Omega_{kk}=1$ for $k\notin\{i,j\}$ and $\Omega_{pq}=\omega_{pq}$ for $p,q\in\{i,j\}$. Complex correlations between flows can then be represented and formulated via a combination of various interactions.
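Embedding a pairwise interaction into the full $m\times m$ transition matrix can be sketched as follows. This is an illustrative helper (not the package's `interaction_builder`), and for concreteness it uses the symmetric $M(\beta)$ block, although the definition above allows general $\omega_{pq}$:

```python
import numpy as np

def embed_interaction(m, i, j, beta):
    """m x m transition matrix Omega for an interaction between states i and j:
    identity on all untouched states, the 2x2 block M(beta) on rows/columns i, j."""
    omega = np.eye(m)
    omega[i, i] = omega[j, j] = 1 - beta / 2
    omega[i, j] = omega[j, i] = beta / 2
    return omega

omega = embed_interaction(4, 1, 3, 0.6)
# Untouched states pass through unchanged; rows and columns still sum to 1
assert omega[0, 0] == omega[2, 2] == 1.0
assert np.allclose(omega.sum(axis=0), 1.0) and np.allclose(omega.sum(axis=1), 1.0)
```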

Squeeze Algorithm

Illustrative diagram of squeeze algorithm
We can now turn to the squeeze algorithm, which is divided into three major steps:

  1. Build the relationship between the initial states (orange boxes) and the intermediate states;

  2. Build the relationship between final states (purple boxes) and the intermediate states;

  3. Connect initial states with final states.

For illustration, we start with a system of $m$ states (from $S_1$ to $S_m$, of which $m_f$ are forward-movers) and two interactions $\Omega^1$ and $\Omega^2$ (above). The notations are listed in the table below.

| Symbol | Explanation |
| --- | --- |
| $\Omega^n$ | Interaction matrix representing the $n$-th interaction site. |
| $\Omega_{ij}^n$ | The entry of matrix $\Omega^n$ in row $i$ and column $j$. |
| $S^n_k$ | The $n$-th state of the $k$-th flow, i.e., after the interaction $\Omega^{n}$. |
| $S^{i}_k$ | Initial state of the $k$-th flow. |
| $S^{f}_k$ | Final state of the $k$-th flow. |
| $\Omega^{[n]}$ | The cumulative effect of the sequence of interactions from $\Omega^1$ to $\Omega^n$. |
| $\Theta^{[n-1]}$ | Matrix connecting initial states and intermediate states. |
| $\Theta^{[n-1]}_{ij}$ | The entry of matrix $\Theta^{[n-1]}$ in row $i$ and column $j$. |

First, we construct the intermediate states $S_k^1$ from the initial states $S_k^0$ in Eqs. 1 and 2. Given the opposite propagation directions of the counter-movers, we need to treat them differently. $$ \begin{aligned} S_k^{1}\left(k\leq m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^{1}S_j^0+\sum_{j=m_f+1}^m\Omega_{kj}^{1} S_j^{1}, \end{aligned} \tag{1} $$

$$ \begin{aligned} S_k^{1}\left(k>m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^2S_j^{1}+\sum_{j=m_f+1}^m\Omega_{kj}^{2}S_j^{2}. \end{aligned} \tag{2} $$

As seen, Eq.1 and Eq.2 are inter-dependent. Substituting Eq.1 into Eq.2, we further obtain Eq.3. Moving the terms containing $S_j^1$ to the left leads to a series of linear equations (cf. Eq.4). Therefore, to find the solution for all intermediate states $S_k^1$, i.e., to build the relationship between initial states and intermediate states, we need to solve these $(m-m_f)$ linear equations simultaneously: $$ \begin{aligned} S_k^{1}(k>m_f)=\sum_{i=1}^{m_f}\sum_{j=m_f+1}^m\Omega_{ki}^2\Omega_{ij}^{1}S_j^{1}+\sum_{j=1}^{m_f}\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ij}^{1}S_j^0 +\sum_{j=m_f+1}^m\Omega_{kj}^2S_j^2 \end{aligned} \tag{3} $$ $$ \begin{aligned} &\left(1-\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ik}^{1}\right)S_k^{1}-\sum_{j=m_f+1,j\neq k}^{m}\left(\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ij}^{1}\right)S_j^{1}\\ &=\sum_{j=1}^{m_f}\left(\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ij}^{1}\right)S_j^0 +\sum_{j=m_f+1}^m\Omega_{kj}^2S_j^2 \end{aligned} \tag{4} $$ $$ \begin{aligned} \sum_{j=m_f+1}^mp_{kj}S_j^1=\sum_{j=1}^{m_f}q_{kj}S_j^0 +\sum_{j=m_f+1}^ml_{kj}S_j^2 \end{aligned} \tag{5} $$

For clarity, as shown in Eq.5, we reformulate these equations via three sets of parameters $p_{kj}$, $q_{kj}$ and $l_{kj}$, defined in Eq.6. $$ \begin{aligned} p_{kj}\left(k,j>m_f, k\neq j\right)&=-\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ij}^1,\\ p_{kk}\left(k>m_f\right)&=1-\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ik}^1,\\ q_{kj}&=\sum_{i=1}^{m_f}\Omega_{ki}^2\Omega_{ij}^1,\\ l_{kj}&=\Omega_{kj}^2. \end{aligned} \tag{6} $$ These linear equations are equivalent to the matrix equation Eq.7. According to Cramer’s rule, the solution for $S_{j}^1$ can be expressed as Eq.8, in which $det(\mathcal{P}_k,b)$ means the determinant of the matrix $\mathcal{P}$ with its $k$-th column replaced by $b$. Due to the properties of the determinant, $det(\mathcal{P}_k,b)$ can be decomposed into the sum of $det(\mathcal{P}_k,q_j)S_j^0$ and $det(\mathcal{P}_k,l_j)S_j^2$.

$$ \begin{aligned} &\mathcal{P} \begin{pmatrix} S_{m_f+1}^{1} \\ S_{m_f+2}^{1} \\ \vdots \\ S_{m}^{1} \end{pmatrix}=b,\\ \mathcal{P} &= \begin{pmatrix} p_{m_f+1,m_f+1} & p_{m_f+1,m_f+2} & \cdots & p_{m_f+1,m} \\ p_{m_f+2,m_f+1} & p_{m_f+2,m_f+2} & \cdots & p_{m_f+2,m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,m_f+1} & p_{m,m_f+2} & \cdots & p_{m,m} \end{pmatrix},\\ b &= \begin{pmatrix} \sum_{j=1}^{m_f}q_{m_f+1,j}S_j^0+\sum_{j=m_f+1}^ml_{m_f+1,j}S_j^2 \\ \sum_{j=1}^{m_f}q_{m_f+2,j}S_j^0+\sum_{j=m_f+1}^ml_{m_f+2,j}S_j^2 \\ \vdots \\ \sum_{j=1}^{m_f}q_{m,j}S_j^0+\sum_{j=m_f+1}^ml_{m,j}S_j^2 \\ \end{pmatrix}. \\ \end{aligned} \tag{7} $$

We can reformulate Eq.8 into Eq.9, where the intermediate states of the backward-movers are expressed by all inputs $S_j^0(j\leq m_f)$ and $S_j^2(j>m_f)$. Next, we substitute Eq.9 into Eq.1 and obtain Eq.10, which gives the intermediate states of the forward-movers. With this we have accomplished the first step, connecting the initial states $S_j^0$ with the intermediate states $S_j^1$ via a set of parameters.

$$ \begin{aligned} S_k^1(k>m_f) &=\frac{det\left(\mathcal{P_k},b\right)}{det\left(\mathcal{P}\right)}, \\ det\left(\mathcal{P_k},b\right) &=\sum_{j=1}^{m_f}det\left(\mathcal{P_k},q_j\right)S_j^0+ \sum_{j=m_f+1}^m det\left(\mathcal{P_k},l_{j}\right)S_j^2, \\ q_j &= [q_{m_f+1,j},q_{m_f+2,j},\cdots,q_{m,j}]^T, \\ l_j &= [l_{m_f+1,j},l_{m_f+2,j},\cdots,l_{m,j}]^T. \\ \end{aligned} \tag{8} $$

$$ S_k^{1}(k>m_f)=\sum_{j=1}^{m_f}\frac{det(\mathcal{P_k},q_{j})}{det(\mathcal{P})}S_j^0+\sum_{j=m_f+1}^m \frac{det(\mathcal{P_k},l_j)}{det(\mathcal{P})}S_j^2 \tag{9} $$

$$ \begin{aligned} S_k^1(k\leq m_f)&=\sum_{j=1}^{m_f}\Omega_{kj}^{1}S_j^0+\sum_{j=m_f+1}^m \sum_{i=1}^{m_f}\Omega_{kj}^{1}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})}S_i^0\\ &+\sum_{j=m_f+1}^m\sum_{i=m_f+1}^m\Omega_{kj}^{1} \frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})}S_i^2\\ &=\sum_{i=1}^{m_f}\left(\Omega_{ki}^{1}+\sum_{j=m_f+1}^m\Omega_{kj}^{1}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})}\right)S_i^0\\ &+\sum_{i=m_f+1}^m\left(\sum_{j=m_f+1}^m\Omega_{kj}^{1}\frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})}\right)S_i^2 \end{aligned} \tag{10} $$
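As a quick numerical aside, the Cramer's-rule expressions used in Eqs. 8-10 agree with a direct linear solve; the system below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((3, 3)) + 3 * np.eye(3)   # diagonally dominant, hence invertible
b = rng.random(3)

# Cramer's rule: S_k = det(P with column k replaced by b) / det(P)
s_cramer = np.empty(3)
for k in range(3):
    Pk = P.copy()
    Pk[:, k] = b                         # replace column k by the right-hand side
    s_cramer[k] = np.linalg.det(Pk) / np.linalg.det(P)

assert np.allclose(s_cramer, np.linalg.solve(P, b))
```

In practice a direct solver is both faster and numerically safer than evaluating determinants; Cramer's rule is used here only to make the decomposition into $q_j$ and $l_j$ columns explicit.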

We again pack these parameters into a transformation matrix $\Theta^{[1]}$ as below:

$$ \begin{aligned} &\Theta_{ki}^{[1]}\left(k,i\leq m_f\right)=\Omega_{ki}^{[1]}+\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})},\\ &\Theta_{ki}^{[1]}\left(k\leq m_f,i> m_f\right)=\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})},\\ &\Theta_{ki}^{[1]}\left(k> m_f,i\leq m_f\right)=\frac{det(\mathcal{P_k},q_{i})}{det(\mathcal{P})},\\ &\Theta_{ki}^{[1]}\left(k,i> m_f\right)=\frac{det(\mathcal{P_k},l_{i})}{det(\mathcal{P})}, \end{aligned} \tag{11} $$

Note that $\Theta^{[1]}$ is not a Markov matrix. Knowing $\Theta^{[1]}$, we can write down the intermediate states: $S_k^1=\sum_{i=1}^{m_f}\Theta_{ki}^{[1]}S_i^0+\sum_{i=m_f+1}^m\Theta_{ki}^{[1]}S_i^{2}$. We then move on to build the connection between the intermediate states and the final states. It is straightforward to write down the final states $S_k^{2}\left(k\leq m_f\right)$ and $S_k^0\left(k>m_f\right)$:

$$ S_k^{2}\left(k\leq m_f\right)=\sum_{j=1}^{m_f}\Omega_{kj}^2S_j^{1}+\sum_{j=m_f+1}^m\Omega_{kj}^{2}S_j^{2}, \tag{12} $$

$$ S_k^0\left(k>m_f\right)=\sum_{j=1}^{m_f}\Omega_{kj}^{[1]}S_j^0+\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}S_j^1. \tag{13} $$

As immediately seen, we may substitute $S_j^1$ in Eq.12 and Eq.13 to obtain Eq.14 and Eq.15, respectively. With this we have accomplished the final step, i.e., connecting the final states directly with the initial states.

$$ \begin{aligned} S_k^{2}\left(k\leq m_f\right)&= \sum_{j=1}^{m_f}\Omega_{kj}^2\left(\sum_{i=1}^{m_f}\Theta_{ji}^{[1]} S_i^0 +\sum_{i=m_f+1}^m\Theta_{ji}^{[1]}S_i^2\right) +\sum_{i=m_f+1}^m\Omega_{ki}^2S_i^2\\ &=\sum_{i=1}^{m_f}\left(\sum_{j=1}^{m_f}\Omega_{kj}^2\Theta_{ji}^{[1]}\right)S_i^0 +\sum_{i=m_f+1}^m\left(\sum_{j=1}^{m_f}\Omega_{kj}^2\Theta_{ji}^{[1]}+\Omega_{ki}^2\right)S_i^2, \end{aligned} \tag{14} $$

$$ \begin{aligned} S_k^{0}\left(k>m_f\right)&= \sum_{i=1}^{m_f}\Omega_{ki}^{[1]}S_i^0 +\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\left(\sum_{i=1}^{m_f}\Theta_{ji}^{[1]}S_i^0+\sum_{i=m_f+1}^m\Theta_{ji}^{[1]}S_i^2\right)\\ &=\sum_{i=1}^{m_f}\left(\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\Theta_{ji}^{[1]}+\Omega_{ki}^{[1]}\right)S_i^0 +\sum_{i=m_f+1}^m\left(\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\Theta_{ji}^{[1]}\right)S_i^2 \end{aligned} \tag{15} $$

We use $\Omega^{[2]}$ to represent this connection between initial states and final states:

$$ \begin{aligned} \Omega_{ki}^{[2]}\left(k,i\leq m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^2\Theta_{ji}^{[1]},\\ \Omega_{ki}^{[2]}\left(k\leq m_f,i>m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^2\Theta_{ji}^{[1]}+\Omega_{ki}^2,\\ \Omega_{ki}^{[2]}\left(k> m_f,i\leq m_f\right)&=\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\Theta_{ji}^{[1]}+\Omega_{ki}^{[1]},\\ \Omega_{ki}^{[2]}\left(k>m_f,i>m_f\right)&=\sum_{j=m_f+1}^m\Omega_{kj}^{[1]}\Theta_{ji}^{[1]}, \end{aligned} \tag{16} $$

This concludes the calculation for this minimal system with two interactions.
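The elimination above can be cross-checked numerically: instead of applying Cramer's rule by hand, assemble Eq.1 and Eq.2 into a single linear system, solve for the intermediate states $S^1$, and then evaluate the outgoing states via Eq.12 and Eq.13. The helper below is an illustrative sketch (not part of the package), tried on two counter-propagating flows with made-up strengths:

```python
import numpy as np

def squeeze_two(omega1, omega2, m_f, s_in):
    """Minimal system of two interactions: solve Eq.1/Eq.2 for the
    intermediate states S^1, then evaluate the outgoing states.

    omega1, omega2 : (m, m) interaction matrices
    m_f            : number of forward movers (states 0 .. m_f-1)
    s_in           : known inputs, S^0 for forward movers, S^2 for backward movers
    """
    m = omega1.shape[0]
    fwd = np.arange(m) < m_f
    # Linear system A @ S1 = b built from Eq.1 (rows k <= m_f) and Eq.2 (rows k > m_f)
    A = np.eye(m)
    b = np.zeros(m)
    A[np.ix_(fwd, ~fwd)] -= omega1[np.ix_(fwd, ~fwd)]
    b[fwd] = omega1[np.ix_(fwd, fwd)] @ s_in[fwd]
    A[np.ix_(~fwd, fwd)] -= omega2[np.ix_(~fwd, fwd)]
    b[~fwd] = omega2[np.ix_(~fwd, ~fwd)] @ s_in[~fwd]
    s1 = np.linalg.solve(A, b)
    # Outgoing states: Eq.12 for forward movers, Eq.13 for backward movers
    s_out = np.empty(m)
    s_out[fwd] = omega2[np.ix_(fwd, fwd)] @ s1[fwd] + omega2[np.ix_(fwd, ~fwd)] @ s_in[~fwd]
    s_out[~fwd] = omega1[np.ix_(~fwd, fwd)] @ s_in[fwd] + omega1[np.ix_(~fwd, ~fwd)] @ s1[~fwd]
    return s_out, s1

# Two counter-propagating flows (m=2, m_f=1) with illustrative strengths
M = lambda beta: np.array([[1 - beta / 2, beta / 2], [beta / 2, 1 - beta / 2]])
s_out, s1 = squeeze_two(M(0.4), M(0.6), 1, np.array([1.0, 0.0]))
# Each interaction conserves the sum of its two participants, so the
# total of the outgoing states matches the total of the incoming ones
assert np.isclose(s_out.sum(), 1.0)
```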

Forward-propagation Algorithm

Illustrative diagram of forward- and backward-propagation processes

Starting from this minimal system, we can grow the system by adding more interactions at the tail of the sequence. In a system with $n$ interactions, to obtain $\Omega^{[3]}$, the previously obtained $\Omega^{[2]}$ is used as a new input together with the newly added interaction $\Omega^3$ in another round of the squeeze process, and so on until reaching $\Omega^{[n]}$. It is not difficult to see that the final step amounts to replacing $\Omega^1$ and $\Omega^2$ in the above derivations by $\Omega^{[n-1]}$ and $\Omega^n$, yielding $\Omega^{[n]}$. For completeness, we still provide this part of the derivation here (cf. Eqs. 17 to 36).

Our general principle in the forward-propagation process is to first disregard all the intermediate states and focus only on calculating the $n$ core parameters, i.e., $\Omega^{[n]}$ and $\Theta^{[i]},i=1,2,\cdots,n-1$. Afterwards, we pass these parameters to the backward-propagation process to calculate all intermediate states.

$$ S_k^{n-1}\left(k\leq m_f\right)=\sum_{j=1}^{m_f}\Omega_{kj}^{[n-1]}S_j^0+\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]} S_j^{n-1}, \tag{17} $$

$$ S_k^{n-1}\left(k>m_f\right)=\sum_{j=1}^{m_f}\Omega_{kj}^n S_j^{n-1}+\sum_{j=m_f+1}^m\Omega_{kj}^{n}S_j^{n},\tag{18} $$


$$ \begin{aligned} S_k^{n-1}\left(k> m_f\right)&=\sum_{i=1}^{m_f}\Omega_{ki}^n\left(\sum_{j=1}^{m_f}\Omega_{ij}^{[n-1]}S_j^0 +\sum_{j=m_f+1}^m\Omega_{ij}^{[n-1]}S_j^{n-1}\right) +\sum_{j=m_f+1}^m\Omega_{kj}^n S_j^n \\ &=\sum_{i=1}^{m_f}\sum_{j=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]}S_j^0 +\sum_{i=1}^{m_f}\sum_{j=m_f+1}^m\Omega_{ki}^n\Omega_{ij}^{[n-1]}S_j^{n-1} +\sum_{j=m_f+1}^m\Omega_{kj}^n S_j^n \end{aligned} \tag{19} $$


$$ \begin{aligned} S_k^{n-1}-\sum_{i=1}^{m_f}\sum_{j=m_f+1}^m\Omega_{ki}^n\Omega_{ij}^{[n-1]}S_j^{n-1} = \sum_{j=1}^{m_f}\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]}S_j^0 +\sum_{j=m_f+1}^m\Omega_{kj}^nS_j^n \end{aligned} \tag{20} $$


$$ \begin{aligned} &\left(1-\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ik}^{[n-1]}\right)S_k^{n-1}-\sum_{j=m_f+1,j\neq k}^{m}\left(\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]}\right)S_j^{n-1} \\ &=\sum_{j=1}^{m_f}\left(\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]}\right)S_j^0+\sum_{j=m_f+1}^m\Omega_{kj}^nS_j^n \end{aligned} \tag{21} $$


$$ q_{kj}=\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]}, l_{kj}=\Omega_{kj}^n \tag{22} $$


$$ \begin{aligned} p_{kj}\left(k,j>m_f, k\neq j\right)&=-\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ij}^{[n-1]},\\ p_{kk}\left(k>m_f\right)&=1-\sum_{i=1}^{m_f}\Omega_{ki}^n\Omega_{ik}^{[n-1]} \end{aligned} \tag{23} $$


$$ \begin{aligned} \sum_{j=m_f+1}^mp_{kj}S_j^{n-1}=\sum_{j=1}^{m_f}q_{kj}S_j^0 +\sum_{j=m_f+1}^ml_{kj}S_j^n \end{aligned} \tag{24} $$


$$ \begin{aligned} \mathcal{P} &= \begin{pmatrix} p_{m_f+1,m_f+1} & p_{m_f+1,m_f+2} & \cdots & p_{m_f+1,m} \\ p_{m_f+2,m_f+1} & p_{m_f+2,m_f+2} & \cdots & p_{m_f+2,m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,m_f+1} & p_{m,m_f+2} & \cdots & p_{m,m} \end{pmatrix},\\ b &= \begin{pmatrix} \sum_{j=1}^{m_f}q_{m_f+1,j}S_j^0+\sum_{j=m_f+1}^ml_{m_f+1,j}S_j^n \\ \sum_{j=1}^{m_f}q_{m_f+2,j}S_j^0+\sum_{j=m_f+1}^ml_{m_f+2,j}S_j^n \\ \vdots \\ \sum_{j=1}^{m_f}q_{m,j}S_j^0+\sum_{j=m_f+1}^ml_{m,j}S_j^n \\ \end{pmatrix}. \end{aligned} \tag{25} $$


$$ \begin{aligned} \mathcal{P} \begin{pmatrix} S_{m_f+1}^{n-1} \\ S_{m_f+2}^{n-1} \\ \vdots \\ S_{m}^{n-1} \end{pmatrix}=b \end{aligned} \tag{26} $$


$$ \begin{aligned} &S_{k}^{n-1}(k>m_f)=\frac{det\left(\mathcal{P_k},b\right)}{det\left(\mathcal{P}\right)},\\ &det(\mathcal{P_k},b)=\sum_{j=1}^{m_f}det(\mathcal{P_k},q_{j})S_j^0+\sum_{j=m_f+1}^m det(\mathcal{P_k},l_j)S_j^n,\\ &q_j = [q_{m_f+1,j},q_{m_f+2,j},\cdots,q_{m,j}]^T,\\ &l_j = [l_{m_f+1,j},l_{m_f+2,j},\cdots,l_{m,j}]^T. \end{aligned} \tag{27} $$


$$ S_k^{n-1}(k>m_f)=\sum_{j=1}^{m_f}\frac{det(\mathcal{P_k},q_{j})}{det(\mathcal{P})}S_j^0+\sum_{j=m_f+1}^m\frac{det(\mathcal{P_k},l_{j})}{det(\mathcal{P})}S_j^n \tag{28} $$


$$ \begin{aligned} S_k^{n-1}(k\leq m_f)&=\sum_{j=1}^{m_f}\Omega_{kj}^{[n-1]}S_j^0+\sum_{j=m_f+1}^m\sum_{i=1}^{m_f}\Omega_{kj}^{[n-1]}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})}S_i^0 \\ &+\sum_{j=m_f+1}^m\sum_{i=m_f+1}^m\Omega_{kj}^{[n-1]}\frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})}S_i^n \\ &=\sum_{i=1}^{m_f}\left(\Omega_{ki}^{[n-1]}+\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})}\right)S_i^0 \\ &+\sum_{i=m_f+1}^m\left(\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})}\right)S_i^n \end{aligned} \tag{29} $$


$$ \begin{aligned} \Theta_{ki}^{[n-1]}\left(k,i\leq m_f\right) &= \Omega_{ki}^{[n-1]}+\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\frac{det(\mathcal{P_j},q_{i})}{det(\mathcal{P})},\\ \Theta_{ki}^{[n-1]}\left(k\leq m_f,i> m_f\right) &= \sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]} \frac{det(\mathcal{P_j},l_{i})}{det(\mathcal{P})},\\ \Theta_{ki}^{[n-1]}\left(k> m_f,i\leq m_f\right) &= \frac {det(\mathcal{P_k},q_{i})}{det(\mathcal{P})},\\ \Theta_{ki}^{[n-1]}\left(k,i> m_f\right) &= \frac{det(\mathcal{P_k},l_{i})}{det(\mathcal{P})}, \end{aligned} \tag{30} $$


$$ S_k^{n}\left(k\leq m_f\right)=\sum_{j=1}^{m_f}\Omega_{kj}^nS_j^{n-1}+\sum_{i=m_f+1}^m\Omega_{ki}^{n}S_i^{n}, \tag{31} $$

$$ S_k^0\left(k>m_f\right)=\sum_{i=1}^{m_f}\Omega_{ki}^{[n-1]}S_i^0+\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}S_j^{n-1}, \tag{32} $$

$$ S_k^{n-1}=\sum_{i=1}^{m_f}\Theta_{ki}^{[n-1]}S_i^0+\sum_{i=m_f+1}^m\Theta_{ki}^{[n-1]}S_i^{n}, \tag{33} $$


$$ \begin{aligned} S_k^{n}\left(k\leq m_f\right)&= \sum_{j=1}^{m_f}\Omega_{kj}^n\left(\sum_{i=1}^{m_f}\Theta_{ji}^{[n-1]}S_i ^0+\sum_{i=m_f+1}^m\Theta_{ji}^{[n-1]}S_i^n\right) +\sum_{i=m_f+1}^m\Omega_{ki}^nS_i^n\\ &=\sum_{i=1}^{m_f}\left(\sum_{j=1}^{m_f}\Omega_{kj}^n\Theta_{ji}^{[n-1]}\right)S_i^0 +\sum_{i=m_f+1}^m\left(\sum_{j=1}^{m_f}\Omega_{kj}^n\Theta_{ji}^{[n-1]}+\Omega_{ki}^n\right)S_i^n, \end{aligned} \tag{34} $$


$$ \begin{aligned} S_k^{0}\left(k>m_f\right)&= \sum_{i=1}^{m_f}\Omega_{ki}^{[n-1]}S_i^0 +\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\left(\sum_{i=1}^{m_f}\Theta_{ji}^{[n-1]}S_i^0+\sum_{i=m_f+1}^m \Theta_{ji}^{[n-1]}S_i^n\right)\\ &=\sum_{i=1}^{m_f}\left(\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\Theta_{ji}^{[n-1]}+\Omega_{ki}^{[n-1]}\right) S_i^0\\ &+\sum_{i=m_f+1}^m\left(\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\Theta_{ji}^{[n-1]}\right)S_i^n \end{aligned} \tag{35} $$


$$ \begin{aligned} \Omega_{ki}^{[n]}\left(k,i\leq m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^n\Theta_{ji}^{[n-1]},\\ \Omega_{ki}^{[n]}\left(k\leq m_f,i>m_f\right)&=\sum_{j=1}^{m_f}\Omega_{kj}^n\Theta_{ji}^{[n-1]}+\Omega_{ki}^n,\\ \Omega_{ki}^{[n]}\left(k> m_f,i\leq m_f\right)&=\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\Theta_{ji}^{[n-1]}+\Omega_{ki}^{[n-1]},\\ \Omega_{ki}^{[n]}\left(k>m_f,i>m_f\right)&=\sum_{j=m_f+1}^m\Omega_{kj}^{[n-1]}\Theta_{ji}^{[n-1]}, \end{aligned} \tag{36} $$


$$ \begin{aligned} S^f&=[S_1^n,\cdots,S_{m_f}^n,S_{m_f+1}^0,\cdots,S_{m}^0]^T;\\ S^i&=[S_1^0,\cdots,S_{m_f}^0,S_{m_f+1}^n,\cdots,S_{m}^n]^T \end{aligned} \tag{37} $$

By now, we have derived all the transferable core parameters $\Omega^{[n]}$ and $\Theta^{[i]},i=1,\cdots,n-1$. This concludes the forward-propagation process.

$$ \begin{aligned} \begin{pmatrix} S_{1}^{n} \\ \vdots \\ S_{m_f}^{n} \\ S_{m_f+1}^{0} \\ \vdots \\ S_m^0 \end{pmatrix}=\Omega^{[n]} \begin{pmatrix} S_{1}^{0} \\ \vdots \\ S_{m_f}^{0} \\ S_{m_f+1}^{n} \\ \vdots \\ S_m^n \end{pmatrix} \end{aligned} \tag{38} $$

Backward-propagation Algorithm

As depicted, we can first obtain the final states $S_k^f$ directly from the initial states $S_k^i$ and the transition matrix $\Omega^{[n]}$ (see Eq.38). Moreover, we utilize the initial states $S_k^i$ and the parameters $\Theta^{[n-1]}$ passed from the forward process to obtain the first set of intermediate states $S_k^{n-1}$: $S_k^{n-1}=\sum_{i=1}^{m_f}\Theta_{ki}^{[n-1]}S_i^0+\sum_{i=m_f+1}^m\Theta_{ki}^{[n-1]}S_i^{n}$. Further, $S_k^{n-1} (k>m_f)$ can be fed into the same process to obtain $S_k^{n-2}$. In general, we can run the process in Eq.39 backward until obtaining $S_k^1$: $$ S_k^{q}=\sum_{i=1}^{m_f}\Theta_{ki}^{[q]}S_i^0+\sum_{i=m_f+1}^m\Theta_{ki}^{[q]}S_i^{q+1}, \quad q = 1,\cdots,n-1 \tag{39} $$

In the end, we utilize $S_k^i$ and $S_k^f$ to construct $S_k^0$ and $S_k^n$:

$$ \begin{aligned} S_k^0(k\leq m_f) &=S_k^i, \\ S_k^0(k>m_f) &=S_k^f, \\ S_k^n(k\leq m_f) &=S_k^f, \\ S_k^n(k>m_f) &=S_k^i. \end{aligned} \tag{40} $$

Till here, we have provided access to all states before and after the interactions and have thus concluded the backward-propagation process.
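As an independent sanity check of the forward- and backward-propagation machinery, a short chain of interactions between one forward-mover and one backward-mover can be brute-forced by collecting the two balance equations of every site into one linear system. This is a reference sketch with illustrative strengths, not the package's algorithm:

```python
import numpy as np

def solve_chain(betas, s1_in, s2_in):
    """Brute-force reference for a two-flow chain of n interactions.
    Unknowns: S1^1..S1^n and S2^0..S2^(n-1); knowns: S1^0 and S2^n."""
    n = len(betas)
    A = np.zeros((2 * n, 2 * n))      # unknown vector x = [S1^1..S1^n, S2^0..S2^(n-1)]
    b = np.zeros(2 * n)

    def s1(q):   # column index of S1^q, or None if it is a known input
        return None if q == 0 else q - 1
    def s2(q):   # column index of S2^q, or None if it is a known input
        return None if q == n else n + q

    for q in range(1, n + 1):
        beta = betas[q - 1]
        M = np.array([[1 - beta / 2, beta / 2], [beta / 2, 1 - beta / 2]])
        # site q: [S1^q, S2^(q-1)]^T = M @ [S1^(q-1), S2^q]^T
        for row, out in enumerate((s1(q), s2(q - 1))):
            r = 2 * (q - 1) + row
            A[r, out] = 1.0
            for col, (idx, known) in enumerate(((s1(q - 1), s1_in), (s2(q), s2_in))):
                if idx is None:
                    b[r] += M[row, col] * known   # known boundary input
                else:
                    A[r, idx] -= M[row, col]      # coupling to another unknown
    return np.linalg.solve(A, b)

x = solve_chain([0.3, 0.9, 0.5], s1_in=1.0, s2_in=0.0)
s1_out, s2_out = x[2], x[3]               # S1^3 (right end) and S2^0 (left end)
assert np.isclose(s1_out + s2_out, 1.0)   # total conserved across the chain
assert (x >= 0).all() and (x <= 1).all()  # all states stay between the inputs
```

Because every site conserves the sum of its two participants, the boundary outputs must conserve the total of the boundary inputs; this holds for the solved chain, as asserted above.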

Code Implementation

The source code of this open-source project is currently hosted on GitHub

Architecture

For the Python implementation, two classes are designed: System and Edge. In the class Edge, the method trans_mat implements the forward-propagation process and returns the matrix representation of the combined effect of the interaction sequence represented by this Edge instance. Another method, status_check, implements the forward- and backward-propagation processes together and returns all the states along the sequence. To initialize an instance of the class System, we first need to initialize several Edge instances as input. The System class accounts for a real system that contains not only sequences of interactions but also terminals, which separate the edges. The method mastermat builds the master matrix equation, and the method solve returns its solution - a vector of terminal voltages.

Dynamic Programming for Diverting Edge States

We would like to highlight the implementation of dynamic programming, which allows edge states to be diverted into different paths. In a standard Landauer-Büttiker system, all edge states enter each terminal without exception. Electronic measurements are only performed via terminals, which measure all edge states at once. Consequently, the measurement result is an average of all edge states in terms of electrochemical potential.

All flows follow the same path and enter each terminal in a six-terminal system

To extract additional information about edge states, it is advisable to block some of them from entering particular terminals, leading to different paths.

Flows diverted into different paths in a six-terminal system

In other words, these additional controls imposed on the edge states may reveal hidden information that is not detectable in the conventional configuration (a standard Landauer-Büttiker system). However, a complex blockage pattern may also make it difficult to define the blocked edge states, because the information carried by these edge states, not detectable by terminals, spreads into other edge states via interactions, directly or indirectly. An algorithm is developed to resolve this complexity recursively.

Flow chart of dynamic programming

General Usage

The design principle for usage is to facilitate the bottom-up construction of a system from its building block - the interaction. Here we introduce how to build a system step by step with the existing infrastructure from the package counterfusion.

A single interaction can be defined with the Python snippet below:

from counterfusion import interaction_builder
left_stochastic_matrix = interaction_builder(dim=2,id1=0,id2=1,value=[0.1,0.3])
right_stochastic_matrix = left_stochastic_matrix.T
doubly_stochastic_matrix = interaction_builder(dim=2,id1=0,id2=1,value=0.5)

An edge containing a sequence of interactions can be defined:

from counterfusion import generate_bynumber, Edge
#===============================================================
# General information for the edge - Hyperparameters
totalNumMover = 4
numForwardMover = 2
initStates = [1,1,0.2,0.2]
#===============================================================
# Information of scattering events 
# Interaction parameters
v03 = 0.3
v01 = 0.5
v23 = 0.8
edgeDef = [[0,3,v03,10],[0,1,v01,10],[2,3,v23,10]]
edgeInfo = generate_bynumber(edgeDef)
edge = Edge(edgeInfo,totalNumMover,numForwardMover)

A six-edge (terminal) system can be defined:

from counterfusion import *
# Define a six-terminal system
# C1--M1--C2--M2--C3--M3--C4--M4--C5--M5--C6--M6--C1
# Total number of edge states: 4
# Number of forward-moving edge states: 2 (#0,#1)
# Number of backward-moving edge states: 2 (#2,#3)
#===============================================================
# General information for the system - Hyperparameters
totalNumMover = 4
numForwardMover = 2
zeroVoltTerminal = 3
#===============================================================
# Information of scattering events 
# Interaction parameters
v02 = 0.9
v13 = 0.7
v12 = 0.3
# Define interaction between nodes (contacts)
# C1--M1--C2
edgeDef1 = [[0,2,v02,10],[1,3,v13,10]]
# C2--M2--C3
edgeDef2 = [[0,2,v02,10],[1,2,v12,10]]
# C3--M3--C4
edgeDef3 = [[0,2,v02,10],[1,3,v13,10]]
# C4--M4--C5
edgeDef4 = [[0,2,v02,10],[1,2,v12,10]]
# C5--M5--C6
edgeDef5 = [[0,2,v02,10],[1,3,v13,10]]
# C6--M6--C1
edgeDef6 = [[0,2,v02,10],[1,2,v12,10]]
#================================================================
edgesDef = [edgeDef1,edgeDef2,edgeDef3,edgeDef4,edgeDef5,edgeDef6]
edgesInfo = []
for edgeDef in edgesDef:
    edgesInfo.append(generate_bynumber(edgeDef))
graph = []
for edgeInfo in edgesInfo:
    graph.append(Edge(edgeInfo,totalNumMover,numForwardMover))
nodesCurrent = [1,0,0,-1,0,0]
sys = System(nodesCurrent,graph,numForwardMover,zeroVoltTerminal)

Separate paths of edge states can be realized by editing blockStates:

# The definition of blockStates should strictly follow this rule: 
# [[index_of_terminal#1,[all blocked states in this terminal]],[index_of_terminal#2,[all blocked states in this terminal]],...]
blockStates = [[1,[0]],[0,[2]],[2,[3]],[3,[1]]]
sys = System(nodesCurrent,graph,numForwardMover,zeroVoltTerminal,blockStates)

Lastly, we briefly show how easy it is to build a system from scratch and read the visualization of results in one click.

Setting up a system and reading the visualization of results

Citation

Cited as:

Wang, Lixian. (Nov 2023). A Duo Traveling Reversely. https://lixianphys.github.io/posts/interact_counter_flow/.

Or

@article{wang2023duo,
  title   = "A Duo Traveling Reversely.",
  author  = "Wang, Lixian",
  journal = "lixianphys.github.io",
  year    = "2023",
  month   = "Nov",
  url     = "https://lixianphys.github.io/posts/interact_counter_flow/"
}