Finished the method section.

Janita Willumsen 2023-12-01 12:28:08 +01:00
parent abb13ef2da
commit 33298b4f8f
3 changed files with 113 additions and 115 deletions

@@ -91,7 +91,7 @@
% Statistical physics bridge the gap between microscopic and macroscopic world, using microscopic theory we can derive thermodynamic quantities.
\section{Methods}\label{sec:methods}
% \subsection{The Ising Model}\label{subsec:ising_model}
% The Ising model in two dimensions consist of a lattice of spins, and can be though of as atoms in a grid. The length of the lattice is given by $L$, and the number of spins within a lattice is given by $N = L^{2}$. When we consider the entire lattice, the spin configuration, is given by $\mathbf{s} = [s_{1}, s_{2}, \dots, s_{N}]$. The total number of spin configurations, or microstates, is $2^{N}$. A given spin $i$ can take one of two possible discrete values $s_{i} \in \{-1, +1\}$, either spin down or spin up.
@@ -236,7 +236,19 @@
% \end{table}
% \subsection{Phase transition and critical temperature}\label{subsec:phase_temp}
At the critical
temperature the heat capacity $C_{V}$ and the magnetic susceptibility $\chi$ diverge \cite[p. 431]{hj:2015:comp_phys}.
Based on a $2 \times 2$ lattice, we can show that the lowest total
energy is the energy of the configuration where all spins point up \cite[p. 426]{hj:2015:comp_phys}.
When a ferromagnetic material is heated past this temperature, it changes at a
macroscopic level and loses its spontaneous magnetization. We can describe
the behavior of the physical system, close to $T_{c}$, using power laws and critical
exponents. For an Ising model of infinite lattice size in two dimensions we have
\begin{align}
\langle |m| \rangle &\propto |T - T_{c}(L = \infty)|^{\beta} \ , \\
C_{V} &\propto |T - T_{c}(L = \infty)|^{-\alpha} \ , \\
\chi &\propto |T - T_{c}(L = \infty)|^{-\gamma} \ ,
\end{align}
where $\beta$, $\alpha$, and $\gamma$ are the critical exponents.
% \subsection{The Markov chain Monte Carlo method}\label{subsec:mcmc}
% We will generate samples from the Ising model, to approximate the probability distribution of the sample set. We sample using the Markov chain Monte Carlo method. Markov chains consist of a sequence of samples, where the probability of the next sample depend on the probability of the current sample. Whereas the Monte Carlo method, introduces randomness to the sampling to determine statistical quantities \cite{tds:2021:mcmc}.

@@ -52,7 +52,7 @@
\begin{document}
\title{Exploring Phase Transitions in the Two-Dimensional Ising Model} % self-explanatory
\author{Cory Alexander Balaton \& Janita Ovidie Sandtrøen Willumsen \\ \faGithub \, \url{https://github.uio.no/FYS3150-G2-2023/Project-4}} % self-explanatory
\date{\today} % self-explanatory
\noaffiliation % ignore this, but keep it.

@@ -6,19 +6,39 @@
The Ising model consists of a lattice of spins, which can be thought of as atoms
in a grid. In two dimensions, the length of the lattice is given by $L$, and the
number of spins within a lattice is given by $N = L^{2}$. When we consider the
entire lattice, the system spin configuration is represented as an $L \times L$ matrix
\begin{align*}
\mathbf{s} &= \begin{pmatrix}
s_{1,1} & s_{1,2} & \dots & s_{1,L} \\
s_{2,1} & s_{2,2} & \dots & s_{2,L} \\
\vdots & \vdots & \ddots & \vdots \\
s_{L,1} & s_{L,2} & \dots & s_{L,L}
\end{pmatrix} \ .
\end{align*}
The total number of possible spin configurations, also called microstates, is $2^{N}$.
A given spin $i$ can take one of two possible discrete values $s_{i} \in \{-1, +1\}$,
where $-1$ represents spin down and $+1$ spin up. Each spin interacts with its nearest
neighbors, and in a two-dimensional lattice each spin has up to four nearest neighbors.
However, the model is not restricted to this dimensionality \cite[p. 3]{obermeyer:2020:ising}.
In our experiment we will use periodic boundary conditions, meaning all spins
have exactly four nearest neighbors. To find the analytical expressions necessary
for validating our model implementation, we will assume a $2 \times 2$ lattice.
The Hamiltonian of the Ising model is given by
\begin{equation}
E(\mathbf{s}) = -J \sum_{\langle k l \rangle}^{N} s_{k} s_{l} - B \sum_{k}^{N} s_{k}\ ,
\label{eq:energy_hamiltonian}
\end{equation}
where $\langle k l \rangle$ denotes a pair of neighboring spins, $J$ is the coupling
constant, and $B$ is the external magnetic field. For simplicity, we will consider
the Ising model where $B = 0$, and find the total system energy given by
\begin{equation}
E(\mathbf{s}) = -J \sum_{\langle k l \rangle}^{N} s_{k} s_{l} \ .
\label{eq:energy_total}
\end{equation}
To avoid counting pairs twice, we count the neighboring spins using the pattern
visualized in Figure \ref{fig:tikz_boundary}.
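As an illustration, Equation \eqref{eq:energy_total} with this pair-counting pattern can be sketched in C++ by visiting only the right and down neighbor of every spin. This is a minimal sketch with our own function name and storage layout, not the project's actual implementation:

```cpp
#include <vector>

// Total energy E(s) of an L x L lattice with periodic boundary
// conditions and B = 0. Each nearest-neighbour pair is counted
// exactly once by visiting only the right and down neighbour of
// every spin. Spins are stored row-major as +1 / -1; J is the
// coupling constant. Illustrative sketch only.
double total_energy(const std::vector<int>& s, int L, double J) {
    double E = 0.0;
    for (int i = 0; i < L; ++i) {
        for (int j = 0; j < L; ++j) {
            int spin  = s[i * L + j];
            int right = s[i * L + (j + 1) % L];
            int down  = s[((i + 1) % L) * L + j];
            E -= J * spin * (right + down);
        }
    }
    return E;
}
```

For the $2 \times 2$ lattice this reproduces the analytical values: the all-up configuration gives $-8J$, while flipping a single spin gives $0$.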
\begin{figure}
\centering
\begin{tikzpicture}
@@ -46,25 +66,18 @@ We count the neighboring spins as visualized in Figure \ref{fig:tikz_boundary},
$2 \times 2$ lattice, where periodic boundary conditions are applied.}
\label{fig:tikz_boundary}
\end{figure}
We also find the total magnetization of the system, which is given by
\begin{equation}
M(\mathbf{s}) = \sum_{i}^{N} s_{i} \ .
\label{eq:magnetization_total}
\end{equation}
In addition, we have to consider the state degeneracy, the number of different
microstates sharing the same value of total magnetization. In the case where we
have two spins oriented up, the total energy has two possible values, as shown
in Appendix \ref{sec:energy_special}.
\begin{table}[H]
\centering
\begin{tabular}{cccc}
\hline
Spins up & $E(\mathbf{s})$ & $M(\mathbf{s})$ & Degeneracy \\
\hline
@@ -81,11 +94,9 @@ in \ref{sec:energy_special}.
conditions.}
\label{tab:lattice_config}
\end{table}
We calculate the analytical values for a $2 \times 2$ lattice, found in Table \ref{tab:lattice_config}.
However, we scale the total energy and total magnetization by the number of spins,
to compare these quantities for lattices where $L > 2$. Energy per spin is given by
\begin{equation}
\epsilon (\mathbf{s}) = \frac{E(\mathbf{s})}{N} \ ,
\label{eq:energy_spin}
@@ -98,9 +109,9 @@ and magnetization per spin given by
\subsection{Statistical mechanics}\label{subsec:statistical_mechanics}
When we study the behavior of the Ising model, we have to consider the probability
of a microstate $\mathbf{s}$ at a fixed temperature $T$. The probability distribution
function (pdf) is given by
\begin{equation}
p(\mathbf{s} \ | \ T) = \frac{1}{Z} e^{-\beta E(\mathbf{s})} \ ,
\label{eq:boltzmann_distribution}
@@ -122,15 +133,14 @@ which gives us
\begin{equation*}
Z = 4 \cosh (8 \beta J) + 12 \ .
\end{equation*}
Using the partition function and Equation \eqref{eq:boltzmann_distribution}, the
probability of a microstate at a fixed temperature is given by
\begin{equation}
p(\mathbf{s} \ | \ T) = \frac{1}{4 \cosh (8 \beta J) + 12} e^{-\beta E(\mathbf{s})} \ .
\label{eq:pdf}
\end{equation}
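The closed form of $Z$ can be checked by brute force, since the $2 \times 2$ lattice has only $2^{4} = 16$ microstates. The following sketch (our own names, not the project's code) enumerates them and sums the Boltzmann weights:

```cpp
#include <cmath>

// Brute-force partition function for the 2 x 2 Ising model with
// periodic boundary conditions: enumerate all 16 microstates and sum
// the Boltzmann weights e^{-beta E(s)}. The result should agree with
// the closed form Z = 4 cosh(8 beta J) + 12. Illustrative sketch.
double partition_2x2(double beta, double J) {
    double Z = 0.0;
    for (int m = 0; m < 16; ++m) {
        int s[4];  // s[0..3] = s11, s12, s21, s22
        for (int b = 0; b < 4; ++b)
            s[b] = ((m >> b) & 1) ? 1 : -1;
        // With periodic boundaries on a 2 x 2 lattice every pair is
        // counted twice, so E = -2J (s11 s12 + s11 s21 + s12 s22 + s21 s22).
        double E = -2.0 * J *
                   (s[0] * s[1] + s[0] * s[2] + s[1] * s[3] + s[2] * s[3]);
        Z += std::exp(-beta * E);
    }
    return Z;
}
```

The sum reproduces the spectrum $\{-8J, 0, +8J\}$ with degeneracies $\{2, 12, 2\}$, hence $Z = 2e^{8 \beta J} + 12 + 2e^{-8 \beta J} = 4 \cosh(8 \beta J) + 12$.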
To describe the macroscopic behavior of the system, we use expectation values.
We find the expected total energy
\begin{equation}\label{eq:energy_total_result}
\langle E \rangle = -\frac{8J \sinh(8 \beta J)}{\cosh(8 \beta J) + 3} \ ,
\end{equation}
@@ -146,23 +156,25 @@ and the expected magnetization per spin
\begin{equation}\label{eq:magnetization_spin_result}
\langle |m| \rangle = \frac{e^{8 \beta J} + 1}{2( \cosh(8 \beta J) + 3)} \ .
\end{equation}
We derive the analytical expressions for expectation values in Appendix
\ref{sec:expectation_values}.
We also need to determine the heat capacity
\begin{equation}
C_{V} = \frac{1}{k_{B} T^{2}} (\mathbb{E}(E^{2}) - [\mathbb{E}(E)]^{2}) \ ,
\label{eq:heat_capacity}
\end{equation}
and the magnetic susceptibility
\begin{equation}
\chi = \frac{1}{k_{B} T} (\mathbb{E}(M^{2}) - [\mathbb{E}(M)]^{2}) \ .
\label{eq:susceptibility}
\end{equation}
In Appendix \ref{sec:heat_susceptibility} we derive expressions for the heat
capacity and the susceptibility, and find the heat capacity
\begin{align*}
\frac{C_{V}}{N} &= \frac{64J^{2} }{N k_{B} T^{2}} \bigg( \frac{3\cosh(8 \beta J) + 1}{(\cosh(8 \beta J) + 3)^{2}} \bigg) \ ,
\end{align*}
and the magnetic susceptibility
\begin{align*}
\frac{\chi}{N} &= \frac{4}{N k_{B} T} \bigg( \frac{3e^{8 \beta J} + e^{-8 \beta J} + 3}{(\cosh(8 \beta J) + 3)^{2}} \bigg) \ .
\end{align*}
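The closed form for the heat capacity can be cross-checked numerically by computing the thermal variance of the energy directly from the three energy levels of the $2 \times 2$ lattice. A sketch in units where $k_{B} = J = 1$ (our own names, not the project's code):

```cpp
#include <cmath>

// Total heat capacity C_V of the 2 x 2 lattice from a direct thermal
// average over the energy levels E in {-8, 0, +8} (units J = 1) with
// degeneracies {2, 12, 2}: C_V = (<E^2> - <E>^2) / (kB T^2), kB = 1.
// Should agree with the analytic 64 (3 cosh(8/T) + 1) /
// (T^2 (cosh(8/T) + 3)^2). Illustrative sketch.
double heat_capacity_2x2(double T) {
    double beta = 1.0 / T;
    double E[3] = {-8.0, 0.0, 8.0};
    double g[3] = {2.0, 12.0, 2.0};
    double Z = 0.0, E1 = 0.0, E2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        double w = g[k] * std::exp(-beta * E[k]);  // Boltzmann weight
        Z  += w;
        E1 += E[k] * w;
        E2 += E[k] * E[k] * w;
    }
    E1 /= Z;
    E2 /= Z;
    return (E2 - E1 * E1) / (T * T);
}
```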
@@ -190,26 +202,26 @@ Boltzmann constant we derive the remaining units, which can be found in Table
\subsection{Phase transition and critical temperature}\label{subsec:phase_critical}
We consider the Ising model in two dimensions, with no external magnetic
field. At temperatures below the critical temperature $T_{c}$, the Ising model
magnetizes spontaneously. When the temperature increases, the Ising model
transitions from an ordered to a disordered phase. The spins become more
correlated, and we can measure this behavior as an increase in the correlation
length $\xi (T)$ \cite[p. 432]{hj:2015:comp_phys}. At $T_{c}$, the correlation
length is proportional to the lattice size, resulting in the critical
temperature scaling relation
\begin{equation}
T_{c}(L) - T_{c}(L = \infty) = aL^{-1} \ ,
\end{equation}
where $a$ is a constant. For the Ising model in two dimensions, with a lattice of
infinite size, the critical temperature is
\begin{equation}
T_{c}(L = \infty) = \frac{2}{\ln (1 + \sqrt{2})} J/k_{B} \approx 2.269 J/k_{B} \ .
\end{equation}
This result was found analytically by Lars Onsager in 1944. We can estimate the
critical temperature of an infinite lattice using the critical temperatures of
finite lattices and linear regression.
At the critical temperature the heat capacity $C_{V}$ and the magnetic
susceptibility $\chi$ diverge \cite[p. 431]{hj:2015:comp_phys}.
We can describe the behavior of the physical system close to the critical temperature
using power laws and critical exponents. For an Ising model of infinite lattice size
in two dimensions we have
\begin{align}
\langle |m| \rangle &\propto |T - T_{c}(L = \infty)|^{\beta} \ , \\
C_{V} &\propto |T - T_{c}(L = \infty)|^{-\alpha} \ , \\
\chi &\propto |T - T_{c}(L = \infty)|^{-\gamma} \ ,
\end{align}
where $\beta$, $\alpha$, and $\gamma$ are the critical exponents.
\subsection{The Markov chain Monte Carlo method}\label{subsec:mcmc_method}
Markov chains consist of a sequence of samples, where the probability of the next
@@ -225,20 +237,25 @@ until the model reaches an equilibrium state. However, generating new random sta
require ergodicity and detailed balance. A Markov chain is ergodic when all system
states can be reached from every current state, whereas detailed balance implies no
net flux of probability. To satisfy these criteria we use the Metropolis-Hastings
algorithm, found in Figure \ref{algo:metropolis}, to generate samples of microstates.
One Monte Carlo cycle consists of changing the current configuration of the lattice
by randomly flipping a spin. When a spin is flipped, the change in energy is evaluated as
\begin{align*}
\Delta E &= E_{\text{after}} - E_{\text{before}} \ ,
\end{align*}
and the flip is accepted if $\Delta E \leq 0$. However, if $\Delta E > 0$ we have to compute
the Boltzmann factor
\begin{equation}
p(\mathbf{s} | T) = e^{-\beta \Delta E} \ ,
\label{eq:boltzmann_factor}
\end{equation}
and accept the flip only if a uniformly drawn random number $r \in [0, 1)$ satisfies
$r \leq e^{-\beta \Delta E}$.
Since the total system energy only takes three different values, the change in
energy can take $3^{2}$ values. However, there are only five distinct values,
$\Delta E \in \{-16J, -8J, 0, 8J, 16J\}$; we derive these values in Appendix \ref{sec:delta_energy}.
We can avoid computing the Boltzmann factor by using a lookup table (LUT).
We use $\Delta E$ to compute an index into an array of precomputed values of
the exponential function.
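For the $2 \times 2$ case, where $\Delta E \in \{-16J, -8J, 0, 8J, 16J\}$, such a table can be sketched as follows (our own names and index mapping; the actual implementation may index differently):

```cpp
#include <array>
#include <cmath>

// Precompute e^{-beta dE} for the five possible energy changes of a
// single spin flip on the 2 x 2 lattice, dE in {-16J, -8J, 0, 8J, 16J}.
// Mapping dE to the index dE / (8J) + 2 avoids evaluating the
// exponential inside the sampling loop. Illustrative sketch.
std::array<double, 5> boltzmann_lut(double beta, double J) {
    std::array<double, 5> lut;
    for (int k = 0; k < 5; ++k) {
        double dE = (k - 2) * 8.0 * J;
        lut[k] = std::exp(-beta * dE);
    }
    return lut;
}

// Look up the acceptance factor for a given energy change.
double boltzmann_factor(const std::array<double, 5>& lut, double dE, double J) {
    int index = static_cast<int>(dE / (8.0 * J)) + 2;
    return lut[index];
}
```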
\begin{figure}[H]
\begin{algorithm}[H]
\caption{Metropolis-Hastings Algorithm}
@@ -262,66 +279,35 @@ $\Delta E = \{-16J, -8J, 0, 8J, 16J\}$, we derive these values in Appendix \ref{
\end{algorithmic}
\end{algorithm}
\end{figure}
The Markov process reaches an equilibrium after a certain number of Monte Carlo cycles,
where the system state reflects the state of a real system. After this burn-in time,
measured in Monte Carlo cycles, we can start sampling microstates.
The probability distribution of the samples will tend toward the actual probability
distribution of the system.
\subsection{Implementation and testing}\label{subsec:implementation_test}
We implemented a test suite, and compared the numerical estimates to the analytical
results from Section \ref{subsec:statistical_mechanics}. In addition, we set a
tolerance to verify convergence, and a maximum number of Monte Carlo cycles to
avoid long runtimes during implementation.
We avoid the overhead of if-tests, and take advantage of the parallelization, by
defining a pattern to access all neighboring spins. Using an $L \times 2$ matrix
containing all indices to the neighbors, and pre-defined constants, we find the
indices of the neighboring spins. The first column contains the indices for neighbors
to the left and up, and the second column for neighbors to the right and down.
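The neighbor pattern can be sketched as a precomputed table of periodic indices, here stored as a vector of pairs rather than an $L \times 2$ matrix (our own names; a simplified stand-in for the actual implementation):

```cpp
#include <utility>
#include <vector>

// Precompute, for every lattice coordinate 0..L-1, its periodic
// neighbours: .first is the left/up coordinate and .second the
// right/down coordinate, mirroring the two columns of the L x 2
// index matrix described above. The sampling loop then needs no
// boundary if-tests. Illustrative sketch.
std::vector<std::pair<int, int>> neighbour_table(int L) {
    std::vector<std::pair<int, int>> nb(L);
    for (int i = 0; i < L; ++i) {
        nb[i].first  = (i - 1 + L) % L;  // left / up, wraps at 0
        nb[i].second = (i + 1) % L;      // right / down, wraps at L-1
    }
    return nb;
}
```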
We parallelize our code using both a message passing interface (OpenMPI) and
multi-threading (OpenMP). First, we divide the temperatures into smaller ranges,
and each MPI process receives a set of temperatures. Every MPI process spawns a
set of threads, each of which initializes an Ising model and performs the
Metropolis-Hastings algorithm.
\subsection{Tools}\label{subsec:tools}
The Ising model and MCMC methods are implemented in C++, and parallelized using
both \verb|OpenMPI| \cite{gabriel:2004:open_mpi} and \verb|OpenMP| \cite{openmp:2018}. We used the Python library
\verb|matplotlib| \cite{hunter:2007:matplotlib} to produce all the plots, and
\verb|seaborn| \cite{waskom:2021:seaborn} to set the theme in the figures. In
addition, we used \verb|Scalasca| \cite{scalasca} and \verb|Score-P| \cite{scorep}