Merge branch 'janitaws/latex' into coryab/code

This commit is contained in:
Cory Balaton 2023-12-05 18:23:26 +01:00
commit f50a55c764
No known key found for this signature in database
GPG Key ID: 3E5FCEBFD80F432B
53 changed files with 80594 additions and 343 deletions

File diff suppressed because it is too large Load Diff


Binary file not shown.


BIN
latex/images/profiling.png Normal file



View File

@ -77,28 +77,32 @@
\clearpage
\newpage
% Appendix
\subfile{sections/appendices}
\clearpage
\onecolumngrid
\bibliographystyle{unsrtnat}
\bibliography{references}
% Appendix
\subfile{sections/appendices}
\end{document}
% Method
% Problem 1: OK
% Problem 2: OK
% Problem 3: OK - missing theory of phase transition
% Problem 4: validate numerical results using the analytical results
% Problem 7: division of workload mpi and openmp
% Problem 9: rewrite theoretical background of phase transition and critical temperature
% Problem 3: OK
% Problem 4: OK
% Problem 9: OK
% Results
% Problem 5: burn-in time
% Problem 6: estimate probability distribution based on histogram
% Problem 6: OK
% Problem 7: OK
% Problem 8: investigate phase transitions, and estimate critical temperature based on plot
% Problem 9: use critical temperature to estimate the critical temperature of infinite lattice
%
%
% Appendices
% Problem 0: move in front of references, and change back to double column

View File

@ -6,16 +6,16 @@
temperature, when undergoing phase transition. To generate spin configurations,
we used the Metropolis-Hastings algorithm, which applies a Markov chain Monte
Carlo sampling method. We determined the time of equilibrium to be approximately
3000 Monte Carlo cycles, and used the following samples to find the probability
$5000$ Monte Carlo cycles, and used the following samples to find the probability
distribution at temperatures $T_{1} = 1.0 J / k_{B}$, and $T_{2} = 2.4 J / k_{B}$.
For $T_{1}$ the mean energy per spin is $\langle \epsilon \rangle \approx -1.9969 J$,
with a variation $\text{Var} (\epsilon) = 0.0001$. And for $T_{2}$, close to the critical
with a variance $\text{Var} (\epsilon) = 0.0001$. And for $T_{2}$, close to the critical
temperature, the mean energy per spin is $\langle \epsilon \rangle \approx -1.2370 J$,
with a variation $\text{Var} (\epsilon) = 0.0203$. In addition, we estimated
with a variance $\text{Var} (\epsilon) = 0.0203$. In addition, we estimated
the expected energy and magnetization per spin, the heat capacity and magnetic
susceptibility. We estimate the critical temperatures of finite lattice sizes,
susceptibility. We have estimated the critical temperatures of finite lattice sizes,
and used these values to approximate the critical temperature of a lattice of
infinite size. Using linear regression, we estimated the critical temperature
to be $T_{c}(L = \infty) \approx 2.2695 J/k_{B}$.
to be $T_{c}^{*}(L = \infty) \approx 2.2693 J/k_{B}$.
\end{abstract}
\end{document}

View File

@ -40,12 +40,15 @@ the spin up as visualized in Figure \ref{fig:tikz_neighbor}.
Using the values estimated for the $2 \times 2$ case, found in Table \ref{tab:lattice_config},
we find the partition function
\begin{align*}
Z &= 1 \cdot e^{-\beta (-8J)} + 4 \cdot e^{-\beta (0)} + 4 \cdot e^{-\beta (0)} + 2 \cdot e^{-\beta (8J)} \\
& \quad + 4 \cdot e^{-\beta (0)} 1 \cdot e^{-\beta (-8J)} \\
Z &= 1 \cdot e^{-\beta (-8J)} + 4 \cdot e^{-\beta (0)} + 4 \cdot e^{-\beta (0)} \\
& \quad + 2 \cdot e^{-\beta (8J)} + 4 \cdot e^{-\beta (0)} + 1 \cdot e^{-\beta (-8J)} \\
&= 2e^{8 \beta J} + 2e^{-8 \beta J} + 12.
\end{align*}
We rewrite the expression using the identity $\cosh(8 \beta J) = 1/2 \big( e^{8 \beta J} + e^{-8 \beta J})$,
and get
We rewrite the expression using the identity
\begin{align*}
\cosh(8 \beta J) &= \frac{1}{2} \big( e^{8 \beta J} + e^{-8 \beta J} \big)
\end{align*}
and find
\begin{align*}
Z &= 4 \cosh (8 \beta J) + 12 \ .
\end{align*}
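As a sanity check, this closed form can be compared against a brute-force enumeration of all $2^{4}$ microstates. The Python sketch below is purely illustrative (the report's implementation is in C++); the factor of two in the energy reflects the periodic $2 \times 2$ lattice, where each neighbor pair appears twice:

```python
import itertools
import math

def energy_2x2(s, J=1.0):
    """Total energy of a 2x2 lattice with periodic boundaries.

    With L = 2, wrapping makes every horizontal and vertical
    neighbor pair appear twice, hence the factor 2.
    """
    s11, s12, s21, s22 = s
    return -2.0 * J * (s11 * s12 + s21 * s22 + s11 * s21 + s12 * s22)

def partition_brute(beta, J=1.0):
    # Sum the Boltzmann factor over all 2**4 microstates.
    return sum(math.exp(-beta * energy_2x2(s, J))
               for s in itertools.product((-1, +1), repeat=4))

def partition_closed(beta, J=1.0):
    # Closed form derived above: Z = 4 cosh(8 beta J) + 12.
    return 4.0 * math.cosh(8.0 * beta * J) + 12.0

beta = 0.5
assert abs(partition_brute(beta) - partition_closed(beta)) < 1e-9
```

The same enumeration also reproduces the degeneracies in the configuration table (two states with $E = \pm 8J$, twelve with $E = 0$).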
@ -55,12 +58,10 @@ and get
For a linear function of a stochastic random variable $X$, with a known probability
distribution, the expected value of $x$ is given by
\begin{align*}
\mathbb{E}(aX + b) &= a \cdot \mathbb{E}(X) + b & \text{\cite[p. 131]{springer:2012:modernstat}}
\langle aX + b \rangle &= a \cdot \langle X \rangle + b & \text{\cite[p. 131]{springer:2012:modernstat}}
\end{align*}
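The linearity of the expectation value can be illustrated with any small discrete distribution (the numbers below are hypothetical, chosen only for illustration):

```python
# E[aX + b] = a E[X] + b for a discrete X with probabilities p.
xs = [-8.0, 0.0, 8.0]   # example outcomes (e.g. 2x2 energies in units of J)
ps = [0.7, 0.2, 0.1]    # example probabilities, summing to 1

a, b = 0.25, 3.0        # e.g. a = 1/N when converting to a per-spin quantity
ex = sum(p * x for p, x in zip(ps, xs))                 # E[X]
lhs = sum(p * (a * x + b) for p, x in zip(ps, xs))      # E[aX + b]
assert abs(lhs - (a * ex + b)) < 1e-12
```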
In our case the discrete random variable is the spin configuration, and we want
to find the expected value of the function $E(\mathbf{s})$. Continuing, we will
use the notation $\langle E \rangle$ for the expectation value of a given variable,
in this case $E$.
to find the expected value of the function $E(\mathbf{s})$.
Both energy per spin and magnetization per spin are functions of $\mathbf{s}$.
In addition, the number of spins is given as a constant for each lattice. We can
@ -75,9 +76,10 @@ The same applies to magnetization per spin
\begin{align*}
\langle |m| \rangle = \frac{1}{N} \sum_{i=1}^{N} |M(s_{i})| p(s_{i} \ | \ T) \ .
\end{align*}
Continuing with the expectation values for a $2 \times 2$ lattice, excluding the terms which give zero we get
Continuing with the expectation values for a $2 \times 2$ lattice, excluding the terms which result in zero, we get
\begin{align*}
\langle E \rangle &= (-8J) \cdot \frac{1}{Z} e^{8 \beta J} + 2 \cdot (8J) \cdot \frac{1}{Z} e^{-8 \beta J} + (-8J) \cdot \frac{1}{Z} e^{8 \beta J} \\
\langle E \rangle &= (-8J) \cdot \frac{1}{Z} e^{8 \beta J} + 2 \cdot (8J) \cdot \frac{1}{Z} e^{-8 \beta J} \\
& \quad + (-8J) \cdot \frac{1}{Z} e^{8 \beta J} \\
&= \frac{16J}{Z} \big(e^{-8 \beta J} - e^{8 \beta J}) \\
&= -\frac{32J \sinh(8 \beta J)}{4(\cosh(8 \beta J) + 3)} \\
&= -\frac{8J \sinh(8 \beta J)}{\cosh(8 \beta J) + 3} \ ,
@ -90,9 +92,10 @@ and
&= \frac{4 (2e^{8 \beta J} + 4)}{4(\cosh(8 \beta J) + 3)} \\
&= \frac{2(e^{8 \beta J} + 2)}{\cosh(8 \beta J) + 3} \ .
\end{align*}
The squared function
The squared energy and magnetization functions are then
\begin{align*}
\langle E^{2} \rangle &= (-8J)^{2} \cdot \frac{1}{Z} e^{8 \beta J} + 2 \cdot (8J)^{2} \cdot \frac{1}{Z} e^{-8 \beta J} + (-8J)^{2} \cdot \frac{1}{Z} e^{8 \beta J} \\
\langle E^{2} \rangle &= (-8J)^{2} \cdot \frac{1}{Z} e^{8 \beta J} + 2 \cdot (8J)^{2} \cdot \frac{1}{Z} e^{-8 \beta J} \\
& \quad + (-8J)^{2} \cdot \frac{1}{Z} e^{8 \beta J} \\
&= \frac{128J^{2}}{Z} \big(e^{8 \beta J} + e^{-8 \beta J} \big) \\
&= \frac{128J^{2} \cosh(8 \beta J)}{4(\cosh(8 \beta J) + 3)} \\
&= \frac{64J^{2} \cosh(8 \beta J)}{\cosh(8 \beta J) + 3} \ ,
@ -116,6 +119,7 @@ and
&= \frac{4(e^{8 \beta J} + 2)^{2}}{(\cosh(8 \beta J) + 3)^{2}} \ .
\end{align*}
\section{Heat capacity and magnetic susceptibility}\label{sec:heat_susceptibility}
To find the heat capacity in Equation \eqref{eq:heat_capacity}, we normalize to heat
capacity per spin
@ -123,7 +127,7 @@ capacity per spin
\frac{C_{V}}{N} &= \frac{1}{N} \frac{1}{k_{B} T^{2}} (\mathbb{E}(E^{2}) - [\mathbb{E}(E)]^{2}) \\
&= \frac{1}{N k_{B} T^{2}} \mathbb{V}(E) \ .
\end{align*}
Using Eq. \eqref{eq:susceptibility}, we find he susceptibility per spin
Using Equation \eqref{eq:susceptibility}, we find the susceptibility per spin
\begin{align*}
\frac{\chi}{N} &= \frac{1}{N} \frac{1}{k_{B} T} (\mathbb{E}(M^{2}) - [\mathbb{E}(M)]^{2}) \\
&= \frac{1}{N k_{B} T} \mathbb{V}(M) \ .
@ -149,8 +153,9 @@ and the variance of the total magnetization is given by
\mathbb{V}(M) &= \mathbb{E}(M^{2}) - [\mathbb{E}(|M|)]^{2} \\
&= \frac{8e^{8 \beta J} + 8}{\cosh(8 \beta J) + 3} - \frac{4(e^{8 \beta J} + 2)^{2}}{(\cosh(8 \beta J) + 3)^{2}} \\
&= \frac{(8(e^{8 \beta J} + 1)) \cdot (\cosh(8 \beta J) + 3) - 4(e^{8 \beta J} + 2)^{2}}{(\cosh(8 \beta J) + 3)^{2}} \\
&= \frac{4(e^{8 \beta J} + 1) \cdot (e^{8 \beta J} + e^{-8 \beta J}) + 24(e^{8 \beta J} + 1) - 4(e^{8 \beta J} + 1)^{2}}{(\cosh(8 \beta J) + 3)^{2}} \\
&= \frac{4e^{2(8 \beta J)} + 4e^{8 \beta J} 4e^{0} + 4e^{-8 \beta J} 24e^{8 \beta J} + 24 - 4e^{2(8 \beta J)} - 16e^{8 \beta J} - 16}{(\cosh(8 \beta J) + 3)^{2}} \\
&= \frac{4(e^{8 \beta J} + 1) \cdot (e^{8 \beta J} + e^{-8 \beta J})}{(\cosh(8 \beta J) + 3)^{2}} \\
& \quad + \frac{24(e^{8 \beta J} + 1) - 4(e^{8 \beta J} + 2)^{2}}{(\cosh(8 \beta J) + 3)^{2}}\\
% &= \frac{4e^{2(8 \beta J)} + 4e^{8 \beta J} 4e^{0} + 4e^{-8 \beta J} 24e^{8 \beta J} + 24 - 4e^{2(8 \beta J)} - 16e^{8 \beta J} - 16}{(\cosh(8 \beta J) + 3)^{2}} \\
&= \frac{4(3e^{8 \beta J} + e^{-8 \beta J} + 3)}{(\cosh(8 \beta J) + 3)^{2}} \ .
\end{align*}
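The closed form for the variance can likewise be checked against a brute-force enumeration over the $2 \times 2$ microstates (an illustrative Python sketch, using the definition $\mathbb{V}(M) = \mathbb{E}(M^{2}) - [\mathbb{E}(|M|)]^{2}$ from above; not part of the report's C++ code):

```python
import itertools
import math

def energy_2x2(s, J=1.0):
    # Periodic 2x2 lattice: every neighbor pair appears twice.
    s11, s12, s21, s22 = s
    return -2.0 * J * (s11 * s12 + s21 * s22 + s11 * s21 + s12 * s22)

def var_m_brute(beta, J=1.0):
    # E(M^2) - [E(|M|)]^2 by summing over all 16 microstates.
    states = list(itertools.product((-1, +1), repeat=4))
    weights = [math.exp(-beta * energy_2x2(s, J)) for s in states]
    Z = sum(weights)
    m2 = sum(w * sum(s) ** 2 for w, s in zip(weights, states)) / Z
    m_abs = sum(w * abs(sum(s)) for w, s in zip(weights, states)) / Z
    return m2 - m_abs ** 2

def var_m_closed(beta, J=1.0):
    # Closed form derived above.
    x = 8.0 * beta * J
    return 4.0 * (3.0 * math.exp(x) + math.exp(-x) + 3.0) / (math.cosh(x) + 3.0) ** 2

beta = 0.4
assert abs(var_m_brute(beta) - var_m_closed(beta)) < 1e-9
```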
We find the heat capacity
@ -180,34 +185,43 @@ $\Delta E = E_{\text{after}} - E_{\text{before}}$. We find the $3^{2}$ values as
where the five distinct values are $\Delta E = \{-16J, -8J, 0, 8J, 16J\}$.
\section{Extra}\label{sec:extra_results}
We increased number of MC cycles to 10 million
\begin{figure}
\section{Additional results}\label{sec:additional_results}
We also did the phase transition experiment using 1 million MC cycles. In Figure \ref{fig:phase_energy_1M}
we show expected energy per spin, and in Figure \ref{fig:phase_magnetization_1M}
expected magnetization per spin.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/energy.pdf}
\caption{$\langle \epsilon \rangle$ for $T \in [2.1, 2.4]$, 10000000 MC cycles.}
\label{fig:phase_energy_10M}
\end{figure}
\begin{figure}
\caption{Expected energy per spin $\langle \epsilon \rangle$ for temperatures $T \in [2.1, 2.4]$, and $10^{6}$ MC cycles.}
\label{fig:phase_energy_1M}
\end{figure} %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/magnetization.pdf}
\caption{$\langle |m| \rangle$ for $T \in [2.1, 2.4]$, 10000000 MC cycles.}
\label{fig:phase_magnetization_10M}
\end{figure}
\begin{figure}
\caption{Expected magnetization per spin $\langle |m| \rangle$ for temperatures $T \in [2.1, 2.4]$, and $10^{6}$ MC cycles.}
\label{fig:phase_magnetization_1M}
\end{figure} %
In Figure \ref{fig:phase_heat_1M} we show heat capacity, and in Figure \ref{fig:phase_susceptibility_1M}
the magnetic susceptibility.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/heat_capacity.pdf}
\caption{$C_{V}$ for $T \in [2.1, 2.4]$, 10000000 MC cycles.}
\label{fig:phase_heat_10M}
\end{figure}
\begin{figure}
\caption{Heat capacity $C_{V}$ for temperatures $T \in [2.1, 2.4]$, and $10^{6}$ MC cycles.}
\label{fig:phase_heat_1M}
\end{figure} %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/susceptibility.pdf}
\caption{$\chi$ for $T \in [2.1, 2.4]$, 10000000 MC cycles.}
\label{fig:phase_susceptibility_10M}
\caption{Magnetic susceptibility $\chi$ for temperatures $T \in [2.1, 2.4]$, and $10^{6}$ MC cycles.}
\label{fig:phase_susceptibility_1M}
\end{figure}
The result of profiling using Score-P is shown in Figure \ref{fig:scorep_assessment}.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/profiling.pdf}
\caption{Score-P assessment of parallel efficiency.}
\label{fig:scorep_assessment}
\end{figure}
\end{document}

View File

@ -11,7 +11,7 @@ of threads we observed a speed-up in runtime.
We initialized the lattices using both ordered and unordered spin configuration,
and started sampling after the system reached an equilibrium. The number of Monte
Carlo cycles necessary to reach a system equilibrium, referred to as burn-in time,
was estimated to be 3000 cycles. We found that excluding the samples generated
was estimated to be $5000$ cycles. We found that excluding the samples generated
during the burn-in time, improves the estimated expectation value of energy and
magnetization, in addition to the heat capacity and susceptibility, when samples
are sparse. However, when we increase the number of samples, excluding the burn-in
@ -19,17 +19,17 @@ samples does not affect the estimated values.
Continuing, we used the generated samples to compute energy per spin $\langle \epsilon \rangle$,
magnetization per spin $\langle |m| \rangle$, heat capacity $C_{V}$, and magnetic susceptibility $\chi$.
In addition, we estimated the probability distribution for temperature $T_{1} = 1.0 J / k_{B}$,
In addition, we estimated the probability distribution for temperatures $T_{1} = 1.0 J / k_{B}$,
and $T_{2} = 2.4 J / k_{B}$. We found that for $T_{1}$ the expected mean energy
per spin is $\langle \epsilon \rangle \approx -1.9969 J$, with a variation $\text{Var} (\epsilon) = 0.0001$.
per spin is $\langle \epsilon \rangle \approx -1.9969 J$, with a variance $\text{Var} (\epsilon) = 0.0001$.
And for $T_{2}$, the mean energy per spin is $\langle \epsilon \rangle \approx -1.2370 J$,
with a variation $\text{Var} (\epsilon) = 0.0203$.
with a variance $\text{Var} (\epsilon) = 0.0203$.
We estimated the expected energy and magnetization per spin, in addition to the
heat capacity and susceptibility for lattices of size $L \in \{20, 40, 60, 80, 100\}$.
We observed a phase transition in the temperature range $T \in [2.1, 2.4] J / k_{B}$.
Using the values from the finite lattices, we approximated the critical temperature
of a lattice of infinite size. Using linear regression, we numerically estimated
$T_{c}(L = \infty) \approx 2.2695 J/k_{B}$ which is close to the analytical solution
$T_{C}(L = \infty) \approx 2.269 J/k_{B}$ Lars Onsager found in 1944.
$T_{c}^{*}(L = \infty) \approx 2.2693 J/k_{B}$ which is close to the analytical solution
$T_{c}(L = \infty) \approx 2.269 J/k_{B}$ Lars Onsager found in 1944.
\end{document}

View File

@ -7,9 +7,9 @@ magnetic property of these materials is utilized in devices such as compasses
for navigation, in computer hard drives to store data \cite{yale:2023:hdd}, and
MRI machines to produce detailed anatomical images.
Magnetic materials can be classified based on their specific magnetic behavior,
and one of the groups consist of ferromagnetic materials. These materials are
easily magnetized, and when exposed to a strong magnetic field they can become
Magnetic materials are classified based on their specific magnetic behavior,
and one of the groups consists of ferromagnetic materials. These materials are
easily magnetized, and when exposed to a strong magnetic field, they become
saturated. In addition, when these materials are heated to a critical point, they
lose their magnetic property \cite{britannica:2023:ferromagnetism}.
@ -23,7 +23,7 @@ exposed to temperatures near a critical point. In addition, we will numerically
estimate the critical temperature where the system experiences a phase transition.
In Section \ref{sec:methods}, we will present the theoretical background for
this experiment, as well as algorithms and tools used in the implementation.
Continuing with Section \ref{sec:results}, where we present our results and
discuss our findings. Lastly, we conclude our findings in Section \ref{sec:conclusion}.
this experiment, as well as the algorithms and tools used in the implementation.
Continuing with Section \ref{sec:results}, we will present our results and
discuss our findings. Lastly, we will conclude our findings in Section \ref{sec:conclusion}.
\end{document}

View File

@ -3,8 +3,8 @@
\begin{document}
\section{Methods}\label{sec:methods}
\subsection{The Ising model}\label{subsec:ising_model}
The Ising model consist of a lattice of spins, which can be though of as atoms
in a grid. In two dimensions, the length of the lattice is given by $L$, and the
The Ising model consists of a lattice of spins, which can be thought of as atoms
in a grid. In two dimensions, the length of a lattice is given by $L$, and the
number of spins within a lattice is given by $N = L^{2}$. When we consider the
entire lattice, the system spin configuration is represented as a matrix $L \times L$
\begin{align*}
@ -15,30 +15,30 @@ entire lattice, the system spin configuration is represented as a matrix $L \tim
s_{L,1} & s_{L,2} & \dots & s_{L,L}
\end{pmatrix} \ .
\end{align*} % $\mathbf{s} = [s_{1}, s_{2}, \dots, s_{N}]$.
The total number of possible spin configurations, also called microstates, is $2^{N}$.
The total number of possible spin configurations, also called microstates, is
$|\mathbf{s}| = 2^{N}$.
A given spin $i$ can take one of two possible discrete values $s_{i} \in \{-1, +1\}$,
where $-1$ represent down and $+1$ spin up. The spins interact with its nearest neighbors,
and in a two-dimensional lattice each spin has up to four nearest neighbors. However,
the model is not restricted to this dimentionality \cite[p. 3]{obermeyer:2020:ising}.
In our experiment we will use periodic boundary conditions, meaning all spins
have exactly four nearest neighbors. To find the analytical expressions necessary
for validating our model implementation, we will assume a $2 \times 2$ lattice.
A given spin $i$ can take one of two possible discrete values $s_{i} \in \{-1, +1\}$.
The spins interact with their nearest neighbors, and in a two-dimensional lattice
each spin has up to four nearest neighbors. In our experiments we will use periodic
boundary conditions, resulting in all spins having exactly four nearest neighbors.
To find the analytical expressions necessary for validating our model implementation,
we assume a $2 \times 2$ lattice.
The Hamiltonian of the Ising model is given by
\begin{equation}
E(\mathbf{s}) = -J \sum_{\langle k l \rangle}^{N} s_{k} s_{l} - B \sum_{k}^{N} s_{k}\ ,
\label{eq:energy_hamiltonian}
\end{equation}
where $\langle k l \rangle$ denote a spin pair. $J$ is the coupling constant, and
$B$ is the external magnetic field. For simplicity, we will consider the Ising model
where $\langle k l \rangle$ denotes a spin pair. $J$ is the coupling constant, and
$B$ is the external magnetic field. For simplicity, we consider the Ising model
where $B = 0$, and find the total system energy given by
\begin{equation}
E(\mathbf{s}) = -J \sum_{\langle k l \rangle}^{N} s_{k} s_{l} \ .
\label{eq:energy_total}
\end{equation}
To avoid counting duplicated, we count the neighboring spins using the pattern
visualized in Figure \ref{fig:tikz_boundary}.
We count the neighboring spins using the pattern visualized in Figure \ref{fig:tikz_boundary},
to avoid counting a given spin pair several times.
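One way to realize such a single-count pattern is to let each spin contribute only its right and down neighbor, with periodic wrap-around; every pair is then counted exactly once for $L \geq 3$ (and twice for $L = 2$, matching the doubled bonds of the periodic $2 \times 2$ case). This is a hedged Python sketch using modulo indexing, not the pre-computed index matrix of the report's C++ implementation:

```python
def total_energy(lattice, J=1.0):
    """Total Ising energy with periodic boundaries.

    Each spin is paired only with its right and down neighbor, so
    every neighbor pair enters the sum exactly once (for L >= 3).
    """
    L = len(lattice)
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = lattice[i][j]
            E -= J * s * lattice[i][(j + 1) % L]  # right neighbor (wraps)
            E -= J * s * lattice[(i + 1) % L][j]  # down neighbor (wraps)
    return E

# A 3x3 all-up lattice has 2 * 3^2 = 18 bonds, so E = -18J.
ground = [[+1] * 3 for _ in range(3)]
assert total_energy(ground) == -18.0
```

For $L = 2$ the wrap makes each bond appear twice, and the same function reproduces the $E = -8J$ ground state of the $2 \times 2$ table.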
\begin{figure}
\centering
\begin{tikzpicture}
@ -89,13 +89,13 @@ in Appendix \ref{sec:energy_special}.
0 & -8J & -4 & 1 \\
\hline
\end{tabular}
\caption{Values of total energy, total magnetization, and degeneracy for the
\caption{Values of the total energy, total magnetization, and degeneracy for the
possible states of a system for a $2 \times 2$ lattice, with periodic boundary
conditions.}
\label{tab:lattice_config}
\end{table}
We calculate the analytical values for a $2 \times 2$ lattice, found in Table \ref{tab:lattice_config}.
However, we scale the total energy and total magnetization by number of spins,
However, we scale the total energy and total magnetization by the number of spins,
to compare these quantities for lattices where $L > 2$. Energy per spin is given by
\begin{equation}
\epsilon (\mathbf{s}) = \frac{E(\mathbf{s})}{N} \ ,
@ -165,10 +165,10 @@ We also need to determine the heat capacity
\label{eq:heat_capacity}
\end{equation}
and the magnetic susceptibility
\begin{align*}
\begin{equation}
\chi = \frac{1}{k_{\text{B}} T} (\mathbb{E}(M^{2}) - [\mathbb{E}(M)]^{2}) \ .
\label{eq:susceptibility}
\end{align*}
\end{equation}
In Appendix \ref{sec:heat_susceptibility} we derive expressions for the heat
capacity and the susceptibility, and find the heat capacity
\begin{align*}
@ -196,7 +196,7 @@ Boltzmann constant we derive the remaining units, which can be found in Table
$[\chi]$ & $1 / J$ & \\
\hline
\end{tabular}
\caption{Values of total energy, total magnetization, and degeneracy for the possible states of a system for a $2 \times 2$ lattice, with periodic boundary conditions.}
\caption{Units for energy, magnetization, temperature, heat capacity, and magnetic susceptibility, derived from the coupling constant $J$ and the Boltzmann constant $k_{B}$.}
\label{tab:units}
\end{table}
@ -204,28 +204,30 @@ Boltzmann constant we derive the remaining units, which can be found in Table
\subsection{Phase transition and critical temperature}\label{subsec:phase_critical}
We consider the Ising model in two dimensions, with no external magnetic
field. At temperatures below the critical temperature $T_{c}$, the Ising model will
magnetize spontaneous. When increasing the temperature of the external field,
the Ising model transition from an ordered to an unordered phase. The spins become
more correlated, and we can measure the discontinous behavior as an increase in
magnetize spontaneously. When increasing the temperature of the system,
the Ising model transitions from an ordered to an unordered phase. The spins become
more correlated, and we can measure the discontinuous behavior as an increase in
correlation length $\xi (T)$ \cite[p. 432]{hj:2015:comp_phys}. At $T_{c}$, the
correlation length is proportional with the lattice size, resulting in the critical
correlation length is proportional to the lattice size, resulting in the critical
temperature scaling relation
\begin{equation}
T_{c}(L) - T_{c}(L = \infty) = aL^{-1} \ ,
\label{eq:critical_infinite}
\end{equation}
where $a$ is a constant. For the Ising model in two dimensions, with an lattice of
infinite size, the critical temperature is
where $a$ is a constant. For the Ising model in two dimensions, with a lattice of
infinite size, the analytical solution of the critical temperature is
\begin{equation}
T_{c}(L = \infty) = \frac{2}{\ln (1 + \sqrt{2})} J/k_{B} \approx 2.269 J/k_{B}
\label{eq:critical_solution}
\end{equation}
This result was found analytically by Lars Onsager in 1944. We can estimate the
critical temperature of an infinite lattice, using finite lattices critical temperature
and linear regression.
This result was found analytically by Lars Onsager in 1944 \cite{onsager:1944}. We can estimate the
critical temperature of an infinite lattice, using the critical temperature of
finite lattices of different sizes and linear regression.
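The extrapolation in Equation \eqref{eq:critical_infinite} amounts to an ordinary least-squares fit of $T_{c}(L)$ against $1/L$, whose intercept estimates $T_{c}(L = \infty)$. The Python sketch below uses hypothetical $T_{c}(L)$ values generated from the scaling relation itself, purely to illustrate the procedure (the report fits measured values, using SciPy's linear regression):

```python
# Hypothetical finite-size estimates T_c(L), generated here from the
# scaling relation T_c(L) = T_c(inf) + a / L with a = 1.0, purely to
# illustrate the extrapolation; real values come from the simulations.
Ls = [20, 40, 60, 80, 100]
tc_inf_true, a = 2.269, 1.0
tc = [tc_inf_true + a / L for L in Ls]

# Ordinary least squares of T_c(L) against x = 1/L: the intercept
# estimates T_c(L = infinity), the slope estimates the constant a.
xs = [1.0 / L for L in Ls]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(tc) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, tc))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

assert abs(intercept - tc_inf_true) < 1e-9
assert abs(slope - a) < 1e-9
```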
\subsection{The Markov chain Monte Carlo method}\label{subsec:mcmc_method}
Markov chains consist of a sequence of samples, where the probability of the next
sample depend on the probability of the current sample. Whereas the Monte Carlo
sample depends on the probability of the current sample. The Monte Carlo
method introduces randomness to the sampling, which allows us to approximate
statistical quantities \cite{tds:2021:mcmc} without using the probability distribution directly. We
generate samples of spin configuration, to approximate $\langle \epsilon \rangle$,
@ -234,13 +236,13 @@ $\langle |m| \rangle$, $C_{V}$, and $\chi$, using the Markov chain Monte Carlo
The Ising model is initialized in a random state, and the Markov chain is evolved
until the model reaches an equilibrium state. However, generating new random states
require ergodicity and detailed balance. A Markov chain is ergoditic when all system
states can be reached at every current state, whereas detaild balance implies no
net flux of probability. To satisfy these criterias we use the Metropolis-Hastings
algorithm, found in Figure \ref{algo:metropolis}, to generate samples of microstates.
requires ergodicity and detailed balance. A Markov chain is ergodic when all system
states can be reached at every current state, whereas detailed balance implies no
net flux of probability. To satisfy these criteria we use the Metropolis-Hastings
algorithm, found in Algorithm \ref{algo:metropolis}, to generate samples of microstates.
One Monte Carlo cycle consist of changing the initial configuration of the lattice,
by randomly flipping a spin. When a spin is flipped, the change in energy is evaluated as
One Monte Carlo cycle consists of $N$ attempted spin flips. When a spin is flipped,
the change in energy is evaluated as
\begin{align*}
\Delta E &= E_{\text{after}} - E_{\text{before}} \ ,
\end{align*}
@ -256,7 +258,6 @@ $\Delta E = \{-16J, -8J, 0, 8J, 16J\}$, we derive these values in Appendix \ref{
We can avoid computing the Boltzmann factor by using a look-up table (LUT).
We use $\Delta E$ as an index into an array, to access the precomputed value of
the exponential function.
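Such a look-up table is straightforward to build: one entry per possible $\Delta E$, filled once per temperature. A hedged Python sketch (the index mapping $(\Delta E / J + 16)/8$ is an assumption chosen for illustration, not necessarily the layout of the C++ code):

```python
import math

def build_lut(beta, J=1.0):
    # Precompute exp(-beta * dE) for the five possible energy changes
    # dE in {-16J, -8J, 0, 8J, 16J} derived in the appendix.
    return [math.exp(-beta * dE) for dE in (-16 * J, -8 * J, 0.0, 8 * J, 16 * J)]

def boltzmann(lut, dE, J=1.0):
    # Map dE to an array position 0..4 (assumed indexing convention).
    return lut[int((dE / J + 16) // 8)]

beta = 1.0 / 2.4          # T = 2.4 J/k_B in the report's units
lut = build_lut(beta)
assert boltzmann(lut, 0.0) == 1.0
```

Inside the Metropolis loop, the acceptance probability is then a single array access instead of a call to the exponential function.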
\begin{figure}[H]
\begin{algorithm}[H]
\caption{Metropolis-Hastings Algorithm}
\label{algo:metropolis}
@ -278,45 +279,45 @@ in an array.
\EndProcedure
\end{algorithmic}
\end{algorithm}
\end{figure}
The Markov process reach an equilibrium after a certain number of Monte Carlo cycles,
where the system state is reflecting the state of a real system. After this burn-in time,
given by number of Monte Carlo cycles, we can start sampling microstates.
The probability distribution of the samples will tend toward the actual probability
distribution of the system.
The Markov process reaches an equilibrium after a certain number of Monte Carlo cycles,
called burn-in time. After the burn-in time, the system state reflects the state of a real system,
and we can start sampling microstates. The probability distribution of the samples
will tend toward the actual probability distribution of the system.
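One Monte Carlo cycle, as described above, can be condensed into a short Python sketch (illustrative only: the report's implementation is in C++, and the energy-change formula here assumes each neighbor pair enters the Hamiltonian once, with modulo indexing for the periodic boundaries):

```python
import math
import random

def metropolis_cycle(lattice, beta, J=1.0, rng=random):
    """One Monte Carlo cycle: N = L^2 attempted single-spin flips."""
    L = len(lattice)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (lattice[(i + 1) % L][j] + lattice[(i - 1) % L][j]
              + lattice[i][(j + 1) % L] + lattice[i][(j - 1) % L])
        dE = 2.0 * J * lattice[i][j] * nb   # energy change if (i, j) flips
        # Accept if the energy decreases, else with probability exp(-beta*dE).
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            lattice[i][j] *= -1

# Usage: evolve an ordered 20x20 lattice for 100 cycles (seeded rng).
rng = random.Random(2023)
lattice = [[+1] * 20 for _ in range(20)]
for _ in range(100):
    metropolis_cycle(lattice, beta=1.0, rng=rng)
assert all(s in (-1, +1) for row in lattice for s in row)
```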
\subsection{Implementation and testing}\label{subsec:implementation_test}
We implemented a test suite, and compared the numerical estimates to the analytical
results from \ref{subsec:statistical_mechanics}. In addition, we set a tolerance to
verify convergence, and a maximum number of Monte Carlo cycles to avoid longer
runtimes during implementation.
results from Section \ref{subsec:statistical_mechanics}. In addition, we set a
tolerance to verify convergence, and a maximum number of Monte Carlo cycles to
avoid potentially having the program run indefinitely.
We used a pattern to access all neighboring spins, where all indices of the neighboring
spins are put in an $L \times 2$ matrix. The indices are accessed using pre-defined
constants, where the first column contains the indices for the neighbors to the left
and up, and the second column those to the right and down. This method avoids the use of if-tests,
and takes advantage of the parallel optimization.
so we can take advantage of the compiler optimization.
We parallelize our code using both a message passing interface (OpenMPI) and
multi-threading (OpenMP). First, we divide the temperatures into smaller ranges,
and each MPI process receives a set of temperatures. Every MPI process spawn a
set of threads, which initialize an Ising model and performs the Metropolis-Hastings
algorithm. We limit the number of times threads are spawned and joined, by using
single parallel regions, reducing parallel overhead. We used Fox \footnote{Technical
specifications for Fox can be found at \url{https://www.uio.no/english/services/it/research/platforms/edu-research/help/fox/system-overview.md}},
a high-performance computing cluster, to run our program.
We parallelized our code using both message passing interface (OpenMPI) and
multi-threading (OpenMP). First, we divided the temperatures into smaller sub-ranges,
and each MPI process received a sub-range of temperatures. Every MPI process initialized a
parallel region with a set of threads; each thread then initialized an Ising model and
performed the Metropolis-Hastings algorithm. We limited the number of times threads
are spawned and joined, by using single parallel regions, reducing parallel overhead.
We used Fox \footnote{Technical specifications for Fox can be found at \url{https://www.uio.no/english/services/it/research/platforms/edu-research/help/fox/system-overview.md}},
a high-performance computing cluster, to run our program for $1$ and $10$ million
Monte Carlo cycles when analyzing phase transitions.
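The division of the temperature range into sub-ranges can be sketched without any MPI machinery; the helper below mimics what each rank would compute from its rank number (the contiguous-block partitioning is an assumption for illustration, not necessarily the scheme of the actual code):

```python
def temperature_subrange(t_min, t_max, n_steps, rank, n_procs):
    """Return the list of temperatures assigned to one MPI rank.

    The full range [t_min, t_max] is sampled at n_steps points and
    dealt out in contiguous blocks, one block per process (assumed
    partitioning scheme, shown for illustration).
    """
    step = (t_max - t_min) / (n_steps - 1)
    temps = [t_min + k * step for k in range(n_steps)]
    per_proc = -(-n_steps // n_procs)          # ceiling division
    return temps[rank * per_proc:(rank + 1) * per_proc]

# 40 temperatures in [2.1, 2.4] split over 10 processes: 4 each.
parts = [temperature_subrange(2.1, 2.4, 40, r, 10) for r in range(10)]
assert sum(len(p) for p in parts) == 40
```

Each rank only needs its own rank number and the global parameters, so no communication is required to set up the work division.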
\subsection{Tools}\label{subsec:tools}
The Ising model and MCMC methods are implemented in C++, and parallelized using
both \verb|OpenMPI| \cite{gabriel:2004:open_mpi} and \verb|OpenMP| \cite{openmp:2018}. We used the Python library
\verb|matplotlib| \cite{hunter:2007:matplotlib} to produce all the plots, and
\verb|seaborn| \cite{waskom:2021:seaborn} to set the theme in the figures. In
addition, we used \verb|Scalasca| \cite{scalasca} and \verb|Score-P| \cite{scorep}
to profile and optimize our implementation.
% Add profiler
both \verb|OpenMPI| \cite{gabriel:2004:open_mpi} and \verb|OpenMP| \cite{openmp:2018}.
We used the Python library \verb|matplotlib| \cite{hunter:2007:matplotlib} to produce
all the plots, and \verb|seaborn| \cite{waskom:2021:seaborn} to set the theme in the
figures. We also used the linear regression and normal distribution methods
from the Python library \verb|SciPy|. To optimize our implementation, we used the
profiling tool \verb|Score-P| \cite{scorep}.
\end{document}

View File

@ -2,180 +2,192 @@
\begin{document}
\section{Results}\label{sec:results}
% 2.1-2-4 divided into 40 steps, which gives us a step size of 0.0075.
% 10 MPI processes
% - 10 threads per process
% = 100 threads total
% Not a lot of downtime for the threads
However, when the temperature is close to the critical point, we observe an increase
in expected energy and a decrease in magnetization, suggesting a higher energy and
a loss of magnetization close to the critical temperature.
% We did not set the seed for the random number generator, which resulted in
% different numerical estimates each time we ran the model. However, all expectation
% values are calculated using the same data. The burn-in time varied each time.
% We see a burn-in time t = 5000-10000 MC cycles. However, this changed between runs.
We decided on a trade-off between burn-in time and parallelization. That is, we
set the burn-in time lower in favor of sampling, to take advantage of the
parallelization and not waste computational resources. The argument to discard
samples generated during the burn-in time is ... Increasing the number of samples
outweighs the ... parallelize using MPI. We generated samples for the temperature
range $T \in [2.1, 2.4]$. Using Fox we generated both 1 million samples and 10
million samples.
It is worth mentioning that the time (number of MC cycles) necessary to get a
good numerical estimate, compared to the analytical result, foreshadows the
burn-in time.
The Markov chain starting point can differ, resulting in different simulations. By
discarding the first samples, the ones generated before the system reaches equilibrium,
we can get an estimate closer to the real solution. Since we want to estimate expectation
values at a given temperature, the samples should represent the system at that
temperature.
Depending on the number of samples used in the numerical estimates, using the samples
generated during burn-in can result in high bias and high variance if the ratio is skewed.
However, if most samples are generated after burn-in, the effect is not as visible.
We cannot remove the randomness by starting around equilibrium: since samples are
generated using several Ising models, we need to sample using the same conditions,
that is, the system state at equilibrium.
\subsection{Burn-in time}\label{subsec:burnin_time}
$\boldsymbol{Draft}$
We start with a lattice where $L = 20$, to study the burn-in time, that is the
number of Monte Carlo cycles necessary for the system to reach an equilibrium.
We consider two different temperatures $T_{1} = 1.0 J/k_{B}$ and $T_{2} = 2.4 J/k_{B}$,
where $T_{2}$ is close to the critical temperature. We can use the correlation
time $\tau \approx L^{d + z}$ to determine the burn-in time, where $d$ is the
dimensionality of the system and $z = 2.1665 \pm 0.0012$ \footnote{This value
was determined by Nightingale and Blöte for the Metropolis algorithm.}.
% Need to include a section of Onsager's analytical result.
We show the numerical estimates for temperature $T_{1}$ of $\langle \epsilon \rangle$
in Figure \ref{fig:burn_in_energy_1_0} and $\langle |m| \rangle$ in Figure
\ref{fig:burn_in_magnetization_1_0}. For temperature $T_{2}$, the numerical estimate
of $\langle \epsilon \rangle$ is shown in Figure \ref{fig:burn_in_energy_2_4} and
$\langle |m| \rangle$ in Figure \ref{fig:burn_in_magnetization_2_4}. The lattice
is initialized in both an ordered and an unordered state. We observe that for
$T_{1}$ there is no change in either expectation value with increasing number of
Monte Carlo cycles, when we start with an ordered state. As for the unordered
lattice, we observe a change for the first 5000 MC cycles, where it stabilizes.
The approximated expected energy is $-2$ and expected magnetization is $1.0$,
which is to be expected for temperature 0f $1.0$. T is below the critical and the
pdf using $T = 1.0$ result in
\begin{align*}
p(s|T=1.0) &= \frac{1}{e^{-\beta \sum E(s)}} e^{-\beta E(s)} \\
&= \frac{1}{e^{-(1/k_{B}) \sum E(s)}} e^{-(1/k_{B}) E(s)} \ .
\end{align*}
We started with a lattice of size $L = 20$ and considered the temperatures $T_{1} = 1.0 J/k_{B}$
and $T_{2} = 2.4 J/k_{B}$, where $T_{2}$ is close to the analytical critical
temperature given by Equation \eqref{eq:critical_solution}.
To determine the burn-in time, we used the numerical estimates of the energy
per spin for $T_{1}$ in Figure \ref{fig:burn_in_energy_1_0}, and for $T_{2}$ in Figure
\ref{fig:burn_in_energy_2_4}. We also considered the estimates of the magnetization
per spin for $T_{1}$ in Figure \ref{fig:burn_in_magnetization_1_0}, and for $T_{2}$
in Figure \ref{fig:burn_in_magnetization_2_4}.
The lattice was initialized in both an ordered and an unordered state, for both
temperatures. For $T_{1}$, we observed no change in the expectation values of the
energy or the magnetization when the lattice was initialized in an ordered state.
For the unordered lattice, we first observed a change in the expectation values,
which stabilized after approximately $5000$ Monte Carlo cycles. The estimated
energy per spin is $\langle \epsilon \rangle = -2$ and the estimated magnetization
per spin is $\langle |m| \rangle = 1.0$, as expected for a temperature well below
the critical temperature. For $T_{2}$, we observed a change in the expectation
values for both the ordered and the unordered lattice.
Compared to $T_{1}$, the expected energy per spin increased to $\langle \epsilon \rangle \approx -1.23$,
while the expected magnetization per spin decreased to $\langle |m| \rangle \approx 0.46$.
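The stabilization point can also be estimated programmatically. The sketch below is our own illustration: the window length, tolerance, and the synthetic relaxation curve are assumptions, not taken from the simulation data. It reports the first cycle after which a sliding-window mean of the energy samples stays within a tolerance of the equilibrium value:

```python
import numpy as np

def estimate_burn_in(samples, window=500, tol=0.01):
    """Rough burn-in estimate: first cycle after which a sliding-window
    mean of the samples stays within `tol` of the equilibrium mean."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(samples, kernel, mode="valid")
    target = samples[samples.size // 2:].mean()  # late half ~ equilibrium
    outside = np.where(np.abs(smoothed - target) >= tol)[0]
    if outside.size == 0:
        return window
    return outside[-1] + 1 + window  # shift by the window length

# Synthetic chain relaxing toward -2, mimicking the unordered start at T = 1.0
rng = np.random.default_rng(0)
t = np.arange(20_000)
eps = -2.0 + 1.5 * np.exp(-t / 1000) + rng.normal(0.0, 0.005, t.size)
print(estimate_burn_in(eps))  # on the order of a few thousand cycles
```

Applied to the actual energy series, this kind of criterion would give a numerical counterpart to reading the stabilization point off the figures.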
% Burn-in figures
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_magnetization_1_0.pdf}
\caption{Magnetization per spin $\langle |m| \rangle$ as a function of time $t$ given by Monte Carlo cycles, for $T = 1.0 J / k_{B}$}
\label{fig:burn_in_magnetization_1_0}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_energy_1_0.pdf}
\caption{Energy per spin $\langle \epsilon \rangle$ as a function of time $t$ given by Monte Carlo cycles, for $T = 1.0 J / k_{B}$}
\label{fig:burn_in_energy_1_0}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_magnetization_2_4.pdf}
\caption{Magnetization per spin $\langle |m| \rangle$ as a function of time $t$ given by Monte Carlo cycles, for $T = 2.4 J / k_{B}$}
\label{fig:burn_in_magnetization_2_4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_energy_2_4.pdf}
\caption{Energy per spin $\langle \epsilon \rangle$ as a function of time $t$ given by Monte Carlo cycles, for $T = 2.4 J / k_{B}$}
\label{fig:burn_in_energy_2_4}
\end{figure}
\subsection{Probability distribution}\label{subsec:probability_distribution}
% Histogram figures
We used the estimated burn-in time of $5000$ Monte Carlo cycles as the starting
time, and generated samples. To visualize the distribution of the energy per spin
$\epsilon$, we used histograms with a bin size of $0.02$. In Figure
\ref{fig:histogram_1_0} we show the distribution for $T_{1}$, where the resulting
expectation value of the energy per spin is $\langle \epsilon \rangle = -1.9969$,
with a low variance of Var$(\epsilon) = 0.0001$. %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/pd_estimate_1_0.pdf}
\caption{Distribution of values of energy per spin, when temperature is $T = 1.0 J / k_{B}$}
\label{fig:histogram_1_0}
\end{figure} %
In Figure \ref{fig:histogram_2_4}, for $T_{2}$, the samples of energy per spin are
centered around the expectation value $\langle \epsilon \rangle = -1.2370$. %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/pd_estimate_2_4.pdf}
\caption{Distribution of values of energy per spin, when temperature is $T = 2.4 J / k_{B}$}
\label{fig:histogram_2_4}
\end{figure} %
However, we observed a higher variance of Var$(\epsilon) = 0.0203$. When the temperature
increased, the system moved from an ordered to an unordered state. This change in
system state, or phase transition, indicates that the temperature is close to a
critical point.
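The histograms can be reproduced with a few lines of NumPy. In this sketch the sample array is synthetic, drawn from a normal distribution matching the mean and variance reported for $T_{2}$; in the actual analysis the energies come from the Metropolis sampler:

```python
import numpy as np

def energy_histogram(eps, bin_width=0.02):
    """Bin energy-per-spin samples with a fixed bin width,
    normalized to a probability density."""
    edges = np.arange(eps.min(), eps.max() + bin_width, bin_width)
    return np.histogram(eps, bins=edges, density=True)

# Synthetic stand-in for the T = 2.4 samples
rng = np.random.default_rng(1)
eps = rng.normal(-1.237, np.sqrt(0.0203), 100_000)
density, edges = energy_histogram(eps)
print(f"mean = {eps.mean():.4f}, variance = {eps.var():.4f}")
```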
\subsection{Phase transition}\label{subsec:phase_transition}
% Phase transition figures
We continued investigating the behavior of the system around the critical temperature.
First, we generated $10$ million samples of spin configurations, per temperature, for lattices of
size $L \in \{20, 40, 60, 80, 100\}$, and temperatures $T \in [2.1, 2.4]$. We divided the
temperature range into $40$ steps, with an equal step size of $0.0075$. The samples
were generated in parallel, where the program allocated $4$ sequential temperatures to $10$
MPI processes. Each process was set to spawn $10$ threads, resulting in a total of
$100$ threads working in parallel. We include the results for $1$ million MC cycles
in Appendix \ref{sec:additional_results}.
To evaluate the performance of the parallelization, we used a profiler. The
assessment output can be found in Appendix \ref{sec:additional_results} in Figure
\ref{fig:scorep_assessment}. The assessment shows a lower score for the MPI load
balance, compared to the OpenMP load balance. The master process gathers all the
data using blocking communication, leaving the other processes waiting and
concentrating more of the work in a single process. The OpenMP load balance score
is very good, suggesting that the threads are not left idle for long periods.
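The division of the temperature grid over the ranks can be sketched as follows. This is our own illustration: the grid below starts at $T = 2.1$ with the stated step size of $0.0075$, which places the last of the $40$ points at $2.3925$; whether the grid instead includes the upper endpoint $2.4$ is not specified in the text:

```python
import numpy as np

# 40 temperature points with step size 0.0075 (assumed grid, see above)
temps = 2.1 + 0.0075 * np.arange(40)

# 4 sequential temperatures per MPI process, 10 processes; each process
# would additionally spawn 10 OpenMP threads for the sampling itself
blocks = temps.reshape(10, 4)
for rank, block in enumerate(blocks):
    print(f"rank {rank}: T = {np.round(block, 4)}")
```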
In Figure \ref{fig:phase_energy_10M}, we observe a sharper increase in
$\langle \epsilon \rangle$ for the larger lattices in the temperature range $T \in [2.25, 2.35]$. %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/energy.pdf}
\caption{Expected energy per spin $\langle \epsilon \rangle$ for temperatures $T \in [2.1, 2.4]$, and $10^7$ MC cycles.}
\label{fig:phase_energy_10M}
\end{figure} %
We observe a decrease in $\langle |m| \rangle$ over the same temperature range in
Figure \ref{fig:phase_magnetization_10M}, suggesting that the system moves from an
ordered, magnetized state to a state of no net magnetization. The system energy
increases while the magnetization is lost close to the critical temperature.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/magnetization.pdf}
\caption{Expected magnetization per spin $\langle |m| \rangle$ for temperatures $T \in [2.1, 2.4]$, and $10^7$ MC cycles.}
\label{fig:phase_magnetization_10M}
\end{figure} %
In Figure \ref{fig:phase_heat_10M}, we observe an increase in the heat capacity in the
temperature range $T \in [2.25, 2.35]$. In addition, we observe a sharper peak in
the heat capacity as the lattice size increases.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/heat_capacity.pdf}
\caption{Heat capacity $C_{V}$ for temperatures $T \in [2.1, 2.4]$, and $10^7$ MC cycles.}
\label{fig:phase_heat_10M}
\end{figure} %
The magnetic susceptibility in Figure \ref{fig:phase_susceptibility_10M} shows a
sharp peak in the same temperature range as the heat capacity. Since the peaks of
both the heat capacity and the magnetic susceptibility become sharper as the
lattice size increases, the peaks move closer to the critical temperature.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/susceptibility.pdf}
\caption{Magnetic susceptibility $\chi$ for temperatures $T \in [2.1, 2.4]$, and $10^7$ MC cycles.}
\label{fig:phase_susceptibility_10M}
\end{figure} %
\subsection{Critical temperature}\label{subsec:critical_temperature}
% Critical temp regression figure
Based on the heat capacity (Figure \ref{fig:phase_heat_10M}) and the susceptibility
(Figure \ref{fig:phase_susceptibility_10M}), we estimated the critical temperatures
for lattices of size $L \in \{20, 40, 60, 80, 100\}$, listed in Table \ref{tab:critical_temperatures}.
% Tc wide 10M = 2.37, 2.325, 2.3025, 2.295, 2.2875
\begin{table}[H]
\centering
\begin{tabular}{cc} % @{\extracolsep{\fill}}
\hline
$L$ & $T_{c}(L)$ \\
\hline
$20$ & $2.37 J / k_{B}$ \\
$40$ & $2.325 J / k_{B}$ \\
$60$ & $2.3025 J / k_{B}$ \\
$80$ & $2.295 J / k_{B}$ \\
$100$ & $2.2875 J / k_{B}$ \\
\hline
\end{tabular}
\caption{Estimated critical temperatures for lattices $L \times L$, where $L$ denotes the lattice size.}
\label{tab:critical_temperatures}
\end{table}
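Reading off $T_{c}(L)$ amounts to locating the peak of the heat capacity (or susceptibility) curve. A minimal sketch, using a synthetic curve peaking at the value reported for $L = 100$ rather than the actual simulation output:

```python
import numpy as np

def critical_temperature(T, CV):
    """Estimate Tc(L) as the temperature at which the heat capacity peaks."""
    return T[np.argmax(CV)]

# Synthetic heat-capacity curve peaking near 2.2875 (the L = 100 estimate)
T = 2.1 + 0.0075 * np.arange(40)
CV = np.exp(-((T - 2.2875) / 0.05) ** 2)
print(critical_temperature(T, CV))
```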
We used the critical temperatures of finite lattices and the scaling relation in
Equation \eqref{eq:critical_infinite}, Section \ref{subsec:phase_critical}, to estimate
the critical temperature of a lattice of infinite size. In Figure \ref{fig:linreg_10M},
we plot the critical temperatures $T_{c}(L)$ as a function of the inverse lattice size
$1/L$. As the lattice size increases toward infinity, $1/L$ approaches zero. %
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/10M/linreg.pdf}
\caption{Linear regression, where $\beta_{0}$ is the intercept approximating $T_{c}(L = \infty)$, and $\beta_{1}$ is the slope.}
\label{fig:linreg_10M}
\end{figure}
We used linear regression to find the intercept $\beta_{0}$, which gives an estimate
of the critical temperature for a lattice of infinite size. The estimated critical
temperature is $T_{c}^{*}(L = \infty) \approx 2.2693 J/k_{B}$. Comparing the estimate
with the analytical solution, the relative error (a dimensionless quantity) is
\begin{equation*}
    \text{Relative error} = \frac{T_{c}^{*} - T_{c}}{T_{c}} \approx 5.05405 \cdot 10^{-5} \ .
\end{equation*}
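The extrapolation can be reproduced from the tabulated values with `scipy.stats.linregress`, the same routine used by the plotting script. The intercept of the fit of $T_{c}(L)$ against $1/L$ recovers $T_{c}(L = \infty) \approx 2.2693$; any difference in the last digits of the relative error compared to the value above comes from rounding the tabulated temperatures:

```python
import numpy as np
from scipy.stats import linregress

# Critical temperatures of the finite lattices (in units of J/k_B)
L = np.array([20, 40, 60, 80, 100])
Tc = np.array([2.37, 2.325, 2.3025, 2.295, 2.2875])

# Fit Tc(L) = Tc(inf) + alpha / L; the intercept at 1/L -> 0 is Tc(inf)
fit = linregress(1 / L, Tc)
Tc_inf = fit.intercept

Tc_exact = 2 / np.log(1 + np.sqrt(2))  # Onsager's analytical result
rel_err = (Tc_inf - Tc_exact) / Tc_exact
print(f"Tc(inf) = {Tc_inf:.4f} J/k_B, relative error = {rel_err:.2e}")
```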
\end{document}


@ -17,8 +17,10 @@ plt.rcParams.update(params)
def plot_1_0():
files = [
"data/hp/burn_in_time/unordered_1_0_1421110368.txt",
"data/hp/burn_in_time/ordered_1_0_611577739.txt",
# "data/hp/burn_in_time/unordered_1_0_1421110368.txt",
# "data/hp/burn_in_time/ordered_1_0_611577739.txt",
"data/hp/burn_in_time/unordered_1_0.txt",
"data/hp/burn_in_time/ordered_1_0.txt",
]
labels = [
"Unordered",
@ -48,13 +50,13 @@ def plot_1_0():
ax1.plot(t, energy, label=f"{label}", color=color)
ax2.plot(t, magnetization, label=f"{label}", color=color)
ax1.set_xlabel("t")
ax1.set_ylabel(r"$\langle \epsilon \rangle$")
ax1.set_xlabel("t (MC cycles)")
ax1.set_ylabel(r"$\langle \epsilon \rangle$ $(J)$")
ax1.legend(title="Initial state", loc="upper right")
fig1.savefig("./latex/images/burn_in_time_energy_1_0.pdf", bbox_inches="tight")
ax2.set_ylabel(r"$\langle |m| \rangle$")
ax2.set_xlabel("t")
ax2.set_ylabel(r"$\langle |m| \rangle$ (unitless)")
ax2.set_xlabel("t (MC cycles)")
ax2.legend(title="Initial state", loc="upper right")
fig2.savefig(
"./latex/images/burn_in_time_magnetization_1_0.pdf", bbox_inches="tight"
@ -63,8 +65,10 @@ def plot_1_0():
def plot_2_4():
files = [
"data/hp/burn_in_time/unordered_2_4_1212892317.txt",
"data/hp/burn_in_time/ordered_2_4_2408603856.txt",
# "data/hp/burn_in_time/unordered_2_4_1212892317.txt",
# "data/hp/burn_in_time/ordered_2_4_2408603856.txt",
"data/hp/burn_in_time/unordered_2_4.txt",
"data/hp/burn_in_time/ordered_2_4.txt",
]
labels = [
"Unordered",
@ -94,13 +98,13 @@ def plot_2_4():
ax1.plot(t, energy, label=f"{label}", color=color)
ax2.plot(t, magnetization, label=f"{label}", color=color)
ax1.set_xlabel("t")
ax1.set_ylabel(r"$\langle \epsilon \rangle$")
ax1.set_xlabel("t (MC cycles)")
ax1.set_ylabel(r"$\langle \epsilon \rangle$ $(J)$")
ax1.legend(title="Initial state", loc="upper right")
fig1.savefig("./latex/images/burn_in_time_energy_2_4.pdf", bbox_inches="tight")
ax2.set_ylabel(r"$\langle |m| \rangle$")
ax2.set_xlabel("t")
ax2.set_ylabel(r"$\langle |m| \rangle$ (unitless)")
ax2.set_xlabel("t (MC cycles)")
ax2.legend(title="Initial state", loc="upper right")
fig2.savefig(
"./latex/images/burn_in_time_magnetization_2_4.pdf", bbox_inches="tight"


@ -40,7 +40,7 @@ def plot(infile, outfile):
ec="white",
lw=0.2,
)
ax1.set_xlabel(r"$\epsilon$")
ax1.set_xlabel(r"$\epsilon$ $(J)$")
ax1.set_ylabel("Density")
mu, sigma = np.mean(eps), np.std(eps)


@ -4,6 +4,20 @@ from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import linregress
import seaborn as sns
sns.set_theme()
params = {
"font.family": "Serif",
"font.serif": "Roman",
"text.usetex": True,
"axes.titlesize": "large",
"axes.labelsize": "large",
"xtick.labelsize": "large",
"ytick.labelsize": "large",
"legend.fontsize": "medium",
}
plt.rcParams.update(params)
def plot_phase_transition_alt(indir, outdir):
@ -65,20 +79,40 @@ def plot_phase_transition_alt(indir, outdir):
x = np.linspace(0, 1 / 20, 1001)
regression = linregress(inv_L, Tc)
f = lambda x: regression[0] * x + regression[1]
stats = (
f"$\\beta_{0}$ = {regression[1]:.4f}\n"
f"$\\beta_{1}$ = {regression[0]:.4f}"
)
bbox = dict(boxstyle="round", pad=0.3, fc="white", ec="white", alpha=0.5)
ax5.text(
0.6, 0.6, stats, bbox=bbox, transform=ax1.transAxes, ha="right", va="center"
)
ax5.scatter(inv_L, Tc)
ax5.plot(x, f(x), label=f"m = {regression[0]}, i = {regression[1]}")
ax5.plot(x, f(x))
figure1.legend()
figure2.legend()
figure3.legend()
figure4.legend()
figure5.legend()
ax1.set_xlabel(r"T $(J/k_{B})$")
ax1.set_ylabel(r"$\langle \epsilon \rangle$ $(J)$")
ax1.legend(title="Lattice size", loc="upper right")
figure1.savefig(Path(outdir, "energy.pdf"))
figure2.savefig(Path(outdir, "magnetization.pdf"))
figure3.savefig(Path(outdir, "heat_capacity.pdf"))
figure4.savefig(Path(outdir, "susceptibility.pdf"))
figure5.savefig(Path(outdir, "linreg.pdf"))
ax2.set_xlabel(r"T $(J/k_{B})$")
ax2.set_ylabel(f"$\\langle |m| \\rangle$ (unitless)")
ax2.legend(title="Lattice size", loc="upper right")
ax3.set_xlabel(r"T $(J/k_{B})$")
ax3.set_ylabel(r"$C_{V}$ $(k_{B})$")
ax3.legend(title="Lattice size", loc="upper right")
ax4.set_xlabel(r"T $(J/k_{B})$")
ax4.set_ylabel(r"$\chi$ $(1/J)$")
ax4.legend(title="Lattice size", loc="upper right")
# ax5.legend()
figure1.savefig(Path(outdir, "energy.pdf"), bbox_inches="tight")
figure2.savefig(Path(outdir, "magnetization.pdf"), bbox_inches="tight")
figure3.savefig(Path(outdir, "heat_capacity.pdf"), bbox_inches="tight")
figure4.savefig(Path(outdir, "susceptibility.pdf"), bbox_inches="tight")
figure5.savefig(Path(outdir, "linreg.pdf"), bbox_inches="tight")
plt.close(figure1)
plt.close(figure2)
@ -99,11 +133,11 @@ def plot_phase_transition(indir, outdir):
"size_100.txt",
]
labels = [
"L = 20",
"L = 40",
"L = 60",
"L = 80",
"L = 100",
"20",
"40",
"60",
"80",
"100",
]
figure1, ax1 = plt.subplots()
@ -151,20 +185,40 @@ def plot_phase_transition(indir, outdir):
x = np.linspace(0, 1 / 20, 1001)
regression = linregress(inv_L, Tc)
f = lambda x: regression[0] * x + regression[1]
stats = (
f"$\\beta_{0}$ = {regression[1]:.4f}\n"
f"$\\beta_{1}$ = {regression[0]:.4f}"
)
bbox = dict(boxstyle="round", pad=0.3, fc="white", ec="white", alpha=0.5)
ax5.text(
0.6, 0.6, stats, bbox=bbox, transform=ax1.transAxes, ha="right", va="center"
)
ax5.scatter(inv_L, Tc)
ax5.plot(x, f(x), label=f"m = {regression[0]}, i = {regression[1]}")
ax5.plot(x, f(x))
figure1.legend()
figure2.legend()
figure3.legend()
figure4.legend()
figure5.legend()
ax1.set_xlabel(r"T $(J/k_{B})$")
ax1.set_ylabel(r"$\langle \epsilon \rangle$ $(J)$")
ax1.legend(title="Lattice size", loc="upper right")
figure1.savefig(Path(outdir, "energy.pdf"))
figure2.savefig(Path(outdir, "magnetization.pdf"))
figure3.savefig(Path(outdir, "heat_capacity.pdf"))
figure4.savefig(Path(outdir, "susceptibility.pdf"))
figure5.savefig(Path(outdir, "linreg.pdf"))
ax2.set_xlabel(r"T $(J/k_{B})$")
ax2.set_ylabel(r"$\langle |m| \rangle$ (unitless)")
ax2.legend(title="Lattice size", loc="upper right")
ax3.set_xlabel(r"T $(J/k_{B})$")
ax3.set_ylabel(r"$C_{V}$ $(k_{B})$")
ax3.legend(title="Lattice size", loc="upper right")
ax4.set_xlabel(r"T $(J/k_{B})$")
ax4.set_ylabel(r"$\chi$ $(1/J)$")
ax4.legend(title="Lattice size", loc="upper right")
# print(Tc)
figure1.savefig(Path(outdir, "energy.pdf"), bbox_inches="tight")
figure2.savefig(Path(outdir, "magnetization.pdf"), bbox_inches="tight")
figure3.savefig(Path(outdir, "heat_capacity.pdf"), bbox_inches="tight")
figure4.savefig(Path(outdir, "susceptibility.pdf"), bbox_inches="tight")
figure5.savefig(Path(outdir, "linreg.pdf"), bbox_inches="tight")
plt.close(figure1)
plt.close(figure2)
@ -174,7 +228,7 @@ def plot_phase_transition(indir, outdir):
if __name__ == "__main__":
plot_phase_transition_alt(
plot_phase_transition(
"data/fox/phase_transition/wide/10M/",
"latex/images/phase_transition/fox/wide/10M/",
)


@ -2,6 +2,20 @@ from os import makedirs
from pathlib import Path
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme()
params = {
"font.family": "Serif",
"font.serif": "Roman",
"text.usetex": True,
"axes.titlesize": "large",
"axes.labelsize": "large",
"xtick.labelsize": "large",
"ytick.labelsize": "large",
"legend.fontsize": "medium",
}
plt.rcParams.update(params)
def plot(indir, outdir):
@ -13,8 +27,12 @@ def plot(indir, outdir):
"burn_in.txt",
]
labels = [
"Without burn-in time",
"With burn-in time",
"Without",
"With",
]
colors = [
"darkred",
"seagreen",
]
figure1, ax1 = plt.subplots()
@ -22,7 +40,7 @@ def plot(indir, outdir):
figure3, ax3 = plt.subplots()
figure4, ax4 = plt.subplots()
for file, label in zip(files, labels):
for file, label, color in zip(files, labels, colors):
t = []
e = []
m = []
@ -39,20 +57,31 @@ def plot(indir, outdir):
CV.append(float(l[3]))
X.append(float(l[4]))
ax1.plot(t, e, label=label)
ax2.plot(t, m, label=label)
ax3.plot(t, CV, label=label)
ax4.plot(t, X, label=label)
ax1.plot(t, e, label=label, color=color)
ax2.plot(t, m, label=label, color=color)
ax3.plot(t, CV, label=label, color=color)
ax4.plot(t, X, label=label, color=color)
figure1.legend()
figure2.legend()
figure3.legend()
figure4.legend()
ax1.set_xlabel(r"T $(J/k_{B})$")
ax1.set_ylabel(r"$\langle \epsilon \rangle$ $(J)$")
ax1.legend(title="Burn-in time", loc="upper right")
figure1.savefig(Path(outdir, "energy.pdf"))
figure2.savefig(Path(outdir, "magnetization.pdf"))
figure3.savefig(Path(outdir, "heat_capacity.pdf"))
figure4.savefig(Path(outdir, "susceptibility.pdf"))
ax2.set_xlabel(r"T $(J/k_{B})$")
ax2.set_ylabel(r"$\langle |m| \rangle$ $(unitless)$")
ax2.legend(title="Burn-in time", loc="upper right")
ax3.set_xlabel(r"T $(J/k_{B})$")
ax3.set_ylabel(r"$C_{V}$ $(k_{B})$")
ax3.legend(title="Burn-in time", loc="upper right")
ax4.set_xlabel(r"T $(J/k_{B})$")
ax4.set_ylabel(r"$\chi$ $(1/J)$")
ax4.legend(title="Burn-in time", loc="upper right")
figure1.savefig(Path(outdir, "energy.pdf"), bbox_inches="tight")
figure2.savefig(Path(outdir, "magnetization.pdf"), bbox_inches="tight")
figure3.savefig(Path(outdir, "heat_capacity.pdf"), bbox_inches="tight")
figure4.savefig(Path(outdir, "susceptibility.pdf"), bbox_inches="tight")
plt.close(figure1)
plt.close(figure2)


@ -51,9 +51,9 @@ def plot_timing(indir, outdir):
ax1.set_xlabel(xlabel)
ax1.set_ylabel("time (seconds)")
figure1.legend()
ax1.legend()
figure1.savefig(Path(outdir, outfile))
figure1.savefig(Path(outdir, outfile), bbox_inches="tight")
plt.close(figure1)

134
review.md Normal file
View File

@ -0,0 +1,134 @@
# PR review
As GitHub does not support adding comments where things haven't changed,
the review will be done here. This is a hassle, but it is the only solution I
have as of now.
## Format
The square brackets are meant to be check boxes, and the numbers are line numbers in a specific file.
## ./latex/ising_model.tex
Nothing
## ./latex/sections/abstract.tex
[x] 12: variation => variance
[x] 14: variation => variance
[x] 16: "We estimate" => "We have estimated" as everything else is in past tense, and the rest of the sentence is in past tense as well.
## ./latex/sections/introduction.tex
[x] 10: "Magnetic materials can be classified" => "Magnetic materials are classified". I changed this, since they are actually classified
[x] 11: consist => consists. "One of the groups" is singular.
[x] 12: Is the saturation optional, or will they become saturated when exposed to a magnetic field?
- janitaws: Valid point, they do become saturated so maybe rephrase
- coryab: Just remove "can", and put a comma after "field" and it should be good
[x] 26: "as well as algorithms ..." => "as well as the algorithms ..."
[x] 28: "we conclude ..." => "we will conclude ..."
## ./latex/sections/methods.tex
[x] 6: consist => consists. "The Ising model" is singular.
[x] 6: though => thought
[x] 7: "the length of the lattice ..." => "the length of a lattice ...". We're not talking about a specific lattice.
[x] 26: remove "will", as we are assuming a 2x2 now and not later in the paper.
[x] 33: denote => denotes. $\langle k l \rangle$ is singular.
[x] 34: Change to present tense, as we are considering it in this section and not later in the paper.
- janitaws: I think I've fixed it, however, tense is apparently not my jam these days XP
- coryab: Looks good! I'll mark it as done.
[x] 92: "Values of total energy ..." => "Values of the total energy ..."
[x] 98: "by number of spins ..." => "by the number of spins ..."
[x] 162: Why are the equations for heat capacity and susceptibility in different environments?
- janitaws: Well, that is an excellent question! That I do not have an answer to, but I changed it so that both are in equation environment.
[x] 208: transition => transitions. "the Ising model" is singular.
- janitaws: The singular thing is really something I should have a look into!
[x] 209: discontinous => discontinuous
[x] 211: with => to
[x] 217: "with an lattice" => "with a lattice"
[x] 223: Should perhaps create a reference to his paper?
- janitaws: I agree, I had already added a reference but had forgotten to cite it in the report. I did not include a page number, don't think it is necessary since its what the paper finding?
[x] 224: "using finite lattices ..." => "using the finite lattices'". We are referring to specific lattices and "lattices" should be possessive. I would rewrite the second part to be "using the critical temperature of finite lattices of different sizes and linear regression."
- janitaws: I agree, it does make more sense
[x] 230: depend => depends. "the next sample" is singular.
[x] 239: require => requires. "generating new random states" is singular.
[x] 240: detaild => detailed
[x] 241: criterias => criteria. the plural of criterion is criteria.
[x] 242: Figure => Algorithm, since we can remove the figure environment enclosing the algorithm environment. More details are specified in the next comment below.
[x] 244: A Monte Carlo cycle consists of attempts of flipping a spin, and not a single attempt to flip a spin.
- janitaws: Not sure how to rephrase the sentence to make it correct, any specific suggestions?
- coryab: It depends. If we want to describe a Monte Carlo cycle, then something like "One Monte Carlo cycle consists of $N$ attempted spin flips." would work. If we want to give a word to a single attempt at a spin flip, then maybe "A single iteration of the Metropolis-Hastings algorithm consists of attempting to flip a spin in the lattice chosen randomly."?
- janitaws: I think the first suggestion makes more sense, since the intention was to explain. I'll change it and mark it done.
[x] 261: Instead of having the algorithm inside a figure, you can use the H specifier in the algorithm environment (which you already do). You can read more about it [here](https://tex.stackexchange.com/questions/231191/algorithm-in-revtex4-1). I have already tested that it works, and it makes it so we can actually refer to it as an algorithm and not a figure.
- janitaws: This might have fallen through the cracks for me since I just copy-pasted from metropolis.tex
- coryab: Yeah, now that I think about it, I copied this directly from the example which had this hacky solution. RevTex is honestly really bad imo.
[x] 284: reach => reaches. "The Markov process" is singular.
[x] 283: I changed the paragraph to include a better introduction of the concept burn-in time, do you think the "flow" is better now?
- coryab: Looks good!
[x] 295: implementation => execution. Implementation is the writing of the code, while the runtime is during execution of the program. I would also say to avoid potentially having the program run endlessly.
[x] 292: Do you have a good way of including "I would also say to avoid potentially having the program run endlessly" in the sentence?
- coryab: Replace the sentence with "In addition, we set a tolerance to verify convergence, and a maximum number of Monte Carlo cycles to avoid potentially having the program run indefinitely."?
[x] 297: Change of tense. We have used present tense for most of this section, so we should continue with that.
- janitaws: Added this into a new point, since the line numbers have shifted now
- coryab: Ok, will mark this as OK.
[x] 294: So for the paragraph starting with "We used a pattern..." should be rewritten to present tense?
- coryab: After reading it a bit better, it makes more sense to have the implementation part in past tense, so I'll mark it as OK.
[x] 300: Lack of if-tests does not hinder parallelization, but it can hinder compiler optimizations.
- janitaws: Suggested change "... avoids the use of if-tests, so we can take advantage of the compiler optimization."
- coryab: Yes, that works great!
[x] 318: used => use. Keep to the present tense for consistency.
[x] 320: used => use.
[x] 320: "the Python ..." => "the Python ...". Double space.
[x] Included the normal distribution method from SciPy as well
[x] 321: used => use.
[x] 321: profiler => profiling.
[x] 321: Remove Scalasca as we only use Score-p now. (Easier to deal with).
## ./latex/sections/results.tex
[x] 15: Should have a line break to indicate new paragraph.
[ ] 15: It might be relevant, but it's a lot more work. we have all the tools necessary to produce the plots, so it depends on how much work we can do.
- janitaws: I suggest we leave it out for now, I'll comment it out in case we want to use it
- coryab: Sounds good! I'll leave this as undone for now until we decide fully on what to do.
[x] 89: onordered => unordered
[x] 97: More accurately, we generated 10M samples per temperature point.
- janitaws: True, so "...$10$ million samples of spin configurations for lattices..." => "...$10$ million samples of spin configurations, per temperature, for lattices..."
- coryab: That sounds good!
[x] 77: Approve change "Histogram $T = 1.0 J / k_{B}$" => "Distribution of values of energy per spin, when temperature is $T = 1.0 J / k_{B}$"
[x] 85: Approve change "Histogram $T = 2.4 J / k_{B}$" => "Distribution of values of energy per spin, when temperature is $T = 1.0 J / k_{B}$"
[x] 100: Should be that the program allocated, so that the reader understands that it does the division automatically.
I agree!
[x] 101: thread => threads
[ ] 105: We used a profiler. And it's not just for ensuring load balance among threads, it's also for each process, and we use it to see that data transfer is minimal.
- janitaws: So instead of "We ran a profiler" it should be "We used a profiler"? Could you suggest a way of formulating the sentence so that the info is correct?
- coryab: Yes to the first point. "To check if the parallelization was optimal, we used a profiler, which found that the program was efficient. The thing that scored slightly less was the MPI load balance, which is most likely because the master process gathered all data using blocking communication, which made the other processes wait, in addition to having one process work more."
- janitaws: If we simply state that the program was efficient we should include some sort of evidence supporting the statement, since the view of efficiency is subjective. Maybe we could include percentage here to support it, the profile gave a number right?
- janitaws: made a change to the entire paragraph, including reference to figure in appendix
[ ] 106: was => were
- janitaws: My previous comment might make this unnecessary?
- coryab: If you use my suggestion, then this can be checked off at the same time.
[x] 113: Included description of variables in caption
[x] 116: deacrese => decrease
[x] 117: "suggesting the system ..." => "suggesting that the system ...".
[x] 118: increase => increases. "The system energy" is singular.
[x] 127: observed => observe.
[x] 128: "capacity the" => "capacity when the"
[x] 135: show => shows.
[x] 136: "Since shape ..." => "Since the shape"
## ./latex/sections/conclusion.tex
[x] 14: Didn't we arrive at 5000?
- janitaws: Haha, ofc we did!
[x] 22: temperature => temperatures
[x] 24: variation => variance
[x] 26: variation => variance
## ./latex/sections/appendices.tex
[x] 95: "The squared energy" => "The squared energy and magnetization functions are then"
- janitaws: I agree, that sounds a lot better! I just ended up writing something somewhat descriptive for lack of words.
[x] 130: he => the
- janitaws: Also, I changed "Eq." => "Equation"
[x] 188: General: included captions for the figures in the section


@@ -32,7 +32,8 @@ void usage(std::string filename)
int main(int argc, char **argv)
{
// Command options
struct option long_options[] = {{"help", 0, 0, 0}, {NULL, 0, NULL, 0}};
struct option long_options[] = {
{"help", 0, 0, 0}, {NULL, 0, NULL, 0}};
int option_index = -1;
int c;
@@ -56,11 +57,17 @@ int main(int argc, char **argv)
usage(argv[0]);
}
}
// Check that the number of arguments is at least 8.
// Check that the number of arguments is at least 6.
if (argc < 6) {
usage(argv[0]);
}
bool ordered = false;
if (argc == 7) {
ordered = true;
}
// Timing variables
double t0, t1;
t0 = omp_get_wtime();
@@ -70,7 +77,13 @@
int L = atoi(argv[2]), cycles = atoi(argv[3]), burn_in_time = atoi(argv[4]);
std::string outfile = argv[5];
if (ordered) {
montecarlo::progression(temp, L, cycles, 1, outfile, burn_in_time);
}
else {
montecarlo::progression(temp, L, cycles, outfile, burn_in_time);
}
t1 = omp_get_wtime();


@@ -23,13 +23,6 @@ void progression(double T, int L, int cycles, const std::string filename,
std::string directory = utils::dirname(filename);
std::ofstream ofile;
// Create random engine using the mersenne twister
std::random_device rd;
uint32_t rd_sample = rd();
std::mt19937 engine(rd_sample);
std::cout << "Seed: " << rd_sample << std::endl;
IsingModel ising(L, T);
// Create path and open file
@@ -61,13 +54,6 @@ void progression(double T, int L, int cycles, int value,
std::string directory = utils::dirname(filename);
std::ofstream ofile;
// Create random engine using the mersenne twister
std::random_device rd;
uint32_t rd_sample = rd();
std::mt19937 engine(rd());
std::cout << "Seed: " << rd_sample << std::endl;
IsingModel ising(L, T, value);
for (size_t i = 0; i < burn_in_time; i++) {