Updated method section.

This commit is contained in:
Janita Willumsen 2023-12-01 13:54:59 +01:00
parent 33298b4f8f
commit 064bf61ffb
2 changed files with 51 additions and 6 deletions


@ -292,17 +292,21 @@ results from \ref{subsec:statistical_mechanics}. In addition, we set a tolerance
to verify convergence, and a maximum number of Monte Carlo cycles to avoid longer
runtimes during implementation.
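A minimal sketch of how such a stopping criterion could look, assuming the running mean of an observable is compared between successive cycles; the names (run_until_converged, do_cycle) are illustrative placeholders, not identifiers from the report's code.
\begin{verbatim}
// Sketch (assumed, not the report's code): run Monte Carlo cycles until the
// running mean of an observable changes by less than a tolerance, or until a
// maximum number of cycles is reached.
#include <cmath>
#include <functional>

long run_until_converged(const std::function<double()>& do_cycle,
                         double tolerance, long max_cycles) {
    double sum = 0.0, prev_mean = 0.0;
    for (long cycle = 1; cycle <= max_cycles; ++cycle) {
        sum += do_cycle();              // one MC cycle, returning e.g. the energy
        double mean = sum / cycle;      // running estimate of the expectation value
        if (cycle > 1 && std::abs(mean - prev_mean) < tolerance)
            return cycle;               // converged within tolerance
        prev_mean = mean;
    }
    return max_cycles;                  // hit the cycle cap without converging
}
\end{verbatim}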
We use a pattern to access all neighboring spins, where the indices of the neighboring
spins are stored in an $L \times 2$ matrix. The indices are accessed using pre-defined
constants, where the first column contains the indices for the neighbors to the left
and up, and the second column those to the right and down. This method avoids the
overhead of if-tests and takes advantage of the parallelization.
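As an illustration, a minimal sketch of how such a lookup table could be built, assuming 0-based indexing and periodic boundary conditions; the names (build_neighbour_table, LEFT_UP, RIGHT_DOWN) are placeholders, not taken from the report's code.
\begin{verbatim}
// Sketch: precomputed neighbour indices for periodic boundaries on a square
// L x L lattice (assumed 0-based indexing). Column 0 holds the index of the
// neighbour to the left/up, column 1 the index to the right/down.
#include <array>
#include <vector>

constexpr int LEFT_UP    = 0;   // illustrative constants
constexpr int RIGHT_DOWN = 1;

std::vector<std::array<int, 2>> build_neighbour_table(int L) {
    std::vector<std::array<int, 2>> nbr(L);
    for (int i = 0; i < L; ++i) {
        nbr[i][LEFT_UP]    = (i - 1 + L) % L;  // wraps around without an if-test
        nbr[i][RIGHT_DOWN] = (i + 1) % L;
    }
    return nbr;
}

// Usage: the left neighbour of spin (i, j) is s(nbr[i][LEFT_UP], j),
// the neighbour below is s(i, nbr[j][RIGHT_DOWN]), and so on.
\end{verbatim}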
We parallelize our code using both a message passing interface (OpenMPI) and
multi-threading (OpenMP). First, we divide the temperatures into smaller ranges,
and each MPI process receives a set of temperatures. Every MPI process spawns a
set of threads, each of which initializes an Ising model and performs the
Metropolis-Hastings algorithm. We limit the number of times threads are spawned and
joined by using single parallel regions, reducing the parallel overhead. We used
Fox\footnote{Technical specifications for Fox can be found at
\url{https://www.uio.no/english/services/it/research/platforms/edu-research/help/fox/system-overview.md}},
a high-performance computing cluster, to run our program.
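A condensed sketch of how this hybrid layout could look, assuming the temperature grid is split into contiguous slices over the MPI ranks and each rank opens one parallel region; the loop body and the IsingModel interface are placeholders, not the report's actual code.
\begin{verbatim}
// Sketch of the hybrid MPI/OpenMP layout (assumed, not the report's code):
// each MPI rank takes a slice of the temperature grid and opens a single
// OpenMP parallel region, so threads are spawned and joined only once.
#include <mpi.h>
#include <omp.h>
#include <algorithm>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Temperature grid; the report samples T in [2.1, 2.4] in 40 steps.
    std::vector<double> T;
    for (int i = 0; i <= 40; ++i) T.push_back(2.1 + 0.0075 * i);

    // Contiguous slice of temperatures for this rank.
    int n = static_cast<int>(T.size());
    int chunk = (n + size - 1) / size;
    int lo = std::min(n, rank * chunk);
    int hi = std::min(n, lo + chunk);

    // One parallel region per rank; each thread handles whole temperatures.
    #pragma omp parallel for schedule(dynamic)
    for (int i = lo; i < hi; ++i) {
        // Placeholder: initialize an Ising model at temperature T[i] and run
        // the Metropolis-Hastings algorithm, e.g.
        //   IsingModel model(L, T[i]);  model.metropolis(n_cycles);
        (void)T[i];
    }

    MPI_Finalize();
    return 0;
}
\end{verbatim}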
\subsection{Tools}\label{subsec:tools}


@ -2,6 +2,47 @@
\begin{document}
\section{Results}\label{sec:results}
% 2.1-2.4 divided into 40 steps, which gives us a step size of 0.0075.
% 10 MPI processes
% - 10 threads per process
% = 100 threads total
% Not a lot of downtime for the threads
However, when the temperature is close to the critical point, we observe an increase
in the expected energy and a decrease in the magnetization, suggesting that the system
gains energy and loses its magnetization close to the critical temperature.
% We did not set the seed for the random number generator, which resulted in
% different numerical estimates each time we ran the model. However, all expectation
% values are calculated using the same data. The burn-in time varied each time.
% We see a burn-in time t = 5000-10000 MC cycles. However, this changed between runs.
We decided on a trade-off between the burn-in time and the parallelization: we set the
burn-in time lower in favor of more sampling, to take advantage of the parallelization
and not waste computational resources. The argument for discarding the samples generated
during the burn-in time is given below; with a sufficiently large number of samples, the
gain from the additional samples outweighs the effect of the shorter burn-in time. We
parallelized using MPI and generated samples for the temperature range $T \in [2.1, 2.4]$.
Using Fox we generated both 1 million samples and 10 million samples.
It is worth mentioning that the time (number of MC cycles) necessary to get a
good numerical estimate, compared to the analytical result, gives an indication of the
burn-in time.
The starting point of the Markov chain can differ between runs, resulting in different
simulations. By discarding the first samples, the ones generated before the system
reaches equilibrium, we can get an estimate closer to the true solution. Since we want
to estimate expectation values at a given temperature, the samples should represent the
system at that temperature.
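As a concrete sketch of this argument (the notation here is assumed, not taken from the report), an expectation value estimated from $N$ Monte Carlo cycles after discarding the first $N_{\mathrm{b}}$ burn-in cycles reads
\begin{equation*}
    \langle E \rangle \approx \frac{1}{N - N_{\mathrm{b}}} \sum_{i = N_{\mathrm{b}} + 1}^{N} E_i ,
\end{equation*}
so only samples $E_i$ drawn after the chain has reached equilibrium enter the estimate.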
Depending on the number of samples used in the numerical estimates, using the samples
generated during burn-in can result in high bias and high variance if the ratio of
burn-in samples to total samples is skewed. However, if most samples are generated
after burn-in, the effect is not as visible.
We cannot remove this randomness by starting the chains around equilibrium; since the
samples are generated using several Ising models, we need to sample them under the same
conditions, that is, with the system state in equilibrium.
\subsection{Burn-in time}\label{subsec:burnin_time}
$\boldsymbol{Draft}$
We start with a lattice where $L = 20$, to study the burn-in time, that is the