\documentclass[../ising_model.tex]{subfiles}
\begin{document}
\section{Results}\label{sec:results}
We continue with a lattice of size $L = 20$ to study the burn-in time, that is, the number of Monte Carlo cycles necessary for the system to reach equilibrium. We consider two different temperatures, $T_{1} = 1.0\,J/k_{B}$ and $T_{2} = 2.4\,J/k_{B}$, where $T_{2}$ is close to the critical temperature. We can use the correlation time $\tau \approx L^{d + z}$ to estimate this time scale, where $d$ is the dimensionality of the system and $z = 2.1665 \pm 0.0012$.\footnote{This value was determined by Nightingale and Blöte for the Metropolis algorithm.}
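Taking this relation at face value (a rough order-of-magnitude illustration, interpreting $\tau$ as a number of single spin-flip attempts, not a measured result), $L = 20$ gives $\tau \approx 20^{2 + 2.1665} \approx 2.6 \times 10^{5}$ attempts, i.e.\ on the order of $10^{3}$ MC cycles of $L^{2} = 400$ attempts each.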
% Need to include a section of Onsager's analytical result.

We show the numerical estimates for temperature $T_{1}$ of $\langle \epsilon \rangle$ in Figure \ref{fig:burn_in_energy_1_0} and $\langle |m| \rangle$ in Figure \ref{fig:burn_in_magnetization_1_0}. For temperature $T_{2}$, the numerical estimate of $\langle \epsilon \rangle$ is shown in Figure \ref{fig:burn_in_energy_2_4} and $\langle |m| \rangle$ in Figure \ref{fig:burn_in_magnetization_2_4}. The lattice is initialized in both an ordered and an unordered state. We observe that for $T_{1}$ there is no change in either expectation value with an increasing number of Monte Carlo cycles when we start from an ordered state. For the unordered lattice, we observe a change during the first 5000 MC cycles, after which the estimates stabilize. The estimated expected energy is $-2$ and the expected magnetization is $1.0$, which is to be expected for a temperature of $1.0\,J/k_{B}$: the temperature is below the critical temperature, and the probability distribution over microstates $s$ is
\begin{align*}
p(s \mid T) &= \frac{e^{-\beta E(s)}}{Z},
& Z &= \sum_{s'} e^{-\beta E(s')},
& \beta &= \frac{1}{k_{B} T},
\end{align*}
which for $T = 1.0\,J/k_{B}$ (i.e.\ $\beta = 1/J$) is sharply concentrated on the lowest-energy, fully aligned configurations, for which $\epsilon = -2$ and $|m| = 1$.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_magnetization_1_0.pdf}
\caption{$\langle |m| \rangle$ as a function of time, for $T = 1.0\,J/k_{B}$.}
\label{fig:burn_in_magnetization_1_0}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_energy_1_0.pdf}
\caption{$\langle \epsilon \rangle$ as a function of time, for $T = 1.0\,J/k_{B}$.}
\label{fig:burn_in_energy_1_0}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_magnetization_2_4.pdf}
\caption{$\langle |m| \rangle$ as a function of time, for $T = 2.4\,J/k_{B}$.}
\label{fig:burn_in_magnetization_2_4}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/burn_in_time_energy_2_4.pdf}
\caption{$\langle \epsilon \rangle$ as a function of time, for $T = 2.4\,J/k_{B}$.}
\label{fig:burn_in_energy_2_4}
\end{figure}

However, when the temperature is close to the critical point, we observe an increase in the expected energy and a decrease in the magnetization, suggesting a higher energy and a loss of magnetization close to the critical temperature.

% We did not set the seed for the random number generator, which resulted in different numerical estimates each time we ran the model. However, all expectation values are calculated using the same data. The burn-in time varied each time. We see a burn-in time t = 5000-10000 MC cycles. However, this changed between runs.

We decided on a trade-off between burn-in time and parallelization: we set the burn-in time somewhat low in favor of sampling, to take advantage of the parallelization and not waste computational resources. The argument for discarding the samples generated during the burn-in time is that they are not representative of the system at equilibrium; the larger number of samples gained from a shorter burn-in outweighs the small bias those early samples introduce.

It is worth mentioning that the time (number of MC cycles) necessary to get a good numerical estimate, compared to the analytical result, already foreshadows the burn-in time.

The starting point of the Markov chain can differ, resulting in different simulations. By discarding the first samples, those generated before the system reaches equilibrium, we get an estimate closer to the true solution. Since we want to estimate expectation values at a given temperature, the samples should represent the system at that temperature.

Depending on the number of samples used in the numerical estimates, including the samples generated during burn-in can result in high bias and high variance if the ratio of burn-in samples to equilibrium samples is skewed. However, if most samples are generated after burn-in, the effect is not as visible. We cannot remove this randomness simply by starting around equilibrium: since samples are generated using several Ising models, we need to sample all of them under the same condition, namely that the system has reached equilibrium.

We use the estimated burn-in time to set the starting time for sampling, then generate samples to plot in a histogram, shown for $T_{1}$ in Figure \ref{fig:histogram_1_0} and for $T_{2}$ in Figure \ref{fig:histogram_2_4}. For $T_{1}$ we see that most samples take the expected value $-2$, giving a distribution with low variance.
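A minimal sketch of this sampling step is given below, where \texttt{Model}, \texttt{cycle()} and \texttt{epsilon()} are hypothetical names standing in for our actual implementation:
\begin{verbatim}
// Sketch: discard the burn-in cycles, then histogram the energy per spin.
// 'Model' is any type providing cycle() (one MC cycle) and epsilon().
#include <cmath>
#include <map>

template <class Model>
std::map<double, long> sample_histogram(Model& model, long n_burn,
                                        long n_samples, double bin = 0.01) {
    std::map<double, long> hist;                    // bin centre -> count
    for (long i = 0; i < n_burn; ++i) model.cycle(); // burn-in, not recorded
    for (long i = 0; i < n_samples; ++i) {
        model.cycle();
        double eps = model.epsilon();               // energy per spin
        hist[std::round(eps / bin) * bin]++;        // round to nearest bin centre
    }
    return hist;
}
\end{verbatim}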
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/pd_estimate_1_0.pdf}
\caption{Histogram of the sampled energy per spin $\epsilon$ for $T = 1.0\,J/k_{B}$.}
\label{fig:histogram_1_0}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/pd_estimate_2_4.pdf}
\caption{Histogram of the sampled energy per spin $\epsilon$ for $T = 2.4\,J/k_{B}$.}
\label{fig:histogram_2_4}
\end{figure}

For $T_{2}$ the samples are distributed across a wider range of energy values; the expected value is at approximately $-1.2$, corresponding to the equilibrium value we observed in Figure \ref{fig:burn_in_energy_2_4}. In addition, the distribution has a higher variance for temperatures close to the critical temperature. We also observe that the shape of the sample distribution closely resembles a normal distribution.

For an $L \times L$ lattice with $N = L^{2}$ spins and $M = 2^{N}$ microstates, the variance scales as $\mathrm{Var}(L) = \frac{1}{\sqrt{2^{N}}}$.

The spin correlation function is the covariance between pairs of spins, $C(k) = \langle s_{i} s_{i+k} \rangle - \langle s_{i} \rangle \langle s_{i+k} \rangle$, measuring the degree of correlation between spins separated by a distance $k$.

We started the parallel implementation using OpenMP, limiting the number of threads to ensure the program did not hog the entire CPU. We made sure the workload was divided equally between workers (threads) to keep the load balanced. In addition, we created a look-up table for the possible values of the energy change, to avoid repeated calculations and the overhead of branching. We limit the number of times threads are spawned and joined by using a single parallel region, reducing parallel overhead.
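A minimal sketch of such a look-up table, assuming $J = 1$ and coupling to the four nearest neighbours so that the possible energy changes are $\Delta E \in \{-8, -4, 0, 4, 8\}$, could look like this:
\begin{verbatim}
// Sketch: precompute the Metropolis acceptance ratios exp(-beta * dE)
// for the five possible energy changes dE in {-8, -4, 0, 4, 8} (J = 1).
#include <array>
#include <cmath>

std::array<double, 17> make_acceptance_table(double beta) {
    std::array<double, 17> table{};       // index by dE + 8; only the even
    for (int dE = -8; dE <= 8; dE += 4)   // entries are ever used
        table[dE + 8] = std::exp(-beta * dE);
    return table;
}

// Usage inside the Metropolis step (r uniform in [0,1)):
//     if (r < table[dE + 8]) { /* accept the spin flip */ }
\end{verbatim}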

To improve the efficiency we divided the range of temperatures between MPI processes, parallelizing the simulation further. Each MPI process then spawns a number of threads, each of which initializes its own Ising model, runs through the burn-in time, and generates samples.
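The overall structure is sketched below; the number of temperatures, the round-robin split, and the \texttt{IsingModel} interface in the comments are assumptions standing in for our actual code, not the implementation itself:
\begin{verbatim}
// Sketch: MPI ranks share the temperature range, OpenMP threads run
// independent chains for each temperature assigned to the rank.
#include <mpi.h>
#include <omp.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const double T_min = 2.1, T_max = 2.4;
    const int n_temps = 8;                      // temperatures per run (assumed)
    std::vector<double> temps;
    for (int i = rank; i < n_temps; i += size)  // round-robin split over ranks
        temps.push_back(T_min + i * (T_max - T_min) / (n_temps - 1));

    for (double T : temps) {
        (void)T;  // T would be passed to the (hypothetical) model below
        #pragma omp parallel
        {
            // Each thread owns its own lattice and RNG (hypothetical class):
            // IsingModel model(L, T, seed + omp_get_thread_num());
            // model.burn_in(n_burn);
            // model.sample(n_cycles);
        }
        // reduce per-thread sums into expectation values for this T here
    }

    MPI_Finalize();
    return 0;
}
\end{verbatim}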

We used Fox\footnote{Technical specifications for Fox can be found at \url{https://www.uio.no/english/services/it/research/platforms/edu-research/help/fox/system-overview.md}}, a high-performance computing cluster, to run the MPI-parallelized simulation. Using Fox, we generated both 1 million and 10 million samples for the temperature range $T \in [2.1, 2.4]$.

The behavior of physical quantities such as the magnetization and the heat capacity near the critical temperature can be characterized using power laws (lecture notes, p.~431).
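As a reminder of the standard forms (exact results for the two-dimensional Ising model, not something we derive here), with critical exponents $\beta = 1/8$ and $\gamma = 7/4$ (here $\beta$ denotes the exponent, not the inverse temperature) and a logarithmically divergent heat capacity at $T_{c}$,
\begin{align*}
\langle |m| \rangle &\propto (T_{c} - T)^{\beta} \quad \text{for } T < T_{c}, &
\chi &\propto |T - T_{c}|^{-\gamma}.
\end{align*}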

We observe an ordered phase at low temperatures, meaning we have ferromagnetism. The Metropolis algorithm struggles near the critical temperature, where the correlation time $\tau \approx L^{d+z}$ discussed above grows rapidly with the lattice size (critical slowing down).

The energy is given by the first derivative of the thermodynamic potential, here the Helmholtz free energy
\begin{align*}
F &= \langle E \rangle - TS = -k_{B} T \ln Z, & \ln Z &= -\frac{F}{k_{B} T} = -\beta F,
\end{align*}
while the specific heat is given by the second derivative.
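Explicitly (standard relations, stated here for reference),
\begin{align*}
\langle E \rangle &= -\frac{\partial \ln Z}{\partial \beta} = \frac{\partial (\beta F)}{\partial \beta}, &
C_{V} &= \frac{\partial \langle E \rangle}{\partial T} = \frac{1}{k_{B} T^{2}} \frac{\partial^{2} \ln Z}{\partial \beta^{2}}.
\end{align*}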

Timing the serial, OpenMP, and MPI+OpenMP versions of the code, we observe the speed-up reported in Table \ref{tab:timing}. The speed-up is found using strong scaling, where the runtime behaves as
\begin{align*}
t_{\text{total}} &= t_{\text{burn-in}} + \frac{t_{\text{parallel}}}{N_{\text{workers}}}.
\end{align*}

We are not dividing the work of simulating a single Ising model between threads, as there are dependencies between the steps. We can use Amdahl's law \cite[p. 124]{hager:2011:hpc} to calculate the application speed-up, which simplifies to
\begin{align*}
S_{f} &= \frac{s + p}{s + p/N}, &
\text{speed-up} &= \frac{\text{serial runtime}}{\text{parallel runtime}},
\end{align*}
where $s$ and $p$ are the serial and parallelizable fractions of the runtime and $N$ is the number of workers.
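For example, if the serial (burn-in) part accounted for roughly $10\%$ of the runtime (an assumed figure, for illustration only), eight workers would give $S_{f} \approx 1/(0.1 + 0.9/8) \approx 4.7$.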

When the speed-up was satisfactory, we investigated the phase transition for lattices of sizes $L = 20, 40, 60, 80, 100$. The results for $\langle \epsilon \rangle$ can be found in Figure \ref{fig:phase_energy}, $\langle |m| \rangle$ in Figure \ref{fig:phase_magnetization}, $C_{V}$ in Figure \ref{fig:phase_heat}, and $\chi$ in Figure \ref{fig:phase_susceptibility}.

% Add observation around critical temp
\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/1M/energy.pdf}
\caption{$\langle \epsilon \rangle$ for $T \in [2.1, 2.4]$, $10^{6}$ MC cycles.}
\label{fig:phase_energy}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/1M/magnetization.pdf}
\caption{$\langle |m| \rangle$ for $T \in [2.1, 2.4]$, $10^{6}$ MC cycles.}
\label{fig:phase_magnetization}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/1M/heat_capacity.pdf}
\caption{$C_{V}$ for $T \in [2.1, 2.4]$, $10^{6}$ MC cycles.}
\label{fig:phase_heat}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/1M/susceptibility.pdf}
\caption{$\chi$ for $T \in [2.1, 2.4]$, $10^{6}$ MC cycles.}
\label{fig:phase_susceptibility}
\end{figure}

We include results using 10 million MC cycles in Appendix \ref{sec:extra}.

We use the critical temperatures found when studying the phase transition, together with the scaling relation
\begin{equation}
T_{c}(L) - T_{c}(L = \infty) = \alpha L^{-1}
\end{equation}
to estimate the critical temperature for a lattice of infinite size. We also compare the estimate with the analytical solution
\begin{equation}
T_{c}(L = \infty) = \frac{2}{\ln (1 + \sqrt{2})} J/k_{B} \approx 2.269 J/k_{B}
\end{equation}
using linear regression. In Figure \ref{fig:linreg} we show the critical temperatures as a function of the inverse lattice size. As the lattice size goes toward infinity, $1/L \to 0$, so the intercept of the regression line equals the estimated critical temperature $T_{c}(L = \infty)$.
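A minimal sketch of this fit is shown below; the numbers are placeholders for illustration only, not our measured estimates:
\begin{verbatim}
// Sketch: ordinary least squares fit of T_c(L) against x = 1/L.
// The intercept estimates T_c(L = infinity).
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> L  = {20, 40, 60, 80, 100};
    std::vector<double> Tc = {2.33, 2.31, 2.30, 2.29, 2.285}; // placeholders
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const int n = static_cast<int>(L.size());
    for (int i = 0; i < n; ++i) {
        double x = 1.0 / L[i];
        sx += x; sy += Tc[i]; sxx += x * x; sxy += x * Tc[i];
    }
    double slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;   // T_c(L = infinity) estimate
    std::printf("T_c(inf) ~ %.4f, slope ~ %.4f\n", intercept, slope);
    return 0;
}
\end{verbatim}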

\begin{figure}
\centering
\includegraphics[width=\linewidth]{../images/phase_transition/fox/wide/1M/linreg.pdf}
\caption{Linear regression, where $\beta_{0}$ is the intercept $T_{c}(L = \infty)$ and $\beta_{1}$ is the slope.}
\label{fig:linreg}
\end{figure}

\subsection{Draft}

% Test that the numerical stuff gets close to the analytical
% - validate implementation a given number of times, find average number of cycles

% Initial state ordered vs random
% - start ordered both at up=1 and random

% burn-in time
% - burn-in 5000 vs 10000 mc cycles, only mention how a longer burn-in time can in some cases result in better cv and x

% probability using histogram of samples
% - Comparing the histogram of $T_1 = 1.0$ and $T_2 = 2.4$. For $T_1 = 1.0$ we see a lower acceptance of flips, resulting in a low variance. The histogram looks more like an exponential function

% parallelization
% - temp mpi
% - each mpi process spawn thread that make own ising model
% - each ising model goes through burn in time (no benefit)
% - sampling done in parallel
% - mcmc sampling openmp

% phase transition
% critical temperature
\end{document}