Ising Model

Repo

Documentation

Requirements

Operating systems

  • Linux
    • Tested on Fedora 38
    • Will most likely work on other Linux distributions
  • macOS
    • Will most likely not work, due to the use of GNU-specific getopt features
  • Windows
    • Will most likely not work

Tools

Libraries

Compiling

The commands shown here should be run from the root of this project.

Normal binaries

Compiling regular binaries is as easy as running this command:

    make

The binaries will then be inside the bin directory.
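To speed up compilation, you can let make build targets in parallel (a general make feature, not specific to this project's Makefile):

    make -j"$(nproc)"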

Profiling binaries

If you want to profile the programs (specifically the MPI program), then run this command:

    make profile

The binaries will then be inside the prof directory.
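If the profile target builds with gprof-style instrumentation (an assumption about this Makefile, since the flags are not shown here), a typical workflow is to run the binary once to produce gmon.out and then inspect it:

    # running the instrumented binary writes gmon.out to the current directory
    ./prof/<program-name> <args>

    # print the flat profile and call graph
    gprof ./prof/<program-name> gmon.out | less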

Debugging binaries

If you want to debug the code, then use this command:

    make debug

The binaries will then be inside the debug directory.
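For example, you can then step through a program under gdb (assuming the debug target compiles with debug symbols, as is conventional):

    gdb --args ./debug/main <args>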

Running programs

C++ binaries

To run any of the programs, just use the following command:

    ./<bin|prof|debug>/<program-name> <args>

Every program accepts the -h or --help flag, which prints usage information. Here is an example:

    ./bin/main --help

Python scripts

Install libraries

Before running the scripts, make sure that all libraries are installed. Using pip, you can install all requirements like this:

    pip install -r requirements.txt

This installs all the packages listed in requirements.txt, along with their dependencies.
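If you want to keep these packages separate from your system Python, you can first create a virtual environment (standard Python tooling, not something this repo requires):

    # create and activate an isolated environment, then install into it
    python -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt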

Running scripts

For the Python scripts, run them from the root of the project:

    python python_scripts/<script-name>

If the python command is missing or points to an older Python version, run this instead:

    python3 python_scripts/<script-name>

Batch system

The phase_transition_mpi program comes with companion scripts in the slurm_scripts directory, which let you run it on a batch system using Slurm if you have access to one. The only script that should be executed directly by the user is slurm_scripts/execute.script. You can see how to use this script by doing:

    ./slurm_scripts/execute.script --help

This is the recommended way of running the program. Here is a table of timings for different parameters on the Fox cluster:

    Lattice size   Samples   Processes   Threads   Time (seconds)
    20             1e7       10          10        133.735
    40             1e7       10          10        814.126
    60             1e7       10          10        2575.33

If you have such a system available, clone this repo on that system and compile the MPI program like this:

    make bin/phase_transition_mpi

After compiling, you can schedule it by using the slurm_scripts/execute.script:

    ./slurm_scripts/execute.script <parameters>
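For reference, the kind of job script such a wrapper typically submits looks roughly like the sketch below; the directives, account placeholder, and arguments are illustrative assumptions, not the contents of the repository's actual scripts:

    #!/bin/bash
    #SBATCH --job-name=phase_transition
    #SBATCH --account=<your-account>
    #SBATCH --time=01:00:00
    #SBATCH --ntasks=10
    #SBATCH --cpus-per-task=10

    # one MPI rank per task; each rank may use OpenMP threads internally
    srun ./bin/phase_transition_mpi <args>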

Performance

This section aims to give an idea of how long the phase transition program takes to run, so that you know what to expect if you decide to run it yourself.

CPU

The times listed here were measured on a computer with these specifications:

  • CPU model
    • Intel i7-9850H
  • Threads
    • 12
  • Clock speed
    • 4.6 GHz

Times

Note that all times here are recorded using the OpenMP implementation of the MCMC algorithm.
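Since the implementation uses OpenMP, the thread count can be controlled through the standard OMP_NUM_THREADS environment variable (standard OpenMP behavior, assuming the program does not override it):

    OMP_NUM_THREADS=12 ./bin/<program-name> <args>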

    Lattice size   Points   Samples    Burn-in time   Time (seconds)
    20             20       100000     0              3.20
    20             40       100000     0              6.17
    20             80       100000     0              12.11

    Lattice size   Points   Samples    Burn-in time   Time (seconds)
    20             20       100000     0              3.20
    40             20       100000     0              11.91
    80             20       100000     0              47.88

    Lattice size   Points   Samples    Burn-in time   Time (seconds)
    20             20       100000     0              3.20
    20             20       1000000    0              29.95
    20             20       10000000   0              305.849

    Lattice size   Points   Samples    Burn-in time   Time (seconds)
    20             20       100000     0              3.20
    20             20       100000     5000           4.93
    20             20       100000     10000          6.58

We can see that the runtime grows linearly with the number of points, samples, and burn-in time, while it grows quadratically with the lattice size: going from a 40 to an 80 lattice roughly quadruples the runtime (11.91 s to 47.88 s), which matches the number of spins growing as the square of the lattice size.
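Put together, the timings are roughly consistent with a runtime of the form below; this is a rough reading of the tables, not a fitted model, and burn-in adds a further linear contribution on top:

    time ∝ points × samples × (lattice size)²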

Credits

The Doxygen theme used here is doxygen-awesome-css.