Finish problem 5 and 6 with plot

This commit is contained in:
Janita Willumsen 2023-09-25 13:14:33 +02:00
parent b4ab7e9f53
commit 7a6f93f9ba
2 changed files with 41 additions and 2 deletions

\section*{Problem 5}
\subsection*{a)}
We used Jacobi's rotation method to solve $\boldsymbol{A} \vec{v} = \lambda \vec{v}$ for $\boldsymbol{A}_{(N \times N)}$ with $N \in [5, 100]$,
increasing the matrix size by $3$ rows and columns for every new matrix generated. The number of similarity transformations performed for a tridiagonal matrix
of size $N$ is presented in Figure \ref{fig:transform}. We chose to also run the program on dense matrices of the same sizes as the tridiagonal matrices, to compare the scaling data.
What we see is that the number of similarity transformations necessary to solve the system is proportional to the matrix size.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{images/transform.pdf}
\caption{Similarity transformations performed as a function of matrix size $N$; the data are presented on a logarithmic scale.}
\label{fig:transform}
\end{figure}
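A minimal sketch of a Jacobi rotation eigensolver with a rotation counter, in Python/NumPy (our actual implementation is not reproduced here; the tolerance \texttt{eps = 1e-8} and the explicit matrix-product form of the similarity transform are illustrative choices, not the report's code):

```python
import numpy as np

def max_offdiag(A):
    # Index (k, l) and magnitude of the largest off-diagonal element
    # above the main diagonal (sufficient, since A is symmetric).
    n = A.shape[0]
    k, l, maxval = 0, 1, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(A[i, j]) > maxval:
                maxval, k, l = abs(A[i, j]), i, j
    return k, l, maxval

def jacobi_eigensolver(A, eps=1e-8, max_iter=10**6):
    # Returns (eigenvalues, eigenvectors, number of rotations performed).
    A = A.astype(float).copy()
    n = A.shape[0]
    R = np.eye(n)  # accumulated rotations -> columns become eigenvectors
    rotations = 0
    k, l, maxval = max_offdiag(A)
    while maxval > eps and rotations < max_iter:
        # Rotation angle chosen so that the transformed A[k, l] becomes zero.
        tau = (A[l, l] - A[k, k]) / (2 * A[k, l])
        if tau >= 0:
            t = 1.0 / (tau + np.sqrt(1 + tau**2))
        else:
            t = -1.0 / (-tau + np.sqrt(1 + tau**2))
        c = 1.0 / np.sqrt(1 + t**2)
        s = c * t
        # Similarity transformation S^T A S, written as full products for clarity.
        S = np.eye(n)
        S[k, k] = S[l, l] = c
        S[k, l], S[l, k] = s, -s
        A = S.T @ A @ S
        R = R @ S
        rotations += 1
        k, l, maxval = max_offdiag(A)
    return np.diag(A), R, rotations
```

Counting \texttt{rotations} for tridiagonal and dense matrices of increasing $N$ reproduces the kind of scaling data shown in the figure above.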
\subsection*{b)}
For both the tridiagonal and dense matrices we only check the off-diagonal elements above the main diagonal, since the matrices are symmetric.
The maximal value is found at index $(k,l)$, and for every rotation of the matrix we update the remaining elements along row $k$ and row $l$. This can increase the
value of off-diagonal elements that were previously close to zero, and extra rotations then have to be performed to eliminate them. This suggests that the
number of similarity transformations performed on a matrix does not depend on its initial number of non-zero elements, making Jacobi's rotation algorithm equally
computationally expensive for dense and tridiagonal matrices of size $N \times N$.
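The fill-in effect can be demonstrated with a single rotation on a tridiagonal matrix: the targeted element is zeroed, but a previously zero element along the updated rows becomes non-zero. A standalone sketch (the specific $6 \times 6$ test matrix and rotation plane are chosen for illustration):

```python
import numpy as np

# Tridiagonal test matrix: 2 on the diagonal, -1 on the off-diagonals.
n = 6
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)

# Rotate in the plane of one off-diagonal element, here (k, l) = (0, 1).
k, l = 0, 1
tau = (A[l, l] - A[k, k]) / (2 * A[k, l])
t = 1.0 / (tau + np.sqrt(1 + tau**2)) if tau >= 0 else -1.0 / (-tau + np.sqrt(1 + tau**2))
c = 1.0 / np.sqrt(1 + t**2)
s = c * t
S = np.eye(n)
S[k, k] = S[l, l] = c
S[k, l], S[l, k] = s, -s
B = S.T @ A @ S

# The rotation zeroes B[0, 1] but fills in B[0, 2], which was zero in A:
# future sweeps must therefore revisit elements outside the original band.
print(np.count_nonzero(np.abs(np.triu(A, 1)) > 1e-12),
      np.count_nonzero(np.abs(np.triu(B, 1)) > 1e-12))
```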

\section*{Problem 6}
\subsection*{a)}
The plot in Figure \ref{fig:eigenvector_6} shows the discretization of $\hat{x}$ with $n=6$ steps.
The numerical eigenvectors overlap completely with the corresponding analytical eigenvectors, suggesting that the implementation of the algorithm is correct.
We have included the boundary points for each vector to show a complete solution.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{images/eigenvector_6.pdf}
\caption{The plot shows the elements of the eigenvectors $\vec{v}_{1}, \vec{v}_{2}, \vec{v}_{3}$, corresponding to the three lowest eigenvalues of the matrix $\boldsymbol{A}_{(6 \times 6)}$, against the position $\hat{x}$. The analytical eigenvectors $\vec{v}^{(1)}, \vec{v}^{(2)}, \vec{v}^{(3)}$ are also included in the plot.}
\label{fig:eigenvector_6}
\end{figure}
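The overlap with the analytical solution can also be verified numerically. A sketch below uses NumPy's \texttt{eigh} as a stand-in for our Jacobi solver and assumes the standard tridiagonal discretization with $h = 1/n$, diagonal $d = 2/h^2$ and off-diagonal $a = -1/h^2$ (these matrix elements are an assumption, not quoted from the report):

```python
import numpy as np

n = 6                # number of steps; the matrix acts on the N = n - 1 interior points
N = n - 1
h = 1.0 / n
a, d = -1.0 / h**2, 2.0 / h**2
A = np.diag(d * np.ones(N)) + np.diag(a * np.ones(N - 1), 1) + np.diag(a * np.ones(N - 1), -1)

# Closed-form eigenpairs of a symmetric tridiagonal Toeplitz matrix:
# lambda_i = d + 2a cos(i pi / n),  v_j^{(i)} = sin(j i pi / n).
i = np.arange(1, N + 1)
lam_analytic = d + 2 * a * np.cos(i * np.pi / n)
j = np.arange(1, N + 1)
V_analytic = np.sin(np.outer(j, i) * np.pi / n)    # column i-1 is v^{(i)}
V_analytic /= np.linalg.norm(V_analytic, axis=0)

lam_num, V_num = np.linalg.eigh(A)                 # ascending eigenvalues
print(np.allclose(np.sort(lam_analytic), lam_num))
```

The same check with $n = 100$ confirms the overlap for the finer discretization discussed below.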
\subsection*{b)}
For the discretization with $n=100$ steps, the solution is visually close to a continuous curve, with a complete overlap of the analytical eigenvectors, as presented in Figure \ref{fig:eigenvector_100}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{images/eigenvector_100.pdf}
\caption{The plot shows the elements of the eigenvectors $\vec{v}_{1}, \vec{v}_{2}, \vec{v}_{3}$, corresponding to the three lowest eigenvalues of the matrix $\boldsymbol{A}_{(100 \times 100)}$, against the position $\hat{x}$. The analytical eigenvectors $\vec{v}^{(1)}, \vec{v}^{(2)}, \vec{v}^{(3)}$ are also included in the plot.}
\label{fig:eigenvector_100}
\end{figure}