# Introduction to Computational Quantum Mechanics

Data structures and numerical methods for computational quantum mechanics.
### 1. The Computational Framework
Quantum mechanics becomes a computational discipline when we recognize that the abstract Hilbert space and operators acting on it can be represented by finite-dimensional vector spaces and matrices. The transition from theory to computation requires discretization and truncation.
#### 1.1 From Hilbert Space to Linear Algebra
The fundamental correspondence is:

- Quantum state $|\psi\rangle$ → complex vector $\mathbf{v} \in \mathbb{C}^N$
- Observable $\hat{A}$ → Hermitian matrix $\mathbf{A} \in \mathbb{C}^{N \times N}$
- Time evolution $e^{-i\hat{H}t/\hbar}$ → unitary matrix $\mathbf{U} \in \mathbb{C}^{N \times N}$

The dimension $N$ depends on how we discretize the problem. The key computational question is: how large must $N$ be for acceptable accuracy?
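A minimal sketch of this correspondence in Julia (the operator and state values are arbitrary illustrations):

```julia
using LinearAlgebra

# Quantum state → complex vector (a 2-level system; values illustrative)
v = normalize(ComplexF64[1, 1im])

# Observable → Hermitian matrix (here the Pauli-x matrix)
A = ComplexF64[0 1; 1 0]

# Time evolution → unitary matrix U = exp(-iHt/ħ), here with H = A, ħ = 1, t = 0.1
U = exp(-0.1im * A)
```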
### 2. State Representation

#### 2.1 Discrete Basis States
For a system with discrete quantum numbers (e.g., spin, angular momentum, particle in a box), states are naturally represented as vectors. Consider a spin-1/2 particle with basis states

$$|\uparrow\rangle \mapsto \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |\downarrow\rangle \mapsto \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

A general state becomes:

$$|\psi\rangle = \alpha|\uparrow\rangle + \beta|\downarrow\rangle \mapsto \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad |\alpha|^2 + |\beta|^2 = 1$$

**Data structure:** A one-dimensional array of complex numbers with length $N$.
```julia
# Julia example: a normalized two-component state
α, β = 0.6 + 0im, 0.8im
psi = ComplexF64[α, β]
```

```rust
// Rust example using ndarray
use ndarray::{array, Array1};
use num_complex::Complex64;

let (alpha, beta) = (Complex64::new(0.6, 0.0), Complex64::new(0.0, 0.8));
let psi: Array1<Complex64> = array![alpha, beta];
```
#### 2.2 Continuous Basis States: Grid Representation
For continuous systems (e.g., particle in a potential), we discretize position space on a grid:

$$x_j = x_{\min} + j\,\Delta x, \qquad j = 0, 1, \ldots, N-1$$

where $\Delta x = (x_{\max} - x_{\min})/(N-1)$. The wavefunction is sampled:

$$\mathbf{v}_j = \psi(x_j)$$

Normalization condition:

$$\sum_j |\psi(x_j)|^2 \, \Delta x = 1$$

**Data structure:** Same as the discrete case, but indices represent spatial grid points.
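A short sketch of the grid representation (grid size and wavefunction are illustrative):

```julia
# Gaussian wavepacket sampled on a uniform grid
N, xmin, xmax = 512, -10.0, 10.0
Δx = (xmax - xmin) / (N - 1)
x = range(xmin, xmax; length=N)

ψ = @. exp(-x^2 / 2) / π^(1/4)      # analytic ground state of a harmonic well
ψ ./= sqrt(sum(abs2, ψ) * Δx)       # enforce the discrete norm Σ|ψⱼ|²Δx = 1
```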
#### 2.3 Multi-Particle States: Tensor Products
For $p$ distinguishable particles, each described by an $N$-dimensional basis, the combined system has dimension $N^p$. The state is represented as a tensor:

$$|\Psi\rangle = \sum_{i_1, \ldots, i_p} \Psi_{i_1 \cdots i_p}\, |i_1\rangle \otimes \cdots \otimes |i_p\rangle$$

**Data structure:** Multi-dimensional array of size $N \times N \times \cdots \times N$ ($p$ times).

```julia
# For 2 particles, each with N states
psi = Array{ComplexF64}(undef, N, N)
```

**Storage issue:** This scales exponentially ($O(N^p)$), limiting exact simulations to a small number of particles (e.g., roughly $p \lesssim 20$ spins for $N = 2$ on a workstation).
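For instance, a product state of two particles is the outer product of the single-particle vectors (a sketch with random amplitudes):

```julia
using LinearAlgebra

# Product state of two N-state particles, stored as an N×N tensor
N = 4
ψ1 = normalize(rand(ComplexF64, N))
ψ2 = normalize(rand(ComplexF64, N))
Ψ = ψ1 * transpose(ψ2)      # Ψ[i,j] = ψ1[i] ψ2[j], i.e. ψ1 ⊗ ψ2
# Entangled states have no such factorization and need all N² amplitudes.
```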
### 3. Operator Representation

#### 3.1 Matrix Representation
An operator $\hat{A}$ in basis $\{|n\rangle\}$ is represented by the $N \times N$ matrix:

$$\mathbf{A}_{mn} = \langle m|\hat{A}|n\rangle$$

Acting on a state $|\psi\rangle = \sum_n c_n |n\rangle$ gives:

$$\hat{A}|\psi\rangle \mapsto \sum_n \mathbf{A}_{mn}\, c_n$$

This is matrix-vector multiplication: $O(N^2)$ operations.

**Data structure:** Two-dimensional array requiring $O(N^2)$ storage.
#### 3.2 Common Operators
**Pauli matrices** (spin-1/2):

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

**Position operator** (diagonal in position basis):

$$\mathbf{X}_{ij} = x_i\, \delta_{ij}$$

**Momentum operator** (central finite-difference approximation):

$$\hat{p}\psi = -i\hbar \frac{\partial \psi}{\partial x} \approx -i\hbar\, \frac{\psi_{i+1} - \psi_{i-1}}{2\Delta x}$$

Matrix form (tridiagonal):

$$\mathbf{P}_{ij} = \frac{-i\hbar}{2\Delta x}\begin{cases}
1 & \text{if } j = i+1 \\
-1 & \text{if } j = i-1 \\
0 & \text{otherwise}
\end{cases}$$
**Hamiltonian** for particle in potential $V(x)$:
$$\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x})$$
$$\mathbf{H}_{ij} = \frac{\hbar^2}{2m\Delta x^2}\begin{cases}
2 & \text{if } i = j \\
-1 & \text{if } |i-j| = 1 \\
0 & \text{otherwise}
\end{cases} + V(x_i)\delta_{ij}$$
This is a **tridiagonal matrix** for the kinetic term plus a **diagonal matrix** for the potential.
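Diagonalizing this matrix recovers the familiar oscillator spectrum $E_n = \hbar\omega(n + \tfrac{1}{2})$; a sketch with $\hbar = m = \omega = 1$ and illustrative grid parameters:

```julia
using LinearAlgebra

N, L = 500, 10.0                  # grid points, half-width of the box
x = range(-L, L; length=N)
Δx = step(x)

# Tridiagonal kinetic term plus diagonal potential V(x) = x²/2 (ħ = m = ω = 1)
t = 1 / (2Δx^2)
H = SymTridiagonal(fill(2t, N) .+ x.^2 ./ 2, fill(-t, N - 1))

E = eigvals(H)
E[1:3]    # ≈ [0.5, 1.5, 2.5]
```

The finite-difference error is $O(\Delta x^2)$, so the low-lying levels agree with the exact values to a few parts in $10^4$ at this resolution.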
### 4. Sparse Matrix Representations
Many quantum mechanical operators are **sparse** (most elements are zero). Storing them as dense $N \times N$ arrays wastes memory and computational time.
#### 4.1 Storage Formats
**Coordinate (COO) format:**
Store non-zero elements as triplets $(i, j, \mathbf{A}_{ij})$.
**Compressed Sparse Row (CSR) format:**
- `values`: array of non-zero values
- `col_indices`: column index of each value
- `row_ptr`: where each row starts in `values`
**Compressed Sparse Column (CSC) format:**
Similar to CSR but compressed by columns.
```julia
using SparseArrays
# Create sparse Hamiltonian
N = 1000
diag_vals = fill(2.0, N)
off_diag_vals = fill(-1.0, N-1)
H = spdiagm(0 => diag_vals, 1 => off_diag_vals, -1 => off_diag_vals)
```
```rust
use sprs::{CsMat, TriMat};

let n: usize = 1000;
// Build sparse matrix in triplet (COO) format, then convert to CSR
let mut triplets = TriMat::new((n, n));
for i in 0..n {
    triplets.add_triplet(i, i, 2.0);
    if i < n - 1 {
        triplets.add_triplet(i, i + 1, -1.0);
        triplets.add_triplet(i + 1, i, -1.0);
    }
}
let h: CsMat<f64> = triplets.to_csr();
```
**Computational advantage:** Sparse matrix-vector multiply is $O(n_z)$ where $n_z$ is number of non-zeros, versus $O(N^2)$ for dense.
### 5. Key Computational Tasks
#### 5.1 Eigenvalue Problems: Finding Energy Levels
The time-independent Schrödinger equation is an eigenvalue problem:
$$\hat{H}|\psi_n\rangle = E_n |\psi_n\rangle \quad \Rightarrow \quad \mathbf{H}\mathbf{v}_n = E_n \mathbf{v}_n$$
**Goal:** Find eigenvalues $E_n$ and eigenvectors $\mathbf{v}_n$.
**Algorithms:**
1. **Full diagonalization** (for small $N \lesssim 10^3$):
- LAPACK routines: `zheev` (dense Hermitian)
- Returns all eigenvalues and eigenvectors
- Complexity: $O(N^3)$
2. **Power method** (simplest iterative scheme):
```
v = random_vector()
for iter in 1:max_iter
    v = H * v
    v = v / norm(v)
end
E = dot(v, H * v)
```
Converges to the eigenvector whose eigenvalue has the largest magnitude; to reach the ground state instead, iterate with a shifted matrix such as $\sigma\mathbf{I} - \mathbf{H}$ with $\sigma$ above the top of the spectrum. Complexity: $O(N^2)$ per iteration for dense, $O(n_z)$ for sparse.
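Since plain power iteration converges to the largest-magnitude eigenvalue, the sketch below iterates with the shifted matrix $\sigma\mathbf{I} - \mathbf{H}$ so the ground state dominates (the tight-binding $\mathbf{H}$ and shift are illustrative):

```julia
using LinearAlgebra

# Ground state via power iteration on the shifted matrix σI − H.
# H is a 1D tight-binding chain with spectrum in (0, 4).
function ground_state(H, σ; iters=2000)
    v = normalize(rand(size(H, 1)))
    for _ in 1:iters
        v = normalize((σ * I - H) * v)   # power step on σI − H
    end
    dot(v, H * v), v                     # Rayleigh quotient and eigenvector
end

N = 50
H = SymTridiagonal(fill(2.0, N), fill(-1.0, N - 1))
E0, v0 = ground_state(H, 4.0)            # σ = 4 bounds the spectrum (Gershgorin)
```

For this chain the exact ground energy is $2 - 2\cos(\pi/(N+1))$, which the Rayleigh quotient reproduces closely.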
3. **Inverse iteration** (for a specific eigenvalue near $\sigma$):
Solve $(\mathbf{H} - \sigma\mathbf{I})\mathbf{v}_{k+1} = \mathbf{v}_k$ and normalize at each step; this is power iteration applied to $(\mathbf{H} - \sigma\mathbf{I})^{-1}$ and converges to the eigenvalue closest to $\sigma$.
4. **Lanczos algorithm** (for sparse matrices, few eigenvalues):
- Builds tridiagonal approximation to $\mathbf{H}$ via Krylov subspace
- Complexity: $O(n_z k)$ for $k$ eigenvalues
- Very efficient for ground state and low-lying excited states
```julia
using Arpack
# Find lowest 10 eigenvalues
E, psi = eigs(H, nev=10, which=:SR) # SR = smallest real
```
5. **Jacobi-Davidson** (for interior eigenvalues):
- Combines Davidson method with correction equation
- Good for eigenvalues in middle of spectrum
#### 5.2 Time Evolution: Solving the TDSE
The time-dependent Schrödinger equation:
$$i\hbar \frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}|\psi(t)\rangle$$
Formal solution:
$$|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}|\psi(0)\rangle \quad \Rightarrow \quad \mathbf{v}(t) = e^{-i\mathbf{H}t/\hbar}\mathbf{v}(0)$$
**Challenge:** Computing matrix exponential $e^{-i\mathbf{H}\Delta t/\hbar}$ for time step $\Delta t$.
**Algorithms:**
1. **Spectral method** (if eigenstates known):
$$\mathbf{H} = \sum_n E_n |\mathbf{v}_n\rangle\langle\mathbf{v}_n| \quad \Rightarrow \quad e^{-i\mathbf{H}\Delta t/\hbar} = \sum_n e^{-iE_n\Delta t/\hbar}|\mathbf{v}_n\rangle\langle\mathbf{v}_n|$$
```julia
# After diagonalization: H = V * Diagonal(E) * V'
U = V * Diagonal(exp.(-1im * E * Δt / ℏ)) * V'
psi_new = U * psi
```
Exact for time-independent $H$, but requires full diagonalization ($O(N^3)$).
2. **Crank-Nicolson method** (implicit, second-order):
$$\mathbf{v}(t+\Delta t) = \left(\mathbf{I} + \frac{i\mathbf{H}\Delta t}{2\hbar}\right)^{-1}\left(\mathbf{I} - \frac{i\mathbf{H}\Delta t}{2\hbar}\right)\mathbf{v}(t)$$
Requires solving linear system at each step. Unconditionally stable and unitary.
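A single Crank-Nicolson step then reduces to one sparse linear solve (a sketch with $\hbar = 1$; the tridiagonal $\mathbf{H}$ and step size are illustrative):

```julia
using LinearAlgebra, SparseArrays

# Illustrative sparse Hamiltonian and random normalized state (ħ = 1)
N, Δt = 200, 0.01
H = spdiagm(0 => fill(2.0, N), 1 => fill(-1.0, N - 1), -1 => fill(-1.0, N - 1))
ψ = normalize(rand(ComplexF64, N))

# (I + iHΔt/2) ψ(t+Δt) = (I − iHΔt/2) ψ(t): one implicit step
A = I + (0.5im * Δt) * H
B = I - (0.5im * Δt) * H
ψ_new = A \ (B * ψ)
```

Because the update is a Cayley transform of a Hermitian matrix, the norm of `ψ_new` is preserved to machine precision.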
3. **Split-operator method** (for $H = T + V$):
$$e^{-i\mathbf{H}\Delta t/\hbar} \approx e^{-i\mathbf{V}\Delta t/(2\hbar)} e^{-i\mathbf{T}\Delta t/\hbar} e^{-i\mathbf{V}\Delta t/(2\hbar)} + O(\Delta t^3)$$
- Apply $e^{-i\mathbf{V}\Delta t/(2\hbar)}$ in position space (diagonal, trivial)
- FFT to momentum space
- Apply $e^{-i\mathbf{T}\Delta t/\hbar}$ in momentum space (diagonal)
- Inverse FFT back to position space
- Apply $e^{-i\mathbf{V}\Delta t/(2\hbar)}$ again
Very efficient: $O(N\log N)$ per step due to FFT.
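The sequence above can be sketched as follows, assuming the FFTW.jl package for the transforms (grid, potential, and wavepacket are illustrative):

```julia
using FFTW, LinearAlgebra   # assumes the FFTW.jl package is installed

# One split-operator step for H = p²/2m + V(x) with ħ = m = 1
N, L, Δt = 256, 20.0, 0.01
x = range(-L/2, L/2; length=N+1)[1:N]     # periodic grid (endpoint dropped)
Δx = step(x)
k = 2π .* fftfreq(N, 1/Δx)                # angular wavenumbers in FFT ordering
V = x.^2 ./ 2                             # harmonic potential

ψ = ComplexF64.(exp.(-(x .- 1).^2))       # displaced Gaussian wavepacket
ψ ./= norm(ψ)

expV = exp.(-im .* V .* (Δt/2))           # half potential step (position space)
expT = exp.(-im .* k.^2 ./ 2 .* Δt)       # full kinetic step (momentum space)
ψ = expV .* ifft(expT .* fft(expV .* ψ))  # V/2 → FFT → T → IFFT → V/2
```

Repeating this step propagates the wavepacket; every factor is unitary, so the norm is conserved exactly (up to roundoff).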
4. **Krylov subspace methods** (for sparse $\mathbf{H}$):
Approximate $e^{-i\mathbf{H}\Delta t/\hbar}\mathbf{v}$ using Krylov subspace $\text{span}\{\mathbf{v}, \mathbf{H}\mathbf{v}, \mathbf{H}^2\mathbf{v}, \ldots\}$.
- Build small matrix representation in Krylov basis
- Exponentiate small matrix (cheap)
- Project back to full space
Complexity: $O(n_z m)$ for Krylov dimension $m \ll N$.
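A bare-bones sketch of the idea (no restarting, reorthogonalization, or breakdown handling; the test Hamiltonian is illustrative):

```julia
using LinearAlgebra

# Krylov approximation of exp(-iHΔt)·v (ħ = 1)
function krylov_expmv(H, v, Δt; m=20)
    n = length(v)
    Q = zeros(ComplexF64, n, m)          # orthonormal Krylov basis
    T = zeros(ComplexF64, m, m)          # H projected into the subspace
    β = norm(v)
    Q[:, 1] = v / β
    for j in 1:m
        w = H * Q[:, j]
        for i in 1:j                     # Gram–Schmidt against previous vectors
            T[i, j] = dot(Q[:, i], w)
            w -= T[i, j] * Q[:, i]
        end
        j < m && (T[j+1, j] = norm(w); Q[:, j+1] = w / norm(w))
    end
    β * (Q * exp(-im * Δt * T)[:, 1])    # exponentiate small T, project back
end

N = 100
H = SymTridiagonal(fill(2.0, N), fill(-1.0, N - 1))
ψ0 = normalize(rand(ComplexF64, N))
ψ1 = krylov_expmv(H, ψ0, 0.1)
```

For $\|\mathbf{H}\Delta t\|$ of order one, quite small $m$ (10–30) already matches the exact matrix exponential to near machine precision.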
5. **Runge-Kutta methods** (explicit, higher-order):
Standard RK4 for ODEs:
$$\frac{d\mathbf{v}}{dt} = -\frac{i}{\hbar}\mathbf{H}\mathbf{v}$$
But RK4 does not preserve the norm exactly; the drift is acceptable only for very small $\Delta t$. Prefer the unitarity-preserving schemes above (Crank-Nicolson, split-operator, Krylov) or symplectic/geometric integrators when long-time stability matters.
#### 5.3 Expectation Values
For observable $\hat{A}$, expectation value is:
$$\langle A \rangle = \langle\psi|\hat{A}|\psi\rangle = \mathbf{v}^\dagger \mathbf{A} \mathbf{v}$$
**Computation:**
```julia
expectation = dot(psi, A * psi) # or: psi' * A * psi
```
For sparse $\mathbf{A}$: $O(n_z)$ complexity.
**Position expectation:** $\langle x \rangle = \sum_j x_j |\mathbf{v}_j|^2 \Delta x$
**Momentum expectation:** Requires computing $\langle\psi|\hat{p}|\psi\rangle$ using finite difference or FFT to momentum space.
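Both expectation values can be sketched as follows (central differences for $\hat{p}$, $\hbar = 1$; all parameter values illustrative):

```julia
# ⟨x⟩ and ⟨p⟩ for a grid wavefunction (ħ = 1)
N, L = 400, 20.0
x = range(-L/2, L/2; length=N)
Δx = step(x)

# Gaussian with mean position 1 and mean momentum 2
ψ = ComplexF64.(exp.(-(x .- 1).^2 ./ 2) .* cis.(2 .* x))
ψ ./= sqrt(sum(abs2, ψ) * Δx)

x_mean = sum(x .* abs2.(ψ)) * Δx                         # ⟨x⟩ ≈ 1

dψ = (ψ[3:end] .- ψ[1:end-2]) ./ (2Δx)                   # central difference ∂ψ/∂x
p_mean = real(-im * sum(conj.(ψ[2:end-1]) .* dψ) * Δx)   # ⟨p⟩ ≈ 2
```

The $O(\Delta x^2)$ error of the central difference shows up as a small systematic bias in $\langle p \rangle$; an FFT-based derivative removes it.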
### 6. Multi-Dimensional Systems
#### 6.1 Tensor Product Spaces
For particle in 3D, state is $\psi(x,y,z)$. On grid with $N_x \times N_y \times N_z$ points:
**Data structure:** 3D array with indices $(i,j,k)$ for $\psi(x_i, y_j, z_k)$.
```julia
psi = Array{ComplexF64}(undef, Nx, Ny, Nz)
```
**Hamiltonian:**
$$\mathbf{H} = \mathbf{H}_x \otimes \mathbf{I}_y \otimes \mathbf{I}_z + \mathbf{I}_x \otimes \mathbf{H}_y \otimes \mathbf{I}_z + \mathbf{I}_x \otimes \mathbf{I}_y \otimes \mathbf{H}_z + \mathbf{V}$$
where $\mathbf{H}_x$ is 1D Hamiltonian in $x$ direction, etc.
**Matrix-free approach:** Don't form full $\mathbf{H}$ (size $N_xN_yN_z \times N_xN_yN_z$). Instead, apply operators sequentially:
```julia
function apply_H!(result, psi, Hx, Hy, Hz, V)
Nx, Ny, Nz = size(psi)   # grid sizes taken from the state array, not globals
# Apply Hx ⊗ I ⊗ I
for k in 1:Nz, j in 1:Ny
result[:, j, k] = Hx * psi[:, j, k]
end
# Add I ⊗ Hy ⊗ I
for k in 1:Nz, i in 1:Nx
result[i, :, k] += Hy * psi[i, :, k]
end
# Add I ⊗ I ⊗ Hz
for j in 1:Ny, i in 1:Nx
result[i, j, :] += Hz * psi[i, j, :]
end
# Add potential
result .+= V .* psi
end
```
This approach scales as $O(N_x^2 N_y N_z + N_x N_y^2 N_z + N_x N_y N_z^2)$ instead of $O((N_xN_yN_z)^2)$.
#### 6.2 Reduced Dimensionality: Separation of Variables
For separable Hamiltonians:
$$\hat{H} = \hat{H}_1 \otimes \mathbf{I}_2 + \mathbf{I}_1 \otimes \hat{H}_2$$
Eigenstates factor:
$$|\psi\rangle = |\phi_1\rangle \otimes |\phi_2\rangle$$
Solve each subsystem independently:
- Diagonalize $\mathbf{H}_1$ (size $N_1 \times N_1$): $O(N_1^3)$
- Diagonalize $\mathbf{H}_2$ (size $N_2 \times N_2$): $O(N_2^3)$
Total: $O(N_1^3 + N_2^3) \ll O((N_1N_2)^3)$
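For example, two identical 1D oscillators combine into a 2D spectrum simply by adding 1D eigenvalues (a sketch with $\hbar = m = \omega = 1$; grid parameters illustrative):

```julia
using LinearAlgebra

# 1D harmonic-oscillator spectrum on a grid (ħ = m = ω = 1)
function ho_energies(N, L)
    x = range(-L, L; length=N)
    t = 1 / (2 * step(x)^2)
    eigvals(SymTridiagonal(fill(2t, N) .+ x.^2 ./ 2, fill(-t, N - 1)))
end

E1 = ho_energies(300, 8.0)             # subsystem 1
E2 = ho_energies(300, 8.0)             # subsystem 2 (identical here)
E2D = sort(vec(E1[1:5] .+ E2[1:5]'))   # 2D energies are sums E_n + E_m
E2D[1:3]                               # ≈ [1.0, 2.0, 2.0]
```

Only two $300 \times 300$ diagonalizations are needed, instead of one $90000 \times 90000$ problem.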
### 7. Practical Considerations
#### 7.1 Numerical Stability
**Normalization:** After time evolution, $\|\mathbf{v}\|$ should remain 1. Monitor:
$$\epsilon(t) = \big| \|\mathbf{v}(t)\|^2 - 1 \big|$$
Renormalize if drift exceeds tolerance: $\mathbf{v} \leftarrow \mathbf{v}/\|\mathbf{v}\|$
**Energy conservation:** For time-independent $H$, $\langle H \rangle$ should be constant. Monitor:
$$\Delta E(t) = \big| \langle\psi(t)|H|\psi(t)\rangle - E(0) \big|$$
#### 7.2 Convergence Tests
**Grid convergence:** Double $N$ and check observables change by less than desired tolerance.
**Time step convergence:** Halve $\Delta t$ and verify results converge.
#### 7.3 Memory Management
For large systems, memory is the limiting factor:
- **Dense matrix:** $N = 10^4 \Rightarrow 16N^2$ bytes $= 1.6$ GB (complex doubles)
- **Sparse matrix:** only $\sim 24\,n_z$ bytes in COO format (a 16-byte complex value plus two 32-bit indices per non-zero)
- **State vector:** $N = 10^6 \Rightarrow 16N$ bytes $= 16$ MB
**Strategy:** Use sparse matrices, matrix-free operators, and iterative solvers.
### 8. Summary of Algorithmic Complexity
| Task | Dense | Sparse | Method |
|------|-------|--------|--------|
| Store operator | $O(N^2)$ | $O(n_z)$ | CSR/CSC |
| Apply operator | $O(N^2)$ | $O(n_z)$ | Matrix-vector |
| Full diagonalization | $O(N^3)$ | — | LAPACK |
| Ground state | $O(kN^2)$ | $O(kn_z)$ | Power/Lanczos |
| Few eigenvalues | $O(N^3)$ | $O(kmn_z)$ | Lanczos/Arnoldi |
| Time step (spectral) | $O(N^2)$ | $O(N^2)$ | Precompute $U$ |
| Time step (Krylov) | $O(mN^2)$ | $O(mn_z)$ | Iterative |
| Time step (split-op) | $O(N\log N)$ | — | FFT-based |
| Expectation value | $O(N^2)$ | $O(n_z)$ | $\mathbf{v}^\dagger\mathbf{A}\mathbf{v}$ |
where $k$ = number of iterations, $m$ = Krylov dimension, $n_z$ = number of non-zeros.
### 9. Next Steps
This framework enables computation of:
- Bound state energies and wavefunctions
- Scattering problems (time-dependent wavepackets)
- Response to external fields
- Quantum dynamics and coherence
- Many-body systems (with tensor networks)
The key insight: **quantum mechanics reduces to linear algebra on complex vectors and matrices**. The challenge is scaling to large $N$ while maintaining accuracy and physical properties (unitarity, Hermiticity, norm conservation).