An object-oriented approach to finite-difference-based eigensolving for Maxwell's equations. Eigensolving is more nuanced than it might initially seem: what eigenvalue are you solving for, frequency or k, and how do you formulate the problem for each? FDFD offers an interesting point of flexibility in eigenvalue solving compared to traditional methods such as PWEM. In PWEM, you explicitly convert real space to k-space; in that regard, you cannot solve for a k-eigenvalue.
- Advantage 1: FDFD can solve the
- Advantage 1.5: In the same vein as 1, solving for dispersive media.
- Advantage 2: Varying levels of abstraction on the dependence of k. This allows us to solve eigenproblems that PWEM cannot, such as 3D waveguides.
- Advantage 3: No k-space discretization artifacts (i.e. the Gibbs phenomenon).
TM polarization: Hz, Ex, Ey (H field out of plane)
TE polarization: Ez, Hx, Hy (E field out of plane)
1D simulations are along the x axis. 2D simulations are along the x and y axes.
Because eigensolving on sparse FDFD matrices can be tricky, the eigen classes do not implement any solvers. Instead, you get access to the final operator, and it is your responsibility to set up the sparse eigensolve however you like. Typically, though, eigensolving requires you to supply a guess of omega, as in the sketch below.
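A minimal sketch of one way to do this with scipy's shift-invert Arnoldi solver. The operator here is a placeholder, and `omega_guess` and the mapping between the operator's eigenvalue and omega (e.g. whether it is omega or omega squared) depend on how your particular problem is formulated:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder operator; in practice use the sparse matrix exposed by an eigen class.
rng = np.random.default_rng(0)
A = sp.diags(rng.random(1000)) + sp.random(1000, 1000, density=1e-3, random_state=0)
A = A.tocsc()

omega_guess = 0.5  # rough guess of the eigenvalue of interest (scaling depends on the formulation)

# Shift-invert returns the k eigenvalues closest to sigma, which is usually what
# you want when hunting for interior modes of a large sparse operator.
vals, vecs = spla.eigs(A, k=6, sigma=omega_guess, which='LM')
```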
In this type of problem, there is no explicit resolution of the wavevector k; we only solve for the eigenvalue.
In this problem, we can specify real
In this eigenproblem, we are looking for modes perpendicular to the surface of the 1D Bragg mirror.
The example below is a metallic conductor waveguide.
These contain Python scripts meant to be run from the command line (as opposed to the Jupyter notebooks). Among them are parallel implementations of the band solvers using Python's multiprocessing module; a rough sketch of the idea follows.
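Roughly, the parallel band solvers map the per-k eigensolve over a pool of worker processes. The sketch below illustrates the pattern; `solve_modes_at_k` is a hypothetical stand-in for whatever per-k solve the scripts actually perform:

```python
import numpy as np
from multiprocessing import Pool

def solve_modes_at_k(kx):
    # Hypothetical per-k solve: assemble the Bloch-periodic FDFD operator for this
    # wavevector and eigensolve it. A placeholder return keeps the sketch runnable.
    return np.sort(np.abs([np.sin(kx), np.sin(2 * kx), np.sin(3 * kx)]))

if __name__ == '__main__':
    k_points = np.linspace(0, np.pi, 50)                        # sweep the Brillouin zone
    with Pool(processes=4) as pool:
        bands = np.array(pool.map(solve_modes_at_k, k_points))  # shape: (num_k, num_bands)
```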
Note that Python uses 'C'-contiguous ordering for its n>1 dimensional arrays. I will be using 'F' ordering of the arrays (which is what MATLAB uses). That means when you reshape a flat eigenvector, you should do np.reshape(flat_array, new_dim, order='F'), otherwise your result will look scrambled.
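For example (the grid dimensions here are arbitrary):

```python
import numpy as np

Nx, Ny = 60, 40                                    # arbitrary grid dimensions
flat_eigenvector = np.arange(Nx * Ny)              # stand-in for a flattened field
field = np.reshape(flat_eigenvector, (Nx, Ny), order='F')   # column-major, MATLAB-style
```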
Since, in general, reciprocity is not broken for electrodynamic systems, we expect the matrix to be symmetric. Note that the matrix should not be Hermitian (that might turn loss into gain). However, when implementing things like a PML, the matrix may come out nonsymmetric. Solving a nonsymmetric matrix differs significantly from solving a symmetric one, so it is to our advantage to have a symmetric PML.
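A quick numerical check, assuming `A` is the assembled sparse operator (note this tests complex symmetry, A == A.T with a plain transpose, not Hermiticity):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def is_complex_symmetric(A, tol=1e-10):
    """True if the sparse operator satisfies A == A.T (transpose without conjugation)."""
    return spla.norm(A - A.T) <= tol * spla.norm(A)

# e.g. a random symmetric test matrix
M = sp.random(200, 200, density=0.02, format='csr')
assert is_complex_symmetric(M + M.T)
```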
scipy implements eigs and eigsh, which are wrappers around ARPACK's implicitly restarted Arnoldi/Lanczos iterative eigenvalue solvers. However, another commonly used method is Jacobi-Davidson, which scipy does not support. One of the examples shows how to use SLEPc for solving this eigenvalue problem.
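For orientation, here is a minimal slepc4py sketch of a Jacobi-Davidson solve; this is not necessarily how the repository's example sets it up, and the operator below is a placeholder:

```python
import numpy as np
import scipy.sparse as sp
from petsc4py import PETSc
from slepc4py import SLEPc

# Placeholder operator; in practice, hand over the assembled FDFD matrix in CSR form.
rng = np.random.default_rng(0)
A_sp = (sp.diags(rng.random(500)) + sp.random(500, 500, density=0.01, random_state=0)).tocsr()
A = PETSc.Mat().createAIJ(size=A_sp.shape, csr=(A_sp.indptr, A_sp.indices, A_sp.data))

eps = SLEPc.EPS().create()
eps.setOperators(A)
eps.setProblemType(SLEPc.EPS.ProblemType.NHEP)            # non-Hermitian problem
eps.setType(SLEPc.EPS.Type.JD)                            # Jacobi-Davidson
eps.setDimensions(nev=6)                                  # number of eigenpairs to compute
eps.setTarget(0.5)                                        # guess for the eigenvalue of interest
eps.setWhichEigenpairs(SLEPc.EPS.Which.TARGET_MAGNITUDE)
eps.solve()

for i in range(eps.getConverged()):
    print(eps.getEigenvalue(i))
```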
- Implementing PMLs and PECs in a universal way across all eigensolvers. I'd like to implement everything as left and right preconditioners, but that might not be viable, so I'm using workarounds at present.
- correct Bloch boundary conditions