MPB Introduction



MIT Photonic Bands is a software package to compute definite-frequency eigenstates of Maxwell's equations in periodic dielectric structures. Its primary intended application is the study of photonic crystals: periodic dielectric structures exhibiting a band gap in their optical modes, prohibiting propagation of light in that frequency range. However, it could also be easily applied to compute other optical dispersion relations and eigenstates (e.g. for conventional waveguides such as fiber-optic cables).
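For concreteness, here is a small sketch in Python (using NumPy and SciPy) of the kind of computation involved, applied to the simplest possible case: a one-dimensional stack of two alternating dielectric layers. This is not MPB, and every parameter value in it is invented for illustration; MPB solves the full three-dimensional vectorial problem for arbitrary structures. The shape of the calculation is the same, though: expand the Bloch modes in planewaves and solve an eigenproblem at each wavevector.

  import numpy as np
  from scipy.linalg import eigh

  # Illustrative 1D photonic crystal: a layer of eps1, thickness d, then eps2,
  # repeating with period a = 1 (all values invented for this example).
  a, eps1, eps2, d = 1.0, 13.0, 1.0, 0.2
  N = 32                                  # planewaves G = 2*pi*n/a, n = -N..N

  # Fourier coefficients of eps(z), computed numerically on a fine grid.
  M = 4096
  z = np.arange(M) * a / M
  eps_hat = np.fft.fft(np.where(z < d, eps1, eps2)) / M

  # For Bloch modes E(z) = exp(ikz) * sum_G e_G exp(iGz), the 1D wave equation
  # -E'' = (w/c)^2 eps(z) E becomes the generalized eigenproblem
  #   (k+G)^2 e_G = (w/c)^2 * sum_G' eps_hat(G-G') e_G'.
  n = np.arange(-N, N + 1)
  G = 2 * np.pi * n / a
  B = eps_hat[(n[:, None] - n[None, :]) % M]     # Hermitian Toeplitz eps matrix

  for k in np.linspace(0.0, np.pi / a, 5):       # half the Brillouin zone
      A = np.diag((k + G) ** 2).astype(complex)
      w2 = eigh(A, B, eigvals_only=True)         # eigenvalues are (w/c)^2
      bands = np.sqrt(np.abs(w2[:4])) * a / (2 * np.pi)   # in units of c/a
      print(f"k*a/(2pi) = {k * a / (2 * np.pi):.3f}  lowest bands: {np.round(bands, 4)}")

The splitting between the first two bands at the Brillouin-zone edge is the one-dimensional band gap of this mirror.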

This manual assumes that the reader is familiar with concepts from solid-state physics such as eigenstates, band structures, and Bloch's theorem. We also do not attempt (much) to instruct the reader on photonic crystals or other optical applications for which this code might be useful. For an excellent introduction to all of these topics in the context of photonic crystals, see the book Photonic Crystals: Molding the Flow of Light, by J. D. Joannopoulos, R. D. Meade, and J. N. Winn (Princeton, 1995).

Some of the main design goals we were thinking about when we developed this package were the following (see also the feature list at the MPB home page):

  • Fully vectorial, three-dimensional calculations for arbitrary Bloch wavevectors. (The only approximation is the spatial discretization, or equivalently the planewave cutoff.)
  • Flexible interface. Readable, extensible, scriptable...see also the libctl design goals.
  • Parallel. (Can run on a single-processor machine, but also supports parallel machines with MPI.)
  • "Targeted" eigensolver: find modes nearest to a specified frequency, not just the lowest-frequency bands. (For defect calculations.)
  • Leverage existing software (LAPACK, BLAS, FFTW, HDF, MPI, GUILE...).
  • Modularity. The eigensolver, Maxwell's equations, user interface, and so on, should be oblivious to each other as much as possible. This way, they can be debugged separately, combined in various ways, replaced, used in other programs...all the usual benefits of modular design.
  • Take advantage of inversion symmetry in the dielectric function, but don't require it. (This means that we have to handle both real and complex fields.)

Frequency-Domain vs. Time-Domain

There are two common computational approaches to studying dielectric structures such as photonic crystals: frequency-domain and time-domain. We feel that each has its own place in a researcher's toolbox, and each has unique advantages and disadvantages. MPB is frequency-domain; that is, it does a direct computation of the eigenstates and eigenvalues of Maxwell's equations (using a planewave basis). Each field computed has a definite frequency. In contrast, time-domain techniques (as in our Meep software) iterate Maxwell's equations in time; the computed fields have a definite time (at each time step) but not a definite frequency per se. It seems worthwhile to say a few words about each method, and to explain why we want a frequency-domain code. (Our group has both time-domain and frequency-domain software, and we use both techniques extensively.)

Time-domain methods are well-suited to computing things that involve evolution of the fields, such as transmission and resonance decay-time calculations. They can also be used to calculate band structures and for finding resonant modes, by looking for peaks in the Fourier transform of the system response to some input. The main advantage of this is that you get all the frequencies (peaks) at once from a calculation involving propagation of a single field. There are several disadvantages to this technique, however. First, it is hard to be confident that you have found all of the states—you may have coupled weakly to some state by accident, or two states may be close in frequency and appear as a single peak; this is especially problematic in higher-order resonant-cavity and waveguide calculations. Second, in the Fourier transform, the frequency resolution is inversely related to the simulation time; to get 10 times the resolution you must run your simulation 10 times as long (although matters are improved by using more sophisticated signal-processing methods such as Harminv's). Third, the time-step size must be proportional to the spatial-grid size for numerical stability; thus, if you double the spatial resolution, you must double the number of time steps (the length of your simulation), even if you are looking at states with the same frequency as before. Fourth, you only get the frequencies of the states; to get the eigenstates themselves (so that you can see what the modes look like and do calculations with them), you must run the simulation again, once for each state that you want, and for a time inversely proportional to the frequency-spacing between adjacent states (i.e. a long time for closely-spaced states).
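As a concrete illustration of the second point, here is a small, self-contained numerical experiment in Python (unrelated to any of our software; the mode frequencies and run lengths are invented). Two modes 1% apart in frequency appear as a single spectral peak for a short run, and separate cleanly only when the run is ten times longer, because the Fourier resolution scales as 1/T:

  import numpy as np

  f1, f2, dt = 1.00, 1.01, 1e-3       # two modes 1% apart; invented numbers

  for T in (50.0, 500.0):             # a short run, then a run 10x longer
      t = np.arange(0.0, T, dt)
      x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
      spec = np.abs(np.fft.rfft(x))
      freq = np.fft.rfftfreq(len(t), d=dt)
      sel = (freq > 0.9) & (freq < 1.1)        # window around the two modes
      s, f = spec[sel], freq[sel]
      # local maxima above half the window's peak count as resolved modes
      peaks = [f[i] for i in range(1, len(s) - 1)
               if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > 0.5 * s.max()]
      print(f"T = {T:5.0f}: bin spacing 1/T = {1 / T:.4f}, peaks near {np.round(peaks, 3)}")

The short run reports a single peak near the average frequency; the long run resolves both modes.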

In contrast, frequency-domain methods like those in MPB are in many ways better-suited to calculating band structures and eigenstates. (Here, we consider the case of an iterative eigensolver like the one in MPB, which iteratively improves approximate eigenstates. Dense solvers, which factorize the matrix directly, are impractical for large problems because of the huge size of the matrix, and because they compute many more eigenvectors than are desired. In iterative methods, the operator is only applied to individual vectors and is never itself computed explicitly.) First, you don't have to worry about missing states—even closely-spaced modes will appear as two eigenvalues in the result. Second, the error in the frequency in an iterative eigensolver typically decays exponentially with the number of iterations, so the number of iterations is logarithmic in the desired tolerance. Third, the number of iterations typically remains almost constant even as you increase the resolution (the work for each iteration increases, of course, but that happens in time-domain too). Fourth, you get both the frequencies and the eigenstates at the same time, so you can look at the modes immediately (even closely-spaced ones).
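The matrix-free pattern at the heart of such solvers is easy to demonstrate with an off-the-shelf iterative eigensolver. The following Python sketch applies a generic Lanczos-type solver (via SciPy's ARPACK interface) to a toy one-dimensional Laplacian; it is not MPB's algorithm, which is a preconditioned block minimization specialized to the Maxwell operator, but it exhibits the key property: the solver only ever requests the operator applied to a vector, and the matrix itself is never built.

  import numpy as np
  from scipy.sparse.linalg import LinearOperator, eigsh

  # Toy stand-in for the Maxwell operator: a 1D discrete Laplacian with
  # Dirichlet boundaries, available only as a function applied to vectors.
  n = 200

  def apply_op(v):
      w = 2.0 * v
      w[:-1] -= v[1:]
      w[1:] -= v[:-1]
      return w

  A = LinearOperator((n, n), matvec=apply_op, dtype=float)

  # Iterative solve for the four smallest eigenpairs; the solver only calls
  # apply_op, so the operator is never stored as an explicit matrix.
  vals, vecs = eigsh(A, k=4, which='SA', ncv=40, tol=1e-8)
  exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, 5) / (n + 1))
  print("computed:", vals)
  print("exact:   ", exact)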

A traditional disadvantage of frequency-domain methods was that you had to compute all of the lowest eigenstates, up to the desired one, even if you didn't care about the lower ones. This was especially problematic in defect calculations, in which a large supercell is used, because in that case the lower bands are "folded" many times in the Brillouin zone. Thus, you often had to compute a large number of bands in order to get to the one you wanted (incurring large costs both in time and in storage). These disadvantages largely disappear in MPB, however, with the advent of its "targeted" eigensolver—with it, you can solve directly for the localized defect states (i.e. the states in the band gap) without computing the lower bands.
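The sketch below illustrates a targeted solve on the same toy Laplacian, using a standard spectral transformation (shift-invert): the eigenvalues nearest a chosen shift are obtained directly, without computing any of the lower-lying ones. This is not MPB's method; shift-invert requires factorizing the shifted operator, which is impractical at the problem sizes MPB targets, so MPB instead iterates with a transformed operator (as mentioned in the History section below). The principle of going straight to the interior eigenvalues is the same:

  import numpy as np
  import scipy.sparse as sp
  from scipy.sparse.linalg import eigsh

  # The same toy 1D Laplacian, now stored as a sparse matrix so that SciPy
  # can factorize (A - sigma*I) for shift-invert.
  n = 2000
  A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')

  # "Targeted" solve: the four eigenvalues nearest sigma, found directly;
  # the shift plays the role of MPB's specified target frequency.
  sigma = 1.0
  vals, _ = eigsh(A, k=4, sigma=sigma, which='LM')
  print("eigenvalues nearest", sigma, ":", np.sort(vals))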

History

In order to shed some light on the past development and design decisions of the Photonic-Bands package, I thought I'd write a few paragraphs about its history. Read on to enjoy my narcissistic ramblings.

Many different people have written codes to compute the modes of periodic dielectric materials (although we don't know of any others that have been freely released). I have had experience with several programs written within our group, and this experience has guided the design of MIT Photonic-Bands.

The first program of this sort that I came in contact with, and the code that was used in our group until the development of this package, was initially written around 1990 (in Fortran 77) by R. D. Meade. As this software grew organically over time, several problems became apparent. First, the input format was inflexible (difficult to add new features without breaking old simulations), sensitive to whitespace and other formatting, and required repeated entry of information that was often the same from file to file. Often, pre- and post-processing steps were required using additional scripts and tools. Second, the many parts of the program had become intertwined, lacking modularity or a clear flow of control; this made it difficult to follow or modify substantially. Parallelizing it, or removing constraints that it imposed like inversion symmetry, or even replacing the input format seemed impractical. Besides, even reading code with variables named gxgzco (in common block cabgv) (honest!) is a mind-altering experience.

After an initial experience in the Spring of 1996 at writing a code based on a wavelet, rather than a planewave, basis (which turned out not to be practical), I set out to write a replacement for our Fortran eigensolver. My main aims at this time (Fall 1996) were a more flexible and powerful input format and a code that would be amenable to parallelization. I succeeded in achieving a working code with similar convergence to the old code (after some pain), but I discovered several things. I had lots of fun learning to use lex and yacc (Yet Another Compiler-Compiler) to make a flexible, C-like input format with variables and other advanced features. Having input files that were almost, but not quite, like a programming language made me realize, however, that what I really wanted was a programming language--no matter how many features I added, I always wanted one more "simple" thing.

As far as parallelization, I quickly realized that I had a problem: I needed a parallel FFT, and the only ones that were available were proprietary (non-portable), used incompatible data distribution formats, and were often designed to be called only from special languages like HPF. That, plus my dissatisfaction with the available free FFTs, led me to embark on a side project (with my friend Matteo Frigo of MIT's Laboratory for Computer Science) to develop a new, free FFT library, FFTW, that included portable parallel routines. Also, I decided that attempting to support too many models of parallel programming (threads, MPI, Cray shmem) in one program resulted in a mess; it was better to stick with MPI (supporting running only a single process too, of course).

Another mistake I discovered was that I allowed the eigensolver to get too intertwined with the specific problem of Maxwell's equations--the eigensolver knew about the data structures for the fields, etcetera, making it difficult to plug in replacement eigensolvers, test things in isolation, or to implement features like the "targeted" eigensolver of Photonic-Bands (which diagonalizes a different operator). The whole program was too mired in complexity. Finally, in the interim I had learned about block eigensolver algorithms. Not only can such algorithms leverage prepackaged, highly-optimized routines like BLAS and LAPACK, but they also promised to be inherently more suited for parallelization (since they remove the serial process of solving for the bands one by one). All of these things convinced me that I needed to rewrite the code again from scratch.

So, I started work on the new package (in Spring 1998), this time determined to develop and debug each component (matrix operations, eigensolver, Maxwell operator, user input) in isolation. At the same time, I was thinking about how I would implement the user control language, wanting to develop a general tool that could be applied to other problems and software in our research. It seemed clear that, in order to get other people to use it in their programs, as well as to avoid a lot of the manual labor that went into my previous effort, I wanted to automatically generate the user/program interface from an abstract specification. As for the control language, I briefly considered implementing my own, but was happily led to GNU Guile instead, which gave me a powerful language with little effort. So armed, I set out to write libctl (which generates the Guile interface from an abstract Scheme specification), partially as an experiment to see how hard it would be and what the result would look like. After a weekend of work, it was obvious that I had a powerful tool; I spent a couple of weeks adding some finishing touches, writing documentation, and so on, and proudly showed it off to my groupmates in the hope that they could use it for their programs. Without a real example of a program using libctl, however, it was hard to convince them to plug a scripting language into their existing, working codes. So, I went back to puttering at my eigensolvers.

Of course, all this time I was allegedly doing real research, and long periods would go by with little progress on the Photonic-Bands package. The original, Fortran code was still working, and in time one learned to bear its quirks and limitations with stoicism, although we cringed every time we had to show it to anyone else. By the summer of 1999, I had a working block eigensolver (supporting several iteration-scheme variants), a Maxwell operator to plug into the eigensolver (including a "targeted" operator, whose convergence I was unhappy with), and a test program to do convergence experiments on bands of a Bragg mirror. I hadn't attached any general user interface, field output, or other necessary components. At this point, Dr. Doug Allan of Corning (a former student of Prof. Joannopoulos) heard about the new code--in particular, the targeted eigensolver--and began clamoring to try it out. Not put off by my excuses, he asked for a copy of my current code, regardless of its status, to play with. Not wanting to refuse, but aghast at the prospect of someone seeing my masterpiece only half-painted, I told him to give me a week...in which time I added the interface and discovered that I had a useful tool. Over the next week, I added many features, fixed bugs, and wrote documentation, drawing near to a release at last...
