Installation


Note: Installing MPB from source can be challenging for novice users. As a simple workaround, the latest version of MPB preinstalled on Ubuntu can be accessed on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) as a free Amazon Machine Image (AMI). To access this AMI, follow these instructions.

In this section, we outline the procedure for installing MPB. Mainly, this consists of downloading and installing various prerequisites. As much as possible, we have attempted to take advantage of existing packages such as BLAS, LAPACK, FFTW, and Guile, in order to make our code smaller, more robust, faster, and more flexible. Unfortunately, this may make the installation of MPB more complicated if you do not already have these packages.

You will also need an ANSI C compiler (gcc is fine) and installation will be easiest on a UNIX-like system (Linux is fine). In the following list, some of the packages are dependent upon packages listed earlier, so you should install them in more-or-less the order given.

Many of these libraries may be available in precompiled binary form, especially for Linux systems. Be aware, however, that library binary packages often come in two parts, library and library-dev, and both are required to compile programs that use the library.

It is important that you use the same Fortran compiler to compile Fortran libraries (like LAPACK) and for configuring MPB. Different Fortran compilers often have incompatible linking schemes. The Fortran compiler for MPB can be set via the F77 environment variable.

Installation on macOS

See the installation guide for Meep on macOS for the easiest way to do this.

Unix Installation Basics

Installation Paths

First, let's review some important information about installing software on Unix systems, especially with regard to installing software in non-standard locations. None of these issues are specific to MPB, but they've caused a lot of confusion among users.

Most of the software below, including MPB, installs under /usr/local by default. That is, libraries go in /usr/local/lib, programs in /usr/local/bin, etc. If you don't have root privileges on your machine, you may need to install somewhere else, e.g. under $HOME/install (the install/ subdirectory of your home directory). Most of the programs below use a GNU-style configure script, which means that all you would do to install there would be:

./configure --prefix=$HOME/install   ...other flags...

when configuring the program. The directories $HOME/install/lib etc. are created automatically as needed.

Paths for Configuring

There are two further complications. First, if you install dependencies in a non-standard location like $HOME/install/lib, you will need to tell the compilers where to find the libraries and header files that you installed. You do this by passing two variables to ./configure:

./configure LDFLAGS="-L$HOME/install/lib" CPPFLAGS="-I$HOME/install/include"   ...other flags...

Of course, substitute whatever installation directory you used. You may need to include multiple -L and -I flags separated by spaces if your machine has stuff installed in several non-standard locations.
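
For example, if packages were installed under both $HOME/install and a second, purely illustrative prefix /opt/local, you might use:

./configure LDFLAGS="-L$HOME/install/lib -L/opt/local/lib" CPPFLAGS="-I$HOME/install/include -I/opt/local/include"   ...other flags...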

You might also need to update your PATH so that you can run the executables; e.g. if we installed in our home directory as described above, we would do:

export PATH="$HOME/install/bin:$PATH"

Paths for Running (Shared Libraries)

Second, many of the packages installed below (e.g. Guile) are installed as shared libraries. You need to make sure that your runtime linker knows where to find these shared libraries. The bad news is that every operating system does this in a slightly different way. If you installed all of your libraries in a standard location on your operating system (e.g. /usr/lib), then the runtime linker will look there already and you don't need to do anything. Otherwise, if you compile things like libctl and install them into a "nonstandard" location (e.g. in your home directory), you will need to tell the runtime linker where to find them.

There are several ways to do this. Suppose that you installed libraries into the directory /foo/lib. The most robust option is probably to include this path in the linker flags (LDFLAGS above):

./configure LDFLAGS="-L$HOME/install/lib -Wl,-rpath,$HOME/install/lib"   ...other flags...

There are also some other ways. If you use Linux, have superuser privileges, and are installing in a system-wide location (not your home directory!), you can add the library directory to /etc/ld.so.conf and run /sbin/ldconfig.
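
For example, on many Linux systems this might look like the following (the exact layout of /etc/ld.so.conf varies by distribution, and some use an /etc/ld.so.conf.d/ directory instead):

echo "/foo/lib" | sudo tee -a /etc/ld.so.conf
sudo /sbin/ldconfig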

On many systems, you can also specify directories to the runtime linker via the LD_LIBRARY_PATH environment variable, e.g. export LD_LIBRARY_PATH="$HOME/install/lib:$LD_LIBRARY_PATH". You can add this line to your .profile file (depending on your shell) so that it is set every time you start a shell. (On macOS, a security feature called System Integrity Protection causes the value of LD_LIBRARY_PATH to be ignored, so this approach won't work there.)

Fun with Fortran

MPB, along with many of the libraries it calls, is written in C or C++, but it also calls libraries such as BLAS and LAPACK (see below) that are usually compiled from Fortran. This can cause some added difficulty because of the various linking schemes used by Fortran compilers. Our configure script attempts to detect the Fortran linking scheme automatically, but in order for this to work you must use the same Fortran compiler and options with MPB as were used to compile BLAS/LAPACK.

By default, MPB looks for a vendor Fortran compiler first (f77, xlf, etcetera) and then looks for GNU g77. In order to manually specify a Fortran compiler foobar you would configure it with ./configure F77=foobar ....

If, when you compiled BLAS/LAPACK, you used compiler options that alter the linking scheme (e.g. g77's -fcase-upper or -fno-underscoring), you will need to pass the same flags to MPB via ./configure FFLAGS=...flags... ....
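
For example, if your BLAS/LAPACK were built with gfortran using -fno-underscoring (an illustrative compiler and flag choice, not a recommendation), you would configure MPB with the same settings:

./configure F77=gfortran FFLAGS="-fno-underscoring"   ...other flags...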

Picking a Compiler

It is often important to be consistent about which compiler you employ. This is especially true for C++ software. To specify a particular C compiler foo, configure with ./configure CC=foo; to specify a particular C++ compiler foo++, configure with ./configure CXX=foo++; to specify a particular Fortran compiler foo90, configure with ./configure F77=foo90.

Linux and BSD Binary Packages

If you are installing on your personal Linux or BSD machine, then precompiled binary packages are likely to be available for many of these packages, and may even have been included with your system. On Debian systems, the packages are in .deb format and the built-in apt-get program can fetch them from a central repository. On Red Hat, SuSE, and most other Linux-based systems, binary packages are in RPM format. OpenBSD has its "ports" system, and so on.

Do not compile something from source if an official binary package is available. For one thing, you're just creating pain for yourself. Worse, the binary package may already be installed, in which case installing a different version from source will just cause trouble.

One thing to watch out for is that libraries like LAPACK, Guile, HDF5, etcetera, will often come split up into two or more packages: e.g. a guile package and a guile-devel package. You need to install both of these to compile software using the library.
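
For example, on a Debian-style system you might install both halves of the Guile and LAPACK packages with something like the following (exact package names vary by distribution and release):

sudo apt-get install guile-2.0 guile-2.0-dev liblapack3 liblapack-dev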

To build the latest version of MPB from source on Ubuntu 16.04, follow these instructions.

BLAS and LAPACK

MPB requires the BLAS and LAPACK libraries for matrix computations.

BLAS

The first thing you must have on your system is a BLAS implementation. "BLAS" stands for "Basic Linear Algebra Subprograms," and is a standard interface for operations like matrix multiplication. It is designed as a building-block for other linear-algebra applications, and is used both directly by our code and in LAPACK (see below). By using it, we can take advantage of many highly-optimized implementations of these operations that have been written to the BLAS interface. Note that you will need implementations of BLAS levels 1-3.

You can find more BLAS information, as well as a basic implementation, on the BLAS Homepage. Once you get things working with the basic BLAS implementation, it might be a good idea to try and find a more optimized BLAS code for your hardware. Vendor-optimized BLAS implementations are available as part of the Intel MKL, HP CXML, IBM ESSL, SGI sgimath, and other libraries. An excellent, high-performance, free-software BLAS implementation is OpenBLAS. Another is ATLAS.
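
If you go with OpenBLAS, it ships with its own Makefile-based build; a typical from-source installation (the installation prefix below is just an example) looks something like:

git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
make
sudo make install PREFIX=/usr/local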

Note that the generic BLAS does not come with a Makefile; compile it with something like:

  wget http://www.netlib.org/blas/blas.tgz
  gunzip blas.tgz
  tar xf blas.tar
  cd BLAS
  f77 -c -O3 *.f   # compile all of the .f files to produce .o files
  ar rv libblas.a *.o    #  combine the .o files into a library
  su -c "cp libblas.a /usr/local/lib"   # switch to root and install

Replace -O3 with your favorite optimization options. On Linux, this could be g77 -O3 -fomit-frame-pointer -funroll-loops -malign-double. Note that MPB looks for the standard BLAS library with -lblas, so the library file should be called libblas.a and reside in a standard directory like /usr/local/lib. See also below for the --with-blas=lib option to MPB's configure script, to manually specify a library location.

LAPACK

LAPACK, the Linear Algebra PACKage, is a standard collection of routines, built on BLAS, for more-complicated (dense) linear algebra operations like matrix inversion and diagonalization. You can download LAPACK from the LAPACK Home Page.

Note that MPB looks for LAPACK by linking with -llapack. This means that the library must be called liblapack.a and be installed in a standard directory like /usr/local/lib. Alternatively, you can specify another directory via the LDFLAGS environment variable as described earlier. See also below for the --with-lapack=lib option to our configure script, to manually specify a library location.

We currently recommend installing OpenBLAS, which includes LAPACK, so you do not need to install it separately.
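
On Debian/Ubuntu systems, for instance, a single binary package provides both the BLAS and LAPACK interfaces (the package name may differ on other distributions):

sudo apt-get install libopenblas-dev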

MPI (parallel machines)

Optionally, MPB is able to run on a distributed-memory parallel machine, and to do this we use the standard message-passing interface (MPI). You can learn about MPI from its homepage. Most commercial supercomputers already have an MPI implementation installed. The recommended implementation is Open MPI. MPI is not required to compile the serial version of MPB.

In order for the MPI version to run successfully, we have a slightly nonstandard requirement: each process must be able to read from the disk. This way, Guile can boot for each process and they can all read your control file in parallel. Most commercial supercomputers satisfy this requirement.

If you use MPB with MPI, you should compile HDF5 with MPI support as well. See below.

Also, in order to get good performance, you'll need fast interconnect hardware such as Gigabit Ethernet, InfiniBand, or Myrinet. The speed bottleneck comes from the FFT, which accounts for most of the communication in MPB. FFTW's MPI transforms (see below) come with benchmark programs that will give you a good idea of whether you can get speedups on your system. Of course, even with slow communications, you can still benefit from the memory savings per CPU for large problems.

As described below, when you configure MPB with MPI support (--with-mpi), it installs itself as mpb-mpi. See also the User Interface for information on using MPB on parallel machines. Normally, you should also install the serial version of MPB, if only to get the mpb-data utility, which is not installed with the MPI version.
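
For example, once mpb-mpi is installed, a hypothetical 4-process run on a control file foo.ctl (the file name is just an example) would look something like:

mpirun -np 4 mpb-mpi foo.ctl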

HDF5

We require a portable, standard binary format for outputting the electromagnetic fields and similar volumetric data, and for this we use HDF. If you don't have HDF5, you can still compile MPB, but you won't be able to output the fields or the dielectric function.

HDF is a widely-used, free, portable library and file format for multi-dimensional scientific data, developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. You can get HDF and learn about it on the HDF Home Page.

We require HDF5, which is supported by a number of scientific visualization tools, including our own h5utils utilities.

HDF5 includes parallel I/O support under MPI, which can be enabled by configuring it with --enable-parallel. You may also have to set the CC environment variable to mpicc. Unfortunately, the parallel HDF5 library then does not work with serial code, so you may have to choose one or the other.
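
As a sketch, building HDF5 from source with parallel I/O enabled might look like the following (the version number and installation prefix are illustrative):

cd hdf5-x.y.z
./configure --enable-parallel CC=mpicc --prefix=/usr/local
make
sudo make install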

We have some hacks in MPB so that it can do parallel I/O even with the serial HDF5 library. These hacks work okay when you are using a small number of processors, but on large supercomputers we strongly recommend using the parallel HDF5.

Note: If you have a version of HDF5 compiled with MPI parallel I/O support, then you need to use the MPI compilers to link to it, even when you are compiling the serial version of MPB. Just use ./configure CC=mpicc CXX=mpic++ or whatever your MPI compilers are when configuring.

FFTW

FFTW is a self-optimizing, portable, high-performance FFT implementation, including both serial and parallel FFTs. You can download FFTW and find out more about it from the FFTW Home Page.

If you want to use MPB on a parallel machine with MPI, you will also need to install the MPI FFTW libraries. This just means including --enable-mpi in the FFTW configure flags.
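
For example, building FFTW from source with the MPI transforms enabled might look like this (the version number is illustrative):

cd fftw-3.x.y
./configure --enable-mpi
make
sudo make install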

Readline (optional)

Readline is a library to provide command-line history, tab-completion, emacs keybindings, and other shell-like niceties to command-line programs. This is an optional package, but one that can be used by Guile (see below) if it is installed. We recommend installing it. You can download Readline from its ftp site. Readline is typically preinstalled on Linux systems.

Guile

Guile is required in order to use the Scheme interface, and is strongly recommended. If you don't install it, you can only use the C++ interface.

Guile is an extension/scripting language implementation based on Scheme, and we use it to provide a rich, fully-programmable user interface with minimal effort. It's free, of course, and you can download it from the Guile Home Page. Guile is typically included with Linux systems.

  • Important: Most Linux distributions come with Guile already installed. You can check by seeing whether you can run guile --version from the command line. In that case, do not install your own version of Guile from source — having two versions of Guile on the same system will cause problems. However, by default most distributions install only the Guile libraries and not the programming headers — to compile libctl and MPB, you should install the guile-devel or guile-dev package.

Autoconf (optional)

If you want to be a developer of the MPB package as opposed to merely a user, you will also need the Autoconf program. Autoconf is a portability tool that generates configure scripts to automatically detect the capabilities of a system and configure a package accordingly. You can find out more at the Autoconf Home Page. autoconf is typically installed by default on Linux systems. In order to install Autoconf, you will also need the GNU m4 program if you do not already have it. See the m4 Home Page.

libctl

libctl, which requires Guile, is required to use the Scheme interface, and is strongly recommended. If you don't install it, you can only use the C++ interface. libctl version 3.2 or later is required.

Instead of using Guile directly, we separated much of the user interface code into a package called libctl, in the hope that this might be more generally useful. libctl automatically handles the communication between the program and Guile, converting complicated data structures and so on, to make it even easier to use Guile to control scientific applications. Download libctl from the libctl page, unpack it, and run the usual configure, make, make install sequence. You'll also want to browse the libctl manual, as this will give you a general overview of what the user interface will be like.

If you are not the system administrator of your machine, and/or want to install libctl somewhere else like your home directory, you can do so with the standard --prefix=dir option to configure. The default prefix is /usr/local. In this case, however, you'll need to specify the location of the libctl shared files for MPB, using the --with-libctl=dir/share/libctl option to our configure script.
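
As a sketch, assuming a non-root install under $HOME/install (an illustrative prefix), the libctl build would be:

cd libctl-x.y.z        # illustrative version
./configure --prefix=$HOME/install
make
make install

and later, when configuring MPB, you would point it at the shared libctl files:

./configure --with-libctl=$HOME/install/share/libctl LDFLAGS="-L$HOME/install/lib" CPPFLAGS="-I$HOME/install/include"   ...other flags...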

MPB

If you've made it all the way here, you're ready to install the MPB package and start cranking out eigenmodes. You can obtain the latest version from the Download page. Once you've unpacked it, just run:

./configure
make

to configure and compile the package. See below to install. Hopefully, the configure script will correctly detect the BLAS, FFTW, etcetera libraries which have been installed, as well as the C compiler and so on, and the make compilation will proceed without a hitch. If not, configure accepts several flags to help control its behavior. Some of these are standard, like --prefix=dir to specify an installation directory prefix, and some of them are specific to the MPB package. Use ./configure --help for more info. The configure flags specific to MPB are:

--with-inv-symmetry
           Assume inversion symmetry in the dielectric function, allowing us to use real fields in Fourier space instead of complex fields. This gives a factor of 2 benefit in speed and memory. In this case, the MPB program will be installed as mpbi instead of mpb, so that you can have versions both with and without inversion symmetry installed at the same time. To install both mpb and mpbi, you should do:

./configure
make
sudo make install
make distclean
./configure --with-inv-symmetry
make
sudo make install

--with-hermitian-eps
           Support the use of complex-hermitian dielectric tensors corresponding to magnetic materials, which break inversion symmetry.

--enable-single
           Use single precision (C float) instead of the default double precision (C double) for computations. Not recommended.

--without-hdf5
           Don't use the HDF5 library for field and dielectric-function output; in this case, no field output is possible.

--with-mpi
           Attempt to compile a parallel version of MPB using MPI; the resulting program will be installed as mpb-mpi. Requires MPI and MPI FFTW libraries to be installed, as described above. This does not compile the serial MPB or mpb-data; if you want those, you should make distclean and compile/install them separately. --with-mpi can be used along with --with-inv-symmetry, in which case the program is installed as mpbi-mpi.

--with-openmp
           Attempt to compile a shared-memory parallel version of MPB using OpenMP. The resulting program will be installed as mpb and FFTs will use OpenMP parallelism. Requires OpenMP FFTW libraries to be installed.

--with-libctl=dir
           If libctl was installed in a nonstandard location (i.e. neither /usr nor /usr/local), you need to specify the location of the libctl directory, dir. This is either prefix/share/libctl, where prefix is the installation prefix of libctl, or the original libctl source code directory. If you instead pass --without-libctl, then only the libmpb library (for use e.g. by Meep) is built, not the mpb executable; this is useful for people building Meep without Guile.

--with-blas=lib
           The configure script automatically attempts to detect accelerated BLAS libraries, like DXML (DEC/Alpha), SCSL and SGIMATH (SGI/MIPS), ESSL (IBM/PowerPC), ATLAS, and PHiPACK. You can, however, force a specific library name to try via --with-blas=lib.

--with-lapack=lib
           Cause the configure script to look for a LAPACK library called lib. The default is to use -llapack.

--disable-checks
           Disable runtime checks. Not recommended. The disabled checks shouldn't take up a significant amount of time anyway.

--enable-prof
           Compile for performance profiling.

--enable-debug
           Compile for debugging, adding extra runtime checks and so on.

--enable-debug-malloc
           Use special memory-allocation routines for extra debugging (to check for array overwrites, memory leaks, etcetera).

--with-efence
           More debugging: use the Electric Fence library, if available, for extra runtime array bounds-checking.

You can further control configure by setting various environment variables, such as:

  • CC: the C compiler command
  • CFLAGS: the C compiler flags (defaults to -O3).
  • CPPFLAGS: -Idir flags to tell the C compiler additional places to look for header files.
  • LDFLAGS: -Ldir flags to tell the linker additional places to look for libraries.
  • LIBS: additional libraries to link against.
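
For example, a hypothetical configure invocation combining several of these variables (the compiler names and paths are only illustrative) might be:

./configure CC=gcc CXX=g++ CFLAGS="-O3" LDFLAGS="-L$HOME/install/lib" CPPFLAGS="-I$HOME/install/include"   ...other flags...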

Once compiled, the main program (as opposed to various test programs) resides in the mpb-ctl/ subdirectory and is called mpb. You can install this program under /usr/local (or elsewhere, if you used the --prefix flag for configure) by running:

sudo make install

The "sudo" command is to switch to root for installation into system directories. You can just do make install if you are installing into your home directory instead.

If you make a mistake (e.g. you forget to specify a needed -Ldir flag) or in general want to start over from a clean slate, you can restore MPB to a pristine state by running:

make distclean

Python

The Python interface to MPB depends on Meep and can currently only be built as part of Meep. Eventually, the dependency on Meep will be removed, and the Python interface will be available from the MPB repository.

The following instructions are for building parallel PyMeep with serial PyMPB from source on Ubuntu 16.04. The parallel version can still be run serially by running a script with just python instead of mpirun -np 4 python. If you really don't want to install MPI and parallel HDF5, just replace libhdf5-openmpi-dev with libhdf5-dev, and remove the --with-mpi, CC=mpicc, and CXX=mpic++ flags. The paths to HDF5 will also need to be adjusted to /usr/lib/x86_64-linux-gnu/hdf5/serial and /usr/include/hdf5/serial. Note that this script builds with Python 3 by default. If you want to use Python 2, just point the PYTHON variable to the appropriate interpreter when calling autogen.sh for building Meep, and use pip instead of pip3.

#!/bin/bash

set -e

RPATH_FLAGS="-Wl,-rpath,/usr/local/lib:/usr/lib/x86_64-linux-gnu/hdf5/openmpi"
MY_LDFLAGS="-L/usr/local/lib -L/usr/lib/x86_64-linux-gnu/hdf5/openmpi ${RPATH_FLAGS}"
MY_CPPFLAGS="-I/usr/local/include -I/usr/include/hdf5/openmpi"

sudo apt-get update
sudo apt-get -y install     \
    libblas-dev             \
    liblapack-dev           \
    libgmp-dev              \
    swig                    \
    libgsl-dev              \
    autoconf                \
    pkg-config              \
    libpng16-dev            \
    git                     \
    guile-2.0-dev           \
    libfftw3-dev            \
    libhdf5-openmpi-dev     \
    hdf5-tools              \
    libpython3.5-dev        \
    python3-numpy           \
    python3-scipy           \
    python3-matplotlib      \
    python3-pip

mkdir -p ~/install

cd ~/install
git clone https://github.com/NanoComp/harminv.git
cd harminv/
sh autogen.sh --enable-shared
make && sudo make install

cd ~/install
git clone https://github.com/NanoComp/libctl.git
cd libctl/
sh autogen.sh --enable-shared
make && sudo make install

cd ~/install
git clone https://github.com/NanoComp/h5utils.git
cd h5utils/
sh autogen.sh CC=mpicc LDFLAGS="${MY_LDFLAGS}" CPPFLAGS="${MY_CPPFLAGS}"
make && sudo make install

cd ~/install
git clone https://github.com/NanoComp/mpb.git
cd mpb/
sh autogen.sh --enable-shared CC=mpicc LDFLAGS="${MY_LDFLAGS}" CPPFLAGS="${MY_CPPFLAGS}" --with-hermitian-eps
make && sudo make install

sudo pip3 install --upgrade pip
pip3 install --user --no-cache-dir mpi4py
export HDF5_MPI="ON"
pip3 install --user --no-binary=h5py h5py

cd ~/install
git clone https://github.com/NanoComp/meep.git
cd meep/
sh autogen.sh --enable-shared --with-mpi PYTHON=python3 \
    CC=mpicc CXX=mpic++ LDFLAGS="${MY_LDFLAGS}" CPPFLAGS="${MY_CPPFLAGS}"
make && sudo make install

You may want to add the following line to your .profile so Python can always find the meep package:

export PYTHONPATH=/usr/local/lib/python3.5/site-packages
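
To sanity-check the installation, you can try importing the modules from the command line; this assumes the Python interface built by Meep is exposed as the meep.mpb module:

python3 -c "import meep; from meep import mpb; print(mpb.__name__)"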