Tuesday, January 22, 2013

Writing inference algorithms made easier with NIFTY

Today, my colleagues and I, led by the heroic efforts of Marco Selig, released a new software package called NIFTY (Numerical Information Field Theory). It's a Python library that is designed to simplify the development, testing, and application of statistical inference algorithms (among other things). It does this in a couple of ways.

First, one generally derives an algorithm with pen and paper (or my favorite, chalk), arriving at an equation or set of equations that can be used to obtain an estimate of some model parameters or continuous field (as in the case of image reconstruction, for example). In our group, we use the mathematical framework of Information Field Theory to derive such equations from first principles. NIFTY was designed to make the translation from mathematical language into computer code as simple and intuitive as possible.

But the real power of NIFTY is that it allows one to write algorithms without (much) consideration of the underlying geometry of the problem. Typically an algorithm that's written to be applied to a 3D Cartesian geometry, say, wouldn't work for data that is defined on a sphere. This is not the case with NIFTY, because it handles the geometry, or "space", separately from the operators and fields used to describe the problem. In NIFTY, operators and fields are associated with some space when they are initialized, and will automatically call the appropriate operations for that space when required. This means that one can simply change the space on which objects are defined without fiddling with any of the rest of the code.
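To make that separation concrete, here is a toy illustration of the design idea. These are not NIFTY's actual classes, just a sketch of the pattern: the space owns the geometry-specific operations, and a field simply delegates to whichever space it was given.

import numpy as np

class RegularGridSpace:
    """A 1D regular grid that knows how to smooth values defined on it."""
    def __init__(self, npix, dist=1.0):
        self.npix, self.dist = npix, dist
    def smooth(self, values, sigma):
        # Gaussian smoothing via the FFT, using this space's pixel spacing
        k = np.fft.fftfreq(self.npix, d=self.dist)
        kernel = np.exp(-2.0 * (np.pi * k * sigma) ** 2)
        return np.fft.ifft(np.fft.fft(values) * kernel).real

class Field:
    """Values plus the space they live on; geometry details are left to the space."""
    def __init__(self, space, values):
        self.space, self.values = space, np.asarray(values)
    def smooth(self, sigma):
        return Field(self.space, self.space.smooth(self.values, sigma))

space = RegularGridSpace(256, dist=0.1)
f = Field(space, np.random.randn(256))
f_smooth = f.smooth(sigma=0.5)   # swapping in a different space class would not change this line

In NIFTY itself the same idea extends to operators, which likewise call whatever transforms and other operations their space provides.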

One advantage of this approach is that it allows for rapid testing of code: for example, one can develop an idea quickly in 1D and then, once the software is stable, very easily apply it to a 3D problem. It also means that code can be recycled more easily, since an algorithm designed to solve a problem on the sphere can often be reused to solve a problem in some other geometry by changing just one or two lines of code (redefining the spaces).

Furthermore, NIFTY provides a lot of helpful tools that are often needed when writing inference algorithms, such as transformations between commonly used pairs of conjugate spaces, matrix probing, vector smoothing operations, etc. In all, it aims to make the life of the algorithm developer easier.

To illustrate how NIFTY works (I hope this will be illustrative, anyway), say you have a data vector, d, that is related to some physical signal vector, s, via a simple linear relationship

d = Rs + n

where n is a noise vector and R is an operator (represented here as a matrix) that encodes the effects of your measurement device. If both the signal and noise are well modeled using Gaussian statistics, then it can be shown that the Wiener filter is the optimal estimator for s. The Wiener filter can be expressed as

m = (S⁻¹ + R†N⁻¹R)⁻¹ R†N⁻¹d

where m is the estimate for s, and S and N are the covariance matrices of the signal and noise, respectively. To be clear, X⁻¹ implies matrix inversion and X† implies transpose conjugation.

In NIFTY, each of these vectors and operators will be assigned to special NIFTY Python objects. The vectors are defined as instances of NIFTY "field" classes, and the operators are defined as instances of NIFTY "operator" classes. Each of these has an associated "space" class, which defines the geometry of the problem. Assuming that all of the necessary objects are initialized already, coding the filter is as simple as:

m = D(R.adjoint_times(N.inverse_times(d)))

where D, which is the (S⁻¹ + R†N⁻¹R)⁻¹ part of the formula, is an instance of the "operator" class with an inverse_times() method defined as

def inverse_times(x):
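    # D^{-1} x = S^{-1} x + R^dagger N^{-1} R x, i.e. the inverse of D defined above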
    return S.inverse_times(x) + \
        R.adjoint_times(N.inverse_times(R(x)))

To apply D when we have only defined D.inverse_times(), we generally use the conjugate gradient method. That's about it. Somewhere before this a space object has been defined, and it is specified when R, S, N, and d are initialized. The space definition can be changed to your heart's content, and the code shown here doesn't change a bit.
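For anyone who wants to see the conjugate gradient step spelled out, here is a minimal NumPy/SciPy sketch of the same Wiener filter for a toy 1D problem with diagonal covariances and a trivial response. The setup and names are illustrative assumptions on my part, not the NIFTY interface; the point is only that the solver never needs anything beyond the action of D's inverse.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n_pix = 256
S_diag = np.ones(n_pix)                   # toy signal covariance (diagonal)
N_diag = 0.1 * np.ones(n_pix)             # toy noise covariance (diagonal)
R = np.eye(n_pix)                         # trivial response matrix
s_true = np.random.randn(n_pix)
d = R @ s_true + np.sqrt(N_diag) * np.random.randn(n_pix)

def D_inverse_times(x):
    # D^{-1} x = S^{-1} x + R^dagger N^{-1} R x
    return x / S_diag + R.T @ ((R @ x) / N_diag)

j = R.T @ (d / N_diag)                    # the "source" term R^dagger N^{-1} d
D_inv = LinearOperator((n_pix, n_pix), matvec=D_inverse_times)
m, info = cg(D_inv, j)                    # info == 0 means the solver converged

Swapping in structured covariances or a realistic response only changes D_inverse_times and the source term; the conjugate gradient call itself stays the same.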

I think you can see that this code is a very natural translation of the mathematical expressions shown above, and furthermore that the distinction between fields, operators, and spaces makes the algorithm much simpler to develop and much more flexible to work with.

NIFTY has been released as open source under the GPLv3 license. We have also written an accompanying paper that has been posted to the arXiv today and submitted to the journal IEEE Transactions on Signal Processing. For more information, or to get NIFTY:

Thursday, December 6, 2012

Code release: pyrmsynth - Fast RM synthesis in Python

Another in what will be a series of code release posts. It's not that I've been some kind of coding superhero lately, but that I've got a backlog of code to release in the wild and am only now finding time to do it.

Last week I released pyrmsynth, a fast, Python-based RM synthesis package that includes a CLEAN algorithm for simple deconvolution. This is the RM synthesis software that I wrote for the LOFAR collaboration, and it has been in use on their compute cluster for some time.

The pyrmsynth package is both an application for performing RM synthesis on multi-frequency polarized imaging data and a library of sorts for writing your own applications. If your imaging data is in FITS format, or can be converted into FITS, then the included rmsynthesis.py application will probably do the trick for you out of the box. Or, if you have data in another format, you can write your own file I/O and data parsing script and use the included RMSynth and RMClean classes to actually perform the RM synthesis on your data.

The advantage of using this package is its flexibility and relative speed. The interface is written in Python, so writing scripts for I/O, data manipulation, and visualization is easy. It's also easy to modify and extend the core functionality, if you need to. Despite being mainly written in Python, the code is still fast for two reasons: 
  1. We use FFTs to perform the imaging
  2. We have written the gridding code in Cython (which generates C code) to enable the use of FFTs even with irregularly sampled data.
FFTs require that the data be defined at regular intervals, but in RM synthesis the sampling of λ² space is generally irregular. We use a fancy binning method known as gridding to resample the data onto regular intervals, thereby enabling use of the speedy FFT routines. The gridding would itself be slow if written in pure Python, so instead we have implemented it in C (via Cython), and the result is very fast code. On my machine, with the current version, I can process 10,000 lines of sight (with CLEAN and including file I/O) in ~6 seconds.
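To make the transform itself concrete, here is what RM synthesis computes for a single line of sight when evaluated by direct summation, following the standard definition of the Faraday dispersion function. This is only an illustrative sketch, not the pyrmsynth API; pyrmsynth gets the same answer much faster by gridding onto a regular λ² axis and using an FFT.

import numpy as np

def rmsynth_direct(P, lam2, phi, lam2_0=None):
    # P    : complex polarization samples (Q + iU), one per frequency channel
    # lam2 : the corresponding (generally irregular) lambda^2 values
    # phi  : Faraday depths at which to evaluate the Faraday dispersion function
    if lam2_0 is None:
        lam2_0 = lam2.mean()
    K = 1.0 / len(lam2)   # uniform channel weights, for simplicity
    # F(phi) = K * sum_i P_i exp(-2i phi (lambda_i^2 - lambda_0^2))
    phase = np.exp(-2j * np.outer(phi, lam2 - lam2_0))
    return K * phase @ P

lam2 = np.sort(np.random.uniform(0.04, 0.4, 100))   # irregular lambda^2 coverage
P = np.exp(2j * 15.0 * lam2)                         # a single source at phi = 15 rad/m^2
F = rmsynth_direct(P, lam2, np.linspace(-100.0, 100.0, 401))

The direct sum costs a number of operations proportional to (number of Faraday depths) × (number of channels) per line of sight, which is exactly the cost that the gridding + FFT approach avoids.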

pyrmsynth is released under GPLv3. Contributions are welcome for feature additions, bug fixes, documentation, etc. Just contact me or fork the project on GitHub and get cracking.

http://mrbell.github.com/pyrmsynth/

Sunday, November 25, 2012

Code release: GFFT

This week I released a new Python package to the world. It's called GFFT (General Fast Fourier Transformation), and it makes applying (relatively) fast Fourier transformations to a wider class of data much simpler.

Fast Fourier transformation in Python is pretty simple in many circumstances already. Just use the NumPy FFT and you're all set, right? Well, that's true if you have data that is regularly sampled, e.g. you've repeatedly made some measurement exactly once every second. But what if your measurements are taken at irregular intervals? Or what if you need to apply a lot of phase shifts because your data doesn't start at the origin of the coordinate system on which it is defined?
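To illustrate the second point, here is a small NumPy-only example of the phase bookkeeping required when the sampling axis does not start at the origin (the Fourier shift theorem). This is the kind of detail GFFT is meant to take off your hands; the names below are illustrative and not the GFFT interface.

import numpy as np

n, dx, x0 = 128, 0.5, 3.2                 # the axis starts at x0, not at the origin
x = x0 + dx * np.arange(n)
f = np.exp(-0.5 * (x - 20.0) ** 2)        # some function sampled on that axis

k = np.fft.fftfreq(n, d=dx)               # conjugate-space coordinates
# np.fft.fft implicitly assumes the first sample sits at x = 0, so the offset x0
# has to be restored by hand with a phase factor (the shift theorem):
F = dx * np.exp(-2j * np.pi * k * x0) * np.fft.fft(f)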

Well, these are both situations that I and others in my group face quite often, and we found that we were using similar pieces of code again and again in a variety of projects. So Henrik Junklewitz and I developed GFFT to make life a bit simpler. The features of GFFT include:

  • Fast Fourier transformations of data defined on, and being transformed to, arbitrarily defined coordinate spaces including:
    • regular to regular
    • irregular to regular
    • regular to irregular
    • irregular to irregular
  • Fast transformations, even when working on irregular spaces, thanks to the included gridding algorithm (a rough sketch of the idea follows this list).
  • The gridding code is implemented in C (via Cython), so it's fast.
  • When working with irregular to regular space transforms, you can store irregularly sampled Hermitian symmetric data in an efficient way... just store half of the data and the symmetric parts will be generated internally.
  • Automatically handles arbitrary phase shifts of the input and output data if they are not defined on axes that start at the origin.
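As promised above, here is a rough NumPy sketch of the gridding idea, just to show conceptually what the Cython code is doing. The kernel choice, the de-apodization step, and all performance tricks are simplified away, and none of these names come from GFFT itself.

import numpy as np

def grid_samples(x, f, grid, sigma):
    # Convolve irregularly sampled values (x, f) onto a regular grid with a
    # Gaussian kernel, so that an ordinary FFT can be applied afterwards.
    g = np.zeros(len(grid), dtype=complex)
    for xi, fi in zip(x, f):
        g += fi * np.exp(-0.5 * ((grid - xi) / sigma) ** 2)
    return g

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))         # irregular sample positions
f = np.cos(2 * np.pi * 0.7 * x)                  # values at those positions

grid = np.linspace(0.0, 10.0, 256, endpoint=False)
g = grid_samples(x, f, grid, sigma=2 * (grid[1] - grid[0]))
F = np.fft.fft(g) * (grid[1] - grid[0])
# A full implementation would also divide out the kernel's transform in the
# conjugate domain (de-apodization) and restrict the kernel's support for speed.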
Interested? Check out the project on GitHub: http://mrbell.github.com/gfft/

Please note that as of now the package is in beta, and a major revision is already underway (see the rewrite branch in the git repository). The revision will clean up the code base significantly (it's almost embarrassingly sloppy right now, but you know how it goes when you develop code for scientific applications...) and will make the interface more intuitive. So if you start using GFFT immediately, just be aware that we are planning to fundamentally change the interface soon and future updates may break your code (we'll put in a deprecation warning first...). Things will stabilize soon. Until then, try it out and let us know what you think!

Friday, November 23, 2012

New submission: Improving CLEAN for rotation measure synthesis

This week we submitted a new paper to A&A where we suggest a method for improving CLEAN images in the context of RM synthesis. The method allows one to make lower resolution images while obtaining better results than when using CLEAN alone, and moreover it makes the results much less dependent on the choice of pixelization.

What we and many others have found is that RMCLEAN doesn't do such a great job at reconstructing the locations or fluxes of sources, especially when there are several of them close together. When writing the 3D CLEAN algorithm for Faraday synthesis, we noticed that RMCLEAN doesn't even do that great with a single source unless it is located directly in the middle of an image pixel. The dynamic range in a CLEANed image is well known to be limited by the fact that you can't exactly model the location of a source in a pixelized image. You can partially overcome the issue by making the image very high resolution, but especially for 3D imaging this becomes expensive, both computationally and in terms of storage.

So we looked into this in a bit more detail and devised a method for improving the CLEAN-generated model using maximum likelihood (ML) estimation. The method is similar to others that have been suggested for aperture synthesis imaging, but it seems to have even more impact in the case of RM synthesis. In our testing, we found that the ML method dramatically reduces the error in measurements of both source location and flux. Somewhat surprisingly, we also found that increasing the resolution doesn't reduce the errors in normal CLEAN images when two sources are close to one another. However, the ML algorithm was able to get accurate results in such a case even in a low resolution image.

Ultimately we don't think that CLEAN (or any method that assumes the signal to be diagonal in pixel space) is the right approach in all cases, and we're actively working on new methods that take advantage of correlated structures in the data to help constrain reconstructions. Nevertheless, CLEAN is undoubtedly a useful algorithm in some circumstances, and together with this new algorithm it can give rather good results while being easy to implement and (relatively) computationally inexpensive.

The pre-print is available now on the arXiv. Give it a read if you want to learn more. And if you're interested in implementing the method for your own analysis, then be sure to stay tuned because I plan to release the code for this relatively soon.

Friday, June 24, 2011

An early look at LOFAR imaging results

The LOFAR station near Munich
Things are progressing quite nicely with the LOw Frequency ARray, a.k.a. LOFAR. Despite the significant challenges involved with operating a radio telescope of its kind (maybe I'll post something about this one day), high-quality data is now routinely being collected, and some very impressive images are being produced (see e.g. the ASTRON picture of the day from March 18th). Who knew that you could throw a bunch of relatively cheap dipole antennas on the ground and end up with a world-class telescope? It hasn't been possible without a lot of hard work and ingenuity, I assure you. And while there are still many very complicated (and interesting) calibration and imaging issues left to solve, overall I would say that we are on the verge of producing scientific-quality results.

Don't believe me? Well, George Heald from ASTRON has just published a conference proceeding highlighting some of the recent achievements of a group of hardworking LOFAR commissioners. To this paper, I have contributed some results from our polarization commissioning efforts. Check it out. You can find it on the arXiv here: http://arxiv.org/abs/1106.3195

Saturday, May 28, 2011

Faraday caustics paper submitted

Together with Henrik Junklewitz and Torsten Enßlin, I've just submitted a new paper to A&A introducing Faraday caustics to the world.  Never heard of them?  I don't blame you, we just coined the phrase.  So what are they?  They are singularities that appear in the Faraday spectrum, i.e. the distribution of polarized intensity as a function of Faraday depth (analogous to rotation measure).  These singularities are caused by reversals of the magnetic field direction along the line of sight.

Sounds pretty obscure, I know, but there are two reasons why they are important to understand.  First, the RM synthesis technique (used to measure Faraday spectra) is becoming very popular.  Anyone looking at the polarized emission from diffuse polarized sources, e.g. the synchrotron emission from the interstellar medium in the Milky Way, is bound to find these Faraday caustics in their data.  It's important to understand what these features are and where they come from in order to properly interpret the results.  Furthermore, we show how Faraday caustics can be used as a powerful tool for studying the structural and statistical properties of magnetic fields.  In this way, observations of Faraday caustics could be used to greatly improve our understanding of both the large- and small-scale properties of the magnetic field in the Milky Way, for example.

If you are interested in learning more, our paper entitled "Faraday caustics: Singularities in the Faraday spectrum and their utility as probes of magnetic field properties" is available in pre-print now.

SS 433 jet evolution paper accepted!

This week I received word that a paper that I've written along with David Roberts and John Wardle, entitled "Structure and Magnetic Fields in the Precessing Jet System SS433 III. Evolution of the Intrinsic Brightness of the Jets from a Deep Multi-Epoch VLA Campaign" (whew... that's a mouthful), has been accepted for publication in the Astrophysical Journal.  It is tentatively scheduled to appear in the July 10, 2011 issue (volume 735, number 2), so keep an eye out.  Of course, if you can't wait that long there is a pre-print version available.

As you might have gathered from the title, this is one in a series of papers on SS 433.  I am working on a follow up to this now where we will look at the images of linear polarization from the same data analyzed in paper III.  We hope to have it submitted by the Fall.  For reference, here are links to the first two papers in the series: Paper I, Paper II.