Summary
This manual explains the use of a Python script for the analysis of video meteor spectra. The theory behind it is described in Dubs and Schlatter (2015): meteor_spectroscopy_wgn43-4_2015.pdf.
Until now, the method has been applied successfully to analyse meteor spectra obtained with highly sensitive video cameras such as the Watec 902 H2 Ultimate or colour cameras such as the Sony alpha 7S, which allow video recording. Unfortunately the method required the use of different programs for the various steps of the processing, since standard spectroscopy software is not adapted to the special requirements of meteor spectroscopy, with a fast moving spectral source and nonlinear spectra caused by oblique incidence of the light to be analysed. By combining the different steps of the analysis in a single pipeline written in Python, the analysis should become easier, faster and more reliable. The processing steps include the extraction of the images from the video file, the definition of a background image, the subtraction of this background, and the transformation of the images to an orthographic projection, which results in linear spectra. These are superposed (registered) and converted to 1-dimensional spectra by summing over several rows. The intermediate images of the processing are stored as FITS images. The advantage of this format is that it also contains information about the image creation and processing, which is important for spectra of professional quality. The last steps are the wavelength calibration and the plotting of the final spectra. The procedure follows the processing described in Dubs and Maeda (2016): Calibration_of_Meteor_Spectra_Dubs_IMC2016.pdf.
At the moment some processing steps are not yet implemented, such as the flat correction and the instrument response. These will be added later if needed.
The program is still at the development stage, so users are asked to report problems and errors for correction in later versions.
Introduction
For amateurs, the analysis of meteor spectra has been quite difficult. Standard spectroscopy software is not particularly suited for meteor spectra, so different programs had to be used for the various parts of the processing, which discouraged potential users and made the analysis quite complex (see the manual processing-meteor-spectra-v151.pdf for the old method). Most of the processing steps are the same for different meteors, so a processing pipeline which combines these steps can simplify the analysis considerably. The pipeline is written as a Python script.
Python was chosen, because
- it contains all the necessary tools to do the analysis
- it finds widespread use in the astronomy community
- it is free
- it runs on different platforms
I was inspired to use Python by Giovanni Leidi, who gave a talk about the use of Python for the analysis of spectra at the spectroscopy workshop at OHP 2018 (https://www.shelyak.com/ohp-spectro-star-party-2018/).
For the processing of meteor spectra it is necessary to calibrate the equipment beforehand. For this a separate script, also in Python, was written (see:
Note of caution: I am new to Python, so the script presented here may not be the best solution. Some things have been done in a complicated way, by copying examples from different sources and trying to make them work. It certainly should be improved for clarity and safety of operation; therefore I hope you will suggest improvements.
The manual is divided into three main sections. First, a general description of the processing steps is given; these steps would be followed by any processing software. The next section describes the installation of Python, so that the specific Python scripts for analysing meteor spectra can be run. The following section gives a detailed description of how to process a video file into a calibrated meteor spectrum using the script m_pipe6.py. Finally, some hints and tricks are given for efficient use of the script.
The pipeline m_pipe6.py is intended for a first analysis of meteor spectra. For more detailed studies and for the reanalysis of processed meteor spectra, standard spectroscopy software such as ISIS or Vspec may be used. The registered images produced by the Python script are in a format suitable for these spectroscopy programs.
In addition, a short script, m_spec.py, was written for the reanalysis of meteor spectra, which also allows correction of tilt and slant. In a future version, these features will be integrated into the pipeline. The reanalysis of meteor spectra produced by other software (in particular IRIS and ImageTools) with m_spec.py is not always possible at the moment, because of some incompatibilities in the FITS image format (the script expects either 16-bit integer or -1 to +1 floating point data values, ‘BITPIX’ = 16 or ‘BITPIX’ = -32). This will be corrected if necessary in the future.
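As an illustration of the kind of conversion that is sometimes needed, the sketch below rescales a 16-bit FITS image to 32-bit floating point values between -1 and +1; the file names, the scaling factor and the use of astropy are my assumptions, not part of the original scripts:

```python
import numpy as np
from astropy.io import fits

# Hypothetical example: rescale a 16-bit FITS image produced by other software
# to 32-bit floating point values between -1 and +1 ('BITPIX' = -32).
data = fits.getdata("spectrum_from_iris.fit").astype(np.float32)
fits.writeto("spectrum_converted.fit", data / 32768.0, overwrite=True)
```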
Processing of meteor spectra, overview
Image extraction from video sequence
The starting point for the analysis is a video file. Typically these are recorded with UFO Capture, a program which detects meteors in real time. The program uses a pretrigger, so that the recording starts about one second before the meteor appears and lasts until the end of the meteor. This video is split into an image sequence which is stored as *.BMP images. In Python this is done with a call to ffmpeg (https://www.ffmpeg.org/). For the Watec camera, the images are either single frames, or the interlaced frames can be separated into two fields, each containing a half frame (even and odd scan lines). These have to be arranged in the correct order (bottom field first or top field first). The use of fields, with double the image rate, is useful for fast meteors with a large velocity component in the dispersion direction, which reduces the spectral resolution.
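As a rough illustration of this step (not the actual code of m_pipe6.py), the extraction can be scripted as a call to ffmpeg from Python. The file names are hypothetical, and the ffmpeg separatefields filter is one possible way to split interlaced frames into two fields:

```python
import subprocess
from pathlib import Path

def extract_frames(video, out_dir, fields=False):
    """Split a meteor video into a numbered *.BMP image sequence using ffmpeg.

    If fields is True, each interlaced frame is separated into its two fields,
    which doubles the image rate (useful for fast meteors).
    """
    Path(out_dir).mkdir(exist_ok=True)
    cmd = ["ffmpeg", "-i", video]
    if fields:
        cmd += ["-vf", "separatefields"]      # one output image per field
    cmd += [f"{out_dir}/frame_%04d.bmp"]      # numbered BMP images
    subprocess.run(cmd, check=True)

extract_frames("M20190804_012345.avi", "frames", fields=True)
```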
For colour videos, there are two processing methods: in colour or in black and white. Processing in colour produces nice colour images of the spectra and makes the assignment of spectral lines easier. On the other hand, it is three times slower and uses a lot of disk space. For the extraction of 1-d spectra the colour images have to be converted to b/w images anyway, so it is usually more convenient to do the whole processing in b/w. Another possibility to speed up the processing of 4k videos is to bin the images during the video extraction (only 2×2 binning is supported at present).
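Both operations are simple array manipulations. The following NumPy sketch is my own illustration, not code taken from the pipeline:

```python
import numpy as np

def to_gray(rgb):
    """Convert an RGB image (height x width x 3) to b/w by averaging the colour channels."""
    return rgb.mean(axis=2)

def bin2x2(img):
    """2x2 binning: average blocks of 2x2 pixels, e.g. to speed up 4k video processing."""
    ny, nx = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:ny, :nx].reshape(ny // 2, 2, nx // 2, 2).mean(axis=(1, 3))
```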
Background image
An important step in the processing of meteor spectra is background subtraction. The video records not only the meteor spectrum, but also other light sources such as sky background, starlight and light pollution from different sources. Most of this does not vary rapidly and can be subtracted before further processing of the spectra.
The background image is made by averaging a number of images without the meteor. For this, the first images of the video can be used. The pretrigger in UFO Capture is normally set to about one second. At a frame rate of 25 images per second this means that the first 20 images can be used for the background, leaving some safety margin in case the detection algorithm misses the first few meteor images. Averaging 20 images reduces the statistical noise in the background by a factor of the square root of 20, or about 4.5. If the video is converted to fields, the image rate is doubled and the number of images used for the background can be doubled as well.
This background image is subtracted from the remaining images containing the meteor spectra.
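In NumPy the background determination and subtraction amount to only a few lines. The sketch below is an illustration with assumed names; it presumes that the extracted images have already been read into a list of 2-d arrays called frames:

```python
import numpy as np

# 'frames' is assumed to be a list of 2-d NumPy arrays, one per extracted image.
# The first 20 frames (the one second pretrigger at 25 fps) contain no meteor.
background = np.mean(frames[:20], axis=0)

# Subtract the background from the remaining frames containing the meteor spectrum.
# Averaging 20 frames reduces the background noise by about sqrt(20), i.e. ~4.5.
meteor_frames = [frame.astype(np.float32) - background for frame in frames[20:]]
```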
[Figure: the effect of the background subtraction in the case of moonlight]
Image transformation to the orthographic projection
As described in the paper mentioned above, meteor_spectroscopy_wgn43-4_2015.pdf, the recorded spectra have a nonlinear dispersion and are curved differently in different image areas. These spectra are linearized and made parallel by a transformation to an orthographic projection. This transformation can also correct lens distortions. In the same processing step the spectra can be rotated, so that the resulting linear spectra are parallel to the image x-axis. The transformation is described by an axially symmetric radial transformation of the form
r = r' * (1 + a3*r'^2 + a5*r'^4) (plus higher order terms in the future), where r and r' are the polar coordinates in the original and transformed images. The equation describes the inverse transformation, since for every image point in the resulting image the original coordinates have to be calculated. The polar coordinates are measured from the position of the optical axis (x0, y0), and the polar angle is corrected by the spectrum rotation. The transformation is done in three steps: conversion to polar coordinates, radial and angular transformation, and conversion back to Cartesian coordinates. For the interested reader, the essential steps are outlined in the sketch below:
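This is not the original code of the script, but a sketch of how the three steps could be implemented with NumPy and SciPy; the function and parameter names, the interpolation order and the sign of the rotation are my assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def orthographic_transform(image, x0, y0, rot, a3, a5):
    """Resample an image so that the curved spectra become straight and parallel.

    For every pixel of the output image the corresponding position in the
    original image is computed from the inverse radial transformation
    r = r' * (1 + a3*r'**2 + a5*r'**4), combined with a rotation by 'rot'.
    """
    ny, nx = image.shape
    yi, xi = np.indices((ny, nx), dtype=float)
    # step 1: polar coordinates of the output pixels, relative to the optical axis
    dx, dy = xi - x0, yi - y0
    r_prime = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - rot                      # spectrum rotation
    # step 2: inverse radial transformation
    r = r_prime * (1.0 + a3 * r_prime**2 + a5 * r_prime**4)
    # step 3: back to Cartesian coordinates of the original image and resampling
    x_src = x0 + r * np.cos(phi)
    y_src = y0 + r * np.sin(phi)
    return map_coordinates(image, [y_src, x_src], order=1, mode="constant")
```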
The parameters x0, y0, a3, a5 and the rotation have to be determined beforehand in a one-time calibration for each camera – lens – grating configuration. In addition, in the case of non-square pixels, a stretching in the y-direction by a factor yscale has to be applied before the transformation to orthographic coordinates. When fields (half frames) were extracted from the interlaced video, the images have to be scaled by an additional factor of 2 in the y-direction, in order to compensate for the missing lines.
The radial transformation conserves the dispersion or image scale in the center of the image (at x0, y0). The dispersion disp0 can be used to convert the spectra from pixels to wavelength (more about this below).
Image registration
In stellar spectroscopy, the star image can be kept in a fixed place, usually defined by a narrow slit, which allows a constant calibration of the spectrum. With meteor spectra this is hardly possible: the meteor appears without warning and moves fast enough to make capture and tracking impossible, at least for the amateur. The spectra may therefore be recorded anywhere in the image plane, in a different position from image to image. For the analysis they are shifted in such a way that they overlap. This works only because all spectra are parallel and have the same linear dispersion after the orthographic transformation; aligning the zero order therefore automatically aligns the whole spectrum. Summing these aligned (registered) spectra improves the signal-to-noise ratio. The alignment is done by measuring the position of a prominent line (the zero order if visible, or another prominent line if the zero order is outside the image area) in each image. This position is determined by fitting a Gaussian peak shape to the line and using its centre as the line position. The line position in the first spectrum is used as the reference to which all following spectra are shifted.
In the next step a suitable number of images are added, which results in a 2-d spectrum image that can be analysed further in the following steps.
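A sketch of this registration step is given below. It is an illustration with assumed names (the actual pipeline code may differ): the line position is found by a Gaussian fit and each image is shifted in the dispersion direction before summing:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import shift

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

def line_position(profile):
    """Fit a Gaussian to a 1-d intensity profile and return the fitted centre."""
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 3.0, profile.min()]
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return popt[1]

def register_and_sum(images, row_min, row_max):
    """Align a list of transformed images on a prominent line (e.g. the zero order) and sum them."""
    profiles = [img[row_min:row_max].sum(axis=0) for img in images]
    ref = line_position(profiles[0])              # first spectrum defines the reference position
    return sum(shift(img, (0.0, ref - line_position(p)), order=1)
               for img, p in zip(images, profiles))
```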
Correction of tilt and slant
The deviation of the meteor spectrum from the horizontal axis is called tilt. For the extraction of the spectrum from the 2-dimensional image, a number of rows corresponding to the width of the spectrum are added. This works correctly only if the spectrum is aligned parallel to the rows. During calibration this angle was determined and the images were rotated so that the spectra should be parallel to the rows. If the grating has been rotated since the calibration, the spectra will no longer be parallel to the rows. This can be corrected by applying a “rotation” to the summed spectrum. Mathematically a shear is applied: the columns are shifted vertically in proportion to the column number:
dy = tilt * (x – x0)
x0 is selected at the center of the image.
The same effect would be obtained by changing the rotation angle in the configuration file, increasing rot by 0.04 (the angle is measured in radians).
Notice that the spectral lines in the summed image may not be vertical. This is caused by the movement of the meteor. The deviation from the vertical is called slant and is defined as the slope
slant = dx/dy
For the correction of the slant, the rows are shifted horizontally in proportion to the row number:
dx = slant * (y – y0)
Here y0 is chosen at the center of the rows selected for adding the spectrum, so the spectrum is shifted only minimally by the slant operation.
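Both corrections are simple shear operations. A possible implementation is sketched below (an illustration only; the sign conventions and the linear interpolation are my assumptions):

```python
import numpy as np
from scipy.ndimage import shift

def correct_tilt(image, tilt, x0):
    """Shift each column vertically by dy = tilt * (x - x0) to make the spectrum horizontal."""
    out = np.empty_like(image, dtype=float)
    for x in range(image.shape[1]):
        out[:, x] = shift(image[:, x].astype(float), tilt * (x - x0), order=1)
    return out

def correct_slant(image, slant, y0):
    """Shift each row horizontally by dx = slant * (y - y0) to make the spectral lines vertical."""
    out = np.empty_like(image, dtype=float)
    for y in range(image.shape[0]):
        out[y] = shift(image[y].astype(float), slant * (y - y0), order=1)
    return out
```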
It is advisable to perform the operations in this order and adjust the row selection before the slant correction.
With these corrections, the processing may continue with the spectrum extraction.
Spectrum extraction
The corrected sum image can be converted to a 1-d spectrum by adding a suitable number of rows of the image which contain the spectrum. The number of rows is determined by the width of the spectrum. Ideally one wants to add all the rows which contain signal, but avoid the rows which contain only background noise. The choice is somewhat arbitrary and depends on the skill of the operator, but it is not too critical. Methods exist for the optimal extraction of the spectrum, based on measuring the width of the spectrum and the signal-to-noise ratio relative to the background; this has not been implemented yet.
With the known dispersion disp0 [nm/pixel], wavelength intervals can be calculated by measuring the separations in pixels of prominent features (zero order, Na line, Mg line etc.). This helps in a coarse assignment of spectral features.
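A minimal sketch of the extraction and of the coarse pixel-to-wavelength estimate; the row range, the pixel positions and the value of disp0 are made-up example values:

```python
import numpy as np

# Sum the rows containing the spectrum (hypothetical row range) into a 1-d spectrum
spectrum = registered_image[230:245].sum(axis=0)

# Coarse wavelength estimate from the pixel separation of two prominent features
disp0 = 0.55                                    # nm/pixel, from the calibration (example value)
pix_zero_order, pix_na_line = 102.0, 1172.0     # measured pixel positions (example values)
print((pix_na_line - pix_zero_order) * disp0)   # separation in nm, ~589 nm for the Na line
```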
Spectrum calibration
In principle, the meteor spectrum can be calibrated in wavelength by setting the zero point at the zero order and converting pixels to nm by multiplying the distance in pixels of a spectral line from the zero order by the factor disp0, the (inverse) dispersion. In practice, changes in the alignment of the grating or changes of focus and focal length (for zoom lenses) may occur, and it is advisable to calibrate the spectrum with the help of known spectral lines. Although in theory a linear function should fit the spectrum, a second order polynomial may give a better result, as it corrects for small calibration errors. Before fitting a higher order polynomial it is advisable to check whether a lower order polynomial works as well. Checking with more lines than required for the fit is always advisable, in order to detect measurement errors or wrongly assigned lines.
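With NumPy such a calibration fit takes only a few lines; the pixel positions and wavelengths below are made-up example values (the zero order, a Mg line near 518 nm and the Na line near 589 nm):

```python
import numpy as np

# Hypothetical measured line positions (pixels) and their identified wavelengths (nm)
pixels = np.array([102.0, 1044.0, 1172.0])       # zero order, Mg line, Na line
wavelengths = np.array([0.0, 518.4, 589.3])

# Start with a linear fit; use deg=2 only if the residuals of the linear fit are too large
coeffs = np.polyfit(pixels, wavelengths, deg=1)
calibration = np.poly1d(coeffs)

wavelength_axis = calibration(np.arange(spectrum.size))   # pixel axis -> nm
```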
Plotting the spectra
Once calibrated, the spectrum can be plotted and saved as a *cal.dat file. This is a common data format: a text file with two columns for wavelength and intensity, with each data point in a separate row. Some programs which read spectra expect a constant wavelength interval; therefore the spectra are resampled with a constant wavelength interval, about half as large as the average original sampling interval, using quadratic interpolation. This helps to prevent loss of information in the resampling.
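A sketch of the resampling and of writing the *cal.dat file; the variable names continue from the sketches above and the exact output format is my assumption:

```python
import numpy as np
from scipy.interpolate import interp1d

# Resample to a constant wavelength step of about half the average original interval
step = 0.5 * np.mean(np.diff(wavelength_axis))
new_wavelengths = np.arange(wavelength_axis[0], wavelength_axis[-1], step)
new_intensity = interp1d(wavelength_axis, spectrum, kind="quadratic")(new_wavelengths)

# Two columns (wavelength, intensity), one data point per row
np.savetxt("meteor_cal.dat",
           np.column_stack([new_wavelengths, new_intensity]),
           fmt="%10.2f %12.4f")
```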
For the calibration in this example, a linear fit through the zero order and the two spectral lines of Mg and Na was used.
Further information
This finishes the general description. If you are interested in the Python script and the complete manual with detailed instructions, send me a message: Contact
Check also this post: https://meteorspectroscopy.org/2019/08/04/perseid-meteor-spectrum-at-ohp/