Summed and Composite Likelihood

Jeremy S. Perkins (NASA/GSFC)

Summed Analysis

Summed likelihood is a way of performing a joint likelihood fit on two (or more) sub-selections of your data, with the model parameters shared across all of the sub-selections. This could be useful if you want to do any of the following:

  • Separate front and back analysis
  • Multiple time cuts
  • Different cuts in different energy ranges

There are many other possible applications; you're limited only by your imagination (and your scientific judgment).

Example with 3C 279

We're going to use the 3C 279 data set that we know and love, perform a separate front/back analysis, and then perform a summed likelihood fit at the end. We're going to do a binned analysis this time since we haven't really done one yet.

I've provided a lot of the ancillary files in a tarball linked to the agenda. Let's take a look at the directory:

In [117]:
!ls -1
3C279_back_BinnedExpMap.fits
3C279_back_CCUBE.fits
3C279_back_filtered.fits
3C279_back_filtered_gti.fits
3C279_back_srcMap.fits
3C279_front_BinnedExpMap.fits
3C279_front_CCUBE.fits
3C279_front_filtered.fits
3C279_front_filtered_gti.fits
3C279_front_ltcube.fits
3C279_front_srcMap.fits
3C279_input_model_back.xml
3C279_input_model_front.xml
gll_iem_v05_rev1.fit
iso_source_back_v05_rev1.txt
iso_source_front_v05_rev1.txt
iso_source_v05.txt
L1405221252264C652E7F67_PH00.fits
L1405221252264C652E7F67_SC00.fits
Summed_and_Composite.ipynb
SwiftJ1644_expMap.fits
SwiftJ1644_filtered_gti.fits
SwiftJ1644_likeMinuit.xml
SwiftJ1644_ltcube.fits
SwiftJ1644_SC.fits

Note that you need to have the galactic diffuse model and the raw data files in that directory (gll_iem_v05_rev1.fit, L1405221252264C652E7F67_PH00.fits, and L1405221252264C652E7F67_SC00.fits). I didn't put them in the tarball since they are a bit big and you already have them in your VM (or you should). You'll also need to copy or link the front and back isotropic models into your working directory (iso_source_front_v05_rev1.txt and iso_source_back_v05_rev1.txt). These two files are in $FERMI_DIR/refdata/fermi/galdiffuse if you need to find them.
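If you'd rather do that last step from python, here's a minimal sketch (not part of the original thread; it assumes the FERMI_DIR environment variable is set by your Science Tools setup):

import os, shutil

# Copy the front and back isotropic templates from the Science Tools
# installation into the working directory (skipped if they're already here).
refdata = os.path.join(os.environ['FERMI_DIR'], 'refdata', 'fermi', 'galdiffuse')
for iso in ['iso_source_front_v05_rev1.txt', 'iso_source_back_v05_rev1.txt']:
    if not os.path.exists(iso):
        shutil.copy(os.path.join(refdata, iso), iso)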

Now we need to import some functions so that we can work on these data. Note that you don't have to run all of the cells below; if a step says 'Don't run this', don't run it.

In [118]:
from gt_apps import filter, maketime, expMap, expCube, evtbin, srcMaps

Selecting Front and Back Events

Don't run these commands (unless you want to rerun everything).

We first need to run gtselect (called 'filter' in python) twice: once to select only the front events (convtype = 0) and once to select only the back events (convtype = 1). You do this with the 'convtype' parameter, which is hidden on the command line.

In [25]:
filter['rad'] = 15
filter['evclass'] = 2
filter['infile'] = "L1405221252264C652E7F67_PH00.fits"
filter['outfile'] = "3C279_front_filtered.fits"
filter['ra'] = 194.046527
filter['dec'] = -5.789312
filter['tmin'] = 239557417
filter['tmax'] = 255398400
filter['emin'] = 100
filter['emax'] = 100000
filter['zmax'] = 100
filter['convtype'] = 0
In [26]:
filter.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtselect infile=L1405221252264C652E7F67_PH00.fits outfile=3C279_front_filtered.fits ra=194.046527 dec=-5.789312 rad=15.0 tmin=239557417.0 tmax=255398400.0 emin=100.0 emax=100000.0 zmax=100.0 evclsmin="INDEF" evclsmax="INDEF" evclass=2 convtype=0 phasemin=0.0 phasemax=1.0 evtable="EVENTS" chatter=2 clobber=yes debug=no gui=no mode="ql"
Done.
real 1.28
user 0.37
sys 0.56
In [27]:
filter['rad'] = 15
filter['evclass'] = 2
filter['infile'] = "L1405221252264C652E7F67_PH00.fits"
filter['outfile'] = "3C279_back_filtered.fits"
filter['ra'] = 194.046527
filter['dec'] = -5.789312
filter['tmin'] = 239557417
filter['tmax'] = 255398400
filter['emin'] = 100
filter['emax'] = 100000
filter['zmax'] = 100
filter['convtype'] = 1
In [28]:
filter.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtselect infile=L1405221252264C652E7F67_PH00.fits outfile=3C279_back_filtered.fits ra=194.046527 dec=-5.789312 rad=15.0 tmin=239557417.0 tmax=255398400.0 emin=100.0 emax=100000.0 zmax=100.0 evclsmin="INDEF" evclsmax="INDEF" evclass=2 convtype=1 phasemin=0.0 phasemax=1.0 evtable="EVENTS" chatter=2 clobber=yes debug=no gui=no mode="ql"
Done.
real 1.28
user 0.42
sys 0.52

Compute the GTIs

Don't run these commands (unless you want to rerun everything).

Now, we need to find the GTIs for each data set (front and back).

In [29]:
maketime['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
maketime['filter'] = '(DATA_QUAL==1)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'yes'
maketime['evfile'] = '3C279_front_filtered.fits'
maketime['outfile'] = '3C279_front_filtered_gti.fits'
In [30]:
maketime.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtmktime scfile=L1405221252264C652E7F67_SC00.fits sctable="SC_DATA" filter="(DATA_QUAL==1)&&(LAT_CONFIG==1)" roicut=yes evfile=3C279_front_filtered.fits evtable="EVENTS" outfile="3C279_front_filtered_gti.fits" apply_filter=yes overwrite=no header_obstimes=yes tstart=0.0 tstop=0.0 gtifile="default" chatter=2 clobber=yes debug=no gui=no mode="ql"
real 2.85
user 0.82
sys 1.30
In [31]:
maketime['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
maketime['filter'] = '(DATA_QUAL==1)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'yes'
maketime['evfile'] = '3C279_back_filtered.fits'
maketime['outfile'] = '3C279_back_filtered_gti.fits'
In [32]:
maketime.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtmktime scfile=L1405221252264C652E7F67_SC00.fits sctable="SC_DATA" filter="(DATA_QUAL==1)&&(LAT_CONFIG==1)" roicut=yes evfile=3C279_back_filtered.fits evtable="EVENTS" outfile="3C279_back_filtered_gti.fits" apply_filter=yes overwrite=no header_obstimes=yes tstart=0.0 tstop=0.0 gtifile="default" chatter=2 clobber=yes debug=no gui=no mode="ql"
real 2.76
user 0.83
sys 1.21

Compute the Livetime Cube

You only need to do this once since we made the exact same time cuts and used the same GTI filter on both data sets. (The output is named 3C279_front_ltcube.fits, but the same livetime cube is used for both the front and back selections below.)

Don't run these commands (unless you want to rerun everything).

In [33]:
expCube['evfile'] = '3C279_front_filtered_gti.fits'
expCube['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
expCube['outfile'] = '3C279_front_ltcube.fits'
expCube['dcostheta'] = 0.025
expCube['binsz'] = 1.0
In [34]:
expCube.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtltcube evfile="3C279_front_filtered_gti.fits" evtable="EVENTS" scfile=L1405221252264C652E7F67_SC00.fits sctable="SC_DATA" outfile=3C279_front_ltcube.fits dcostheta=0.025 binsz=1.0 phibins=0 tmin=0.0 tmax=0.0 file_version="1" zmin=0.0 zmax=180.0 chatter=2 clobber=yes debug=no gui=no mode="ql"
Working on file L1405221252264C652E7F67_SC00.fits
.....................!
real 505.33
user 502.57
sys 0.86

Compute the counts cube

Don't run these commands (unless you want to rerun everything).

The counts cube is the counts from our data file binned up in space and energy. All of the steps above use a circular ROI (or a cone, really). Once you switch to binned, you start doing things in squares. Your counts cube can only be as big as the biggest square that can fit in your circular ROI.

CCUBE Geometry

This basically means that the width of your square, $s$, should be no bigger than $r \times \sqrt{2}$. We are using an ROI with a radius $r$ of 15 degrees, so we can only use a square with sides of at most:

In [121]:
import numpy as np
15.*np.sqrt(2)
Out[121]:
21.213203435596427

So, we're going to make a square with 100 pixels in each dimension and set the degrees/pixel to 0.2 so that we have a 20x20 degree cube. We are also making 30 energy bins with logarithmic spacing. You have to bin up both the front and back data sets.
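As a quick sanity check on that geometry (this cell isn't part of the original thread), you can verify that the 20 degree cube fits inside the largest square allowed by the 15 degree ROI:

import numpy as np

# The counts cube must fit inside the largest square inscribed in the ROI.
roi_radius = 15.0              # degrees
nxpix, binsz = 100, 0.2        # CCUBE geometry used below
assert nxpix * binsz <= roi_radius * np.sqrt(2)   # 20.0 <= 21.2, so we're fine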

In [35]:
evtbin['evfile'] = '3C279_front_filtered_gti.fits'
evtbin['outfile'] = '3C279_front_CCUBE.fits'
evtbin['algorithm'] = 'CCUBE'
evtbin['nxpix'] = 100
evtbin['nypix'] = 100
evtbin['binsz'] = 0.2
evtbin['coordsys'] = 'CEL'
evtbin['xref'] = 194.046527
evtbin['yref'] =  -5.789312
evtbin['axisrot'] = 0
evtbin['proj'] = 'AIT'
evtbin['ebinalg'] = 'LOG'
evtbin['emin'] = 100
evtbin['emax'] = 100000
evtbin['enumbins'] = 30
evtbin['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
In [36]:
evtbin.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtbin evfile=3C279_front_filtered_gti.fits scfile=L1405221252264C652E7F67_SC00.fits outfile=3C279_front_CCUBE.fits algorithm="CCUBE" ebinalg="LOG" emin=100.0 emax=100000.0 enumbins=30 ebinfile=NONE tbinalg="LIN" tbinfile=NONE nxpix=100 nypix=100 binsz=0.2 coordsys="CEL" xref=194.046527 yref=-5.789312 axisrot=0.0 rafield="RA" decfield="DEC" proj="AIT" hpx_ordering_scheme="RING" hpx_order=3 hpx_ebin=yes evtable="EVENTS" sctable="SC_DATA" efield="ENERGY" tfield="TIME" chatter=2 clobber=yes debug=no gui=no mode="ql"
This is gtbin version ScienceTools-v9r33p0-fssc-20140317
real 2.01
user 0.74
sys 0.68
In [37]:
evtbin['evfile'] = '3C279_back_filtered_gti.fits'
evtbin['outfile'] = '3C279_back_CCUBE.fits'
evtbin['algorithm'] = 'CCUBE'
evtbin['nxpix'] = 100
evtbin['nypix'] = 100
evtbin['binsz'] = 0.2
evtbin['coordsys'] = 'CEL'
evtbin['xref'] = 194.046527
evtbin['yref'] =  -5.789312
evtbin['axisrot'] = 0
evtbin['proj'] = 'AIT'
evtbin['ebinalg'] = 'LOG'
evtbin['emin'] = 100
evtbin['emax'] = 100000
evtbin['enumbins'] = 30
evtbin['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
In [38]:
evtbin.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtbin evfile=3C279_back_filtered_gti.fits scfile=L1405221252264C652E7F67_SC00.fits outfile=3C279_back_CCUBE.fits algorithm="CCUBE" ebinalg="LOG" emin=100.0 emax=100000.0 enumbins=30 ebinfile=NONE tbinalg="LIN" tbinfile=NONE nxpix=100 nypix=100 binsz=0.2 coordsys="CEL" xref=194.046527 yref=-5.789312 axisrot=0.0 rafield="RA" decfield="DEC" proj="AIT" hpx_ordering_scheme="RING" hpx_order=3 hpx_ebin=yes evtable="EVENTS" sctable="SC_DATA" efield="ENERGY" tfield="TIME" chatter=2 clobber=yes debug=no gui=no mode="ql"
This is gtbin version ScienceTools-v9r33p0-fssc-20140317
real 1.87
user 0.75
sys 0.58

Generate a Binned Exposure Map

The binned exposure map is simply the exposure binned up in energy and sky direction.

Don't run these commands (unless you want to rerun everything).

You first need to create a python wrapper for 'gtexpcube2', which isn't included in gt_apps by default. It's easy to do (you can wrap any of the command-line tools in python this way). Then you can check out its parameters with the pars() method.

In [52]:
from GtApp import GtApp
expCube2 = GtApp('gtexpcube2','Likelihood')
In [53]:
expCube2.pars()
Out[53]:
' infile= cmap= outfile= irfs="CALDB" nxpix=360 nypix=180 binsz=1.0 coordsys="GAL" xref=0.0 yref=0.0 axisrot=0.0 proj="CAR" ebinalg="LOG" emin=100.0 emax=300000.0 enumbins=10 ebinfile="NONE" bincalc="EDGE" ignorephi=no thmax=180.0 thmin=0.0 table="EXPOSURE" chatter=2 clobber=yes debug=no mode="ql"'
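As an aside, the same trick works for any of the other command-line tools. Purely as an illustration (we don't need it here), you could wrap gtltsum, the tool that adds livetime cubes together, in exactly the same way:

from GtApp import GtApp

# Illustrative only: wrap another command-line tool. Calling ltsum.pars()
# would then list its parameters, just like expCube2.pars() above.
ltsum = GtApp('gtltsum')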

There's a tricky bit here. You need to make sure to compute the exposure well beyond the edges of your counts cube. This is because you need to know how the LAT responds to photons from objects beyond your ROI. A good rule of thumb is to take the PSF at the lowest energy and add 10 degrees to that. At 100 MeV the PSF is roughly 5 degrees, so we need to go out to a radius of 15 + 5 + 10 = 30 degrees. Then, the largest square that can fit in that circle is

In [122]:
30*np.sqrt(2)
Out[122]:
42.426406871192853

So, at minimum, we should make binned exposure maps that are about 43 degrees on a side. To be safe, we'll make a map that's 300 x 300 pixels with the same pixel size we used for the counts cube, so we end up with a cube that is 60 x 60 degrees. We also need to use the same energy binning we used in the counts cube.
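Again, a quick check (not in the original thread) that the chosen map comfortably covers the minimum size we just worked out:

import numpy as np

# Required: a square at least 30*sqrt(2) ~ 43 degrees on a side.
# Chosen:   300 pixels x 0.2 degrees/pixel = 60 degrees on a side.
assert 300 * 0.2 >= 30.0 * np.sqrt(2)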

Also note that we are specifying separate front and back IRFs at this point.

In [123]:
expCube2['infile'] = '3C279_front_ltcube.fits'
expCube2['cmap'] = 'none'
expCube2['outfile'] = '3C279_front_BinnedExpMap.fits'
expCube2['irfs'] = 'P7REP_SOURCE_V15::FRONT'
expCube2['nxpix'] = 300
expCube2['nypix'] = 300
expCube2['binsz'] = 0.2
expCube2['coordsys'] = 'CEL'
expCube2['xref'] = 194.046527
expCube2['yref'] = -5.789312
expCube2['axisrot'] = 0.0
expCube2['proj'] = 'AIT'
expCube2['ebinalg'] = 'LOG'
expCube2['emin'] = 100
expCube2['emax'] = 100000
expCube2['enumbins'] = 30
In [124]:
expCube2.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtexpcube2 infile=3C279_front_ltcube.fits cmap=none outfile=3C279_front_BinnedExpMap.fits irfs="P7REP_SOURCE_V15::FRONT" nxpix=300 nypix=300 binsz=0.2 coordsys="CEL" xref=194.046527 yref=-5.789312 axisrot=0.0 proj="AIT" ebinalg="LOG" emin=100.0 emax=100000.0 enumbins=30 ebinfile="NONE" bincalc="EDGE" ignorephi=no thmax=180.0 thmin=0.0 table="EXPOSURE" chatter=2 clobber=yes debug=no mode="ql"
Computing binned exposure map....................!
real 13.31
user 12.99
sys 0.30
In [125]:
expCube2['infile'] = '3C279_front_ltcube.fits'
expCube2['cmap'] = 'none'
expCube2['outfile'] = '3C279_back_BinnedExpMap.fits'
expCube2['irfs'] = 'P7REP_SOURCE_V15::BACK'
expCube2['nxpix'] = 300
expCube2['nypix'] = 300
expCube2['binsz'] = 0.2
expCube2['coordsys'] = 'CEL'
expCube2['xref'] = 194.046527
expCube2['yref'] = -5.789312
expCube2['axisrot'] = 0.0
expCube2['proj'] = 'AIT'
expCube2['ebinalg'] = 'LOG'
expCube2['emin'] = 100
expCube2['emax'] = 100000
expCube2['enumbins'] = 30
In [126]:
expCube2.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtexpcube2 infile=3C279_front_ltcube.fits cmap=none outfile=3C279_back_BinnedExpMap.fits irfs="P7REP_SOURCE_V15::BACK" nxpix=300 nypix=300 binsz=0.2 coordsys="CEL" xref=194.046527 yref=-5.789312 axisrot=0.0 proj="AIT" ebinalg="LOG" emin=100.0 emax=100000.0 enumbins=30 ebinfile="NONE" bincalc="EDGE" ignorephi=no thmax=180.0 thmin=0.0 table="EXPOSURE" chatter=2 clobber=yes debug=no mode="ql"
Computing binned exposure map....................!
real 12.59
user 12.27
sys 0.31

Compute the sourcemaps

The source maps are the components of your source model convolved with the LAT response (exposure and PSF), binned to match your counts cube.

Don't run these commands (unless you want to rerun everything).

In [127]:
srcMaps['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
srcMaps['expcube'] = '3C279_front_ltcube.fits'
srcMaps['cmap'] = '3C279_front_CCUBE.fits'
srcMaps['srcmdl'] = '3C279_input_model_front.xml'
srcMaps['bexpmap'] = '3C279_front_BinnedExpMap.fits'
srcMaps['outfile'] = '3C279_front_srcMap.fits'
srcMaps['irfs'] = 'P7REP_SOURCE_V15::FRONT'
In [128]:
srcMaps.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtsrcmaps scfile=L1405221252264C652E7F67_SC00.fits sctable="SC_DATA" expcube=3C279_front_ltcube.fits cmap=3C279_front_CCUBE.fits srcmdl=3C279_input_model_front.xml bexpmap=3C279_front_BinnedExpMap.fits outfile=3C279_front_srcMap.fits irfs="P7REP_SOURCE_V15::FRONT" convol=yes resample=yes rfactor=2 minbinsz=0.1 ptsrc=yes psfcorr=yes emapbnds=yes copyall=no chatter=2 clobber=yes debug=no gui=no mode="ql"
Generating SourceMap for 3C 273....................!
Generating SourceMap for 3C 279....................!
Generating SourceMap for gll_iem_v05_rev1....................!
Generating SourceMap for iso_source_front_v05_rev1....................!
real 157.21
user 156.15
sys 0.98
In [129]:
srcMaps['scfile'] = 'L1405221252264C652E7F67_SC00.fits'
srcMaps['expcube'] = '3C279_front_ltcube.fits'
srcMaps['cmap'] = '3C279_back_CCUBE.fits'
srcMaps['srcmdl'] = '3C279_input_model_back.xml'
srcMaps['bexpmap'] = '3C279_back_BinnedExpMap.fits'
srcMaps['outfile'] = '3C279_back_srcMap.fits'
srcMaps['irfs'] = 'P7REP_SOURCE_V15::BACK'
In [130]:
srcMaps.run()
time -p /home/fermi2014/AstroSoft/ScienceTools/x86_64-unknown-linux-gnu-libc2.12/bin/gtsrcmaps scfile=L1405221252264C652E7F67_SC00.fits sctable="SC_DATA" expcube=3C279_front_ltcube.fits cmap=3C279_back_CCUBE.fits srcmdl=3C279_input_model_back.xml bexpmap=3C279_back_BinnedExpMap.fits outfile=3C279_back_srcMap.fits irfs="P7REP_SOURCE_V15::BACK" convol=yes resample=yes rfactor=2 minbinsz=0.1 ptsrc=yes psfcorr=yes emapbnds=yes copyall=no chatter=2 clobber=yes debug=no gui=no mode="ql"
Generating SourceMap for 3C 273....................!
Generating SourceMap for 3C 279....................!
Generating SourceMap for gll_iem_v05_rev1....................!
Generating SourceMap for iso_source_back_v05_rev1....................!
real 141.03
user 139.98
sys 0.96

Now, Perform the Likelihood Analysis

Start running commands now if you want.

First, import the BinnedAnalysis and SummedLikelihood modules.

In [131]:
from BinnedAnalysis import *
from SummedLikelihood import *

Then, create a likelihood object for both the front and back data sets.

In [132]:
like_f = binnedAnalysis(irfs='P7REP_SOURCE_V15::FRONT', 
                        expcube='3C279_front_ltcube.fits', 
                        srcmdl='3C279_input_model_front.xml',
                        optimizer='NEWMINUIT',
                        cmap='3C279_front_srcMap.fits',
                        bexpmap='3C279_front_BinnedExpMap.fits')

like_b = binnedAnalysis(irfs='P7REP_SOURCE_V15::BACK', 
                        expcube='3C279_front_ltcube.fits', 
                        srcmdl='3C279_input_model_back.xml',
                        optimizer='NEWMINUIT',
                        cmap='3C279_back_srcMap.fits',
                        bexpmap='3C279_back_BinnedExpMap.fits')

Then, create the summed likelihood object and add the two separate likelihood objects to it.

In [76]:
summed_like = SummedLikelihood()
summed_like.addComponent(like_f)
summed_like.addComponent(like_b)

Perform the fit and print out the results.

In [134]:
summed_like.fit(0)
Out[134]:
83296.65845449014
In [135]:
summed_like.model
Out[135]:
3C 273
   Spectrum: PowerLaw
0      Prefactor:  1.128e+01  4.666e-01  1.000e-03  1.000e+03 ( 1.000e-09)
1          Index: -2.591e+00  2.913e-02 -5.000e+00 -1.000e+00 ( 1.000e+00)
2          Scale:  1.000e+02  0.000e+00  3.000e+01  2.000e+03 ( 1.000e+00) fixed

3C 279
   Spectrum: PowerLaw
3      Prefactor:  4.313e+00  2.889e-01  1.000e-03  1.000e+03 ( 1.000e-09)
4          Index: -2.282e+00  3.555e-02 -5.000e+00  0.000e+00 ( 1.000e+00)
5          Scale:  1.000e+02  0.000e+00  3.000e+01  2.000e+03 ( 1.000e+00) fixed

gll_iem_v05_rev1
   Spectrum: ConstantValue
6          Value:  1.422e+00  3.644e-02  0.000e+00  1.000e+01 ( 1.000e+00)

iso_source_front_v05_rev1
   Spectrum: FileFunction
7     Normalization:  1.217e+00  4.046e-02  1.000e-05  1.000e+03 ( 1.000e+00)
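If you want numbers rather than a printout, you can pull results out of the fit with the usual pyLikelihood accessors. A minimal sketch (double check these calls against your Science Tools version):

# Test statistic of 3C 279 from the summed fit.
ts_3c279 = summed_like.Ts('3C 279')

# Fitted photon index and its error; the spectral parameters are shared
# between the front and back components, so either one can be queried.
index_par = like_f.model['3C 279'].funcs['Spectrum'].getParam('Index')
index_par.value(), index_par.error()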

Composite Likelihood

In a composite likelihood you can use completely separate ROIs and tie parameters together between the different ROIs. So, for example, you can tie the spectral shapes of a stacked sample of GRBs (or fix their indices and just tie the fluxes) while independently fitting the background in each region. This is different from the summed analysis, where every component must share the same model parameters. Another example might be fitting a dark matter spectrum that should be the same across many different sources (like a sample of dwarf galaxies): the DM parameters can be tied together while the backgrounds in each region are fit independently.
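Before the concrete example below, here is a schematic of that stacking pattern (all filenames and source names here are hypothetical; this just sketches the bookkeeping):

from UnbinnedAnalysis import unbinnedAnalysis
from Composite2 import Composite2

# One likelihood object per ROI; tie the parameter of interest across all of
# them and let each region's background parameters float independently.
rois = ['dwarf1', 'dwarf2', 'dwarf3']        # hypothetical analysis prefixes
stacked = Composite2(optimizer='MINUIT')
tied_indices = []
for roi in rois:
    like = unbinnedAnalysis(evfile=roi + '_filtered_gti.fits',
                            scfile=roi + '_SC.fits',
                            expmap=roi + '_expMap.fits',
                            expcube=roi + '_ltcube.fits',
                            irfs='P7REP_SOURCE_V15',
                            srcmdl=roi + '_model.xml',
                            optimizer='MINUIT')
    stacked.addComponent(like)
    tied_indices.append((like, roi, 'Index'))  # roi doubles as the source name here

stacked.tieParameters(tied_indices)
stacked.fit()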

Example

We're going to do something physically meaningless and perform a composite fit of the SwiftJ1644 data set and the 3C 279 front data set, just to show you how it's done.

First, import the relevant libraries.

In [136]:
from UnbinnedAnalysis import *
from Composite2 import Composite2

Then, create the unbinned analysis object from the SwiftJ1644 data set (note that we are combining a binned and an unbinned likelihood object, which is perfectly fine).

In [137]:
like_u = unbinnedAnalysis(evfile='SwiftJ1644_filtered_gti.fits',
                         scfile='SwiftJ1644_SC.fits',
                         expmap='SwiftJ1644_expMap.fits',
                         expcube='SwiftJ1644_ltcube.fits',
                         irfs='P7REP_SOURCE_V15',
                         srcmdl='SwiftJ1644_likeMinuit.xml',
                         optimizer='NEWMINUIT')

Now, take a look at the model before we do the fit. The index of the Swift source is pegged to its limit at -5 (also, I tweaked the model a bit so that the index is negative like it is in the 3C 279 model).

In [138]:
like_u.model['SwiftJ1644']
Out[138]:
SwiftJ1644
   Spectrum: PowerLaw2
0       Integral:  1.004e-01  3.712e-01  1.000e-04  1.000e+04 ( 1.000e-07)
1          Index: -4.999e+00  3.067e-01 -5.000e+00  0.000e+00 ( 1.000e+00)
2     LowerLimit:  1.000e+02  0.000e+00  2.000e+01  5.000e+05 ( 1.000e+00) fixed
3     UpperLimit:  3.000e+05  0.000e+00  2.000e+01  5.000e+05 ( 1.000e+00) fixed

Next, create the Composite2 object and add the two likelihood objects to it.

In [110]:
c_like = Composite2(optimizer='MINUIT')
c_like.addComponent(like_f)
c_like.addComponent(like_u)

Tie together the parameters that you want tied.

In [111]:
tiedParGroup1 = ((like_u, 'SwiftJ1644', 'Index'),
                (like_f, '3C 279', 'Index'))
In [112]:
c_like.tieParameters(tiedParGroup1)

Now, actually do the fit.

In [113]:
c_like.fit()
Out[113]:
51336.454791607364

And take a look at the results. Note that the bright 3C 279 is dominating the fit.

In [114]:
like_u.model['SwiftJ1644']
Out[114]:
SwiftJ1644
   Spectrum: PowerLaw2
0       Integral:  1.041e-04  4.935e-04  1.000e-04  1.000e+04 ( 1.000e-07)
1          Index: -2.265e+00  1.420e-02 -5.000e+00  0.000e+00 ( 1.000e+00)
2     LowerLimit:  1.000e+02  0.000e+00  2.000e+01  5.000e+05 ( 1.000e+00) fixed
3     UpperLimit:  3.000e+05  0.000e+00  2.000e+01  5.000e+05 ( 1.000e+00) fixed
In [115]:
like_f.model['3C 279']
Out[115]:
3C 279
   Spectrum: PowerLaw
3      Prefactor:  4.275e+00  1.109e-01  1.000e-03  1.000e+03 ( 1.000e-09)
4          Index: -2.265e+00  1.420e-02 -5.000e+00  0.000e+00 ( 1.000e+00)
5          Scale:  1.000e+02  0.000e+00  3.000e+01  2.000e+03 ( 1.000e+00) fixed