---+ %NOTOC% xAOD Analysis in !EventLoop
---+ Recent updates
29.09.2016
* creating a new ALRB_TutorialData (cern-oct2016)
12.07.2016
* updating to use 2.4.14
08.06.2016
* switching from scanDQ2() to scanRucio() for running on the Grid (although scanDQ2 now uses rucio behind the scenes), more information here
07.06.2016
* new $ALRB_TutorialData (cern-june2016)
* updated decoration name for jets from "signal" to "mySignal"
* updated input DxAOD to use a 20.7 SUSY1 derivation
* updated Analysis Release to 2.4.9
05.04.2016
* fixed small typo, thanks Katharina Ecker!
04.03.2016
* new $ALRB_TutorialData (chicago-mar2016)
* using a new DxAOD as input (DAOD_SUSY1)
29.02.2016
* updating to Analysis Base, 2.3.45
* migrating the return code check to the new and more central ASG-provided tool (ANA_CHECK)
28.01.2016
* updating to Analysis Base 2.3.41
* updating the input xAOD to use a top derivation
* changing the jet input collection from AntiKt4LCTopoJets to AntiKt4EMTopoJets (to use a collection actually in the top derivation)
* updating the GRL from 2012 to 2015
* creating a new ALRB_TutorialData (cern-feb2016)
30.10.2015
* updating to Analysis Base 2.3.32
19.10.2015
* moving to the new ATLASLocalRootBase commands, for example =lsetup 'rcsetup Base,xxx'=
Older updates to this tutorial can be found below
---+ Introduction
This hands-on tutorial will lead you through some typical analysis tools used for doing an xAOD analysis in the EventLoop framework, with the RootCore build system. Please also note that you can do ROOT-based xAOD analysis in the Athena framework and the CMT build system as documented here.
---+ 1. Setup the Analysis Release
We will work on lxplus, so let's log in to it:
<verbatim style="background: #e0ebf6;">
ssh -Y lxplus.cern.ch
</verbatim>
Note: From a Linux laptop you should instead use -X in the ssh command.
Let's setup the ATLAS software environment:
<verbatim style="background: #e0ebf6;">
setupATLAS
</verbatim>
This alias is already defined for you when you log in to lxplus. After you type it you will see a list of commands you can type to setup various ATLAS computing tools.
<blockquote>
If you are not working on lxplus, you will need to define the following: <br/>
<verbatim>
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
setupATLAS
</verbatim>
</blockquote>
We will use the ATLAS Analysis Release (new for Run 2) to set up our environment and other RootCore packages with the correct package tags (such as those recommended by the CP groups). You can use the Analysis Release anywhere you have access to cvmfs (lxplus, local clusters and machines, and the Grid).
<blockquote>
It is also possible to checkout all packages in a given Analysis Release and manually compile them on your working machine. This alternative setup is described below. But for this tutorial we recommend you just use the release from cvmfs on lxplus.
</blockquote>
Let's create a new directory for this analysis, from which we will work. When we set up the Analysis Release (or just RootCore), RootCore will create a new directory called =RootCoreBin= that will store information about your setup and local packages. Let's create our analysis working directory and change directory there:
<verbatim style="background: #e0ebf6;">
mkdir ROOTAnalysisTutorial
cd ROOTAnalysisTutorial
</verbatim>
Now let's set up the Analysis Release. We will use a 'flavor' called AnalysisBase, which is the general-purpose one maintained by ASG (other versions and series are documented on the AnalysisRelease page):
<verbatim style="background: #e0ebf6;">
lsetup 'rcsetup Base,2.4.14'
rc find_packages
rc compile
</verbatim>
---++ Define ALRB_TutorialData
---++ Checking out extra packages
The command
<verbatim style="background: #e0ebf6;">
rc version
</verbatim>
shows you the package tags in your release, including the full path.
If you want a different tag in your release, copy the path and change the package tag. Then use
<verbatim style="background: #e0ebf6;">
rc checkout_pkg <new path>
</verbatim>
If you checked out any package you should run:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
which will find the local version of this package and compile it.
---++ What to do every time you log in
The next time you start a new session, you'll have to set up the ATLAS software environment again:
<verbatim style="background: #e0ebf6;">
setupATLAS
</verbatim>
Then navigate to your working directory (the one containing the RootCoreBin/ directory) and setup the same analysis release:
<verbatim style="background: #e0ebf6;">
lsetup rcsetup
</verbatim>
The script will automatically setup the same release as you previously defined in that directory (by looking in RootCoreBin/).
And to follow this tutorial define the environment variable ALRB_TutorialData as explained above.
---++ Alternative setup on local machine (advanced/optional)
See the instructions on: <br/>
https://twiki.cern.ch/twiki/bin/view/AtlasComputing/SoftwareTutorialxAODEDM#On_your_local_machine_optional <br/>
on how to set up AnalysisBase-2.0.2 on your laptop. The change now is that you should add (or update) the packages listed above in the packages.txt file described in the instructions. A similar prescription can be followed for any numbered release.
---++ How to update to a newer Analysis Release (advanced/optional)
It is likely that in the course of your analysis development you will need to update to a newer version of the Analysis Release. This can be very easy to do (though it comes with caveats, especially if you have manually checked out specific package tags and/or made modifications to those packages).
If you want to update to a newer Analysis Release, first navigate to your working directory (in the case of this tutorial ROOTAnalysisTutorial/, where you can see the RootCoreBin/ directory), and unsetup the last Analysis Release:
<verbatim style="background: #e0ebf6;">
lsetup 'rcsetup -u'
</verbatim>
Now you need to stop and think a bit:
* You may have checked out additional packages on top of the previous Analysis Release you were using (not including your analysis package), as we may have done above:
* Do you still need a local setup of those additional packages?
* Did you make any local changes to those packages?
* How will the tags of those packages be different in the newer Analysis Release?
* Another issue to be aware of is that the tools you use in your analysis code may have changed (if the package tags differ between the old and new release).
* A good piece of advice is to join the atlas-sw-pat-releaseannounce@cern.ch mailing list, where new Analysis Releases are announced, along with the relevant updates in each release.
Second, assuming the points above are not issues, you can simply set up a new Analysis Release from the working directory as usual, find packages, and re-compile. If you wanted to update to AnalysisBase 9.9.99 (which doesn't actually exist), you would do:
<verbatim style="background: #e0ebf6;">
lsetup 'rcsetup Base,9.9.99'
rc find_packages
rc clean
rc compile
</verbatim>
---+ 2. xAOD samples
In this tutorial we are using an example derived xAOD file. For your analysis it is highly recommended (dare I say required) that you use a derived xAOD (due to corrections applied in going from the primary xAOD to the derived version). Information about the available derivations can be found here:
---+ 3. Creating your analysis package and algorithm
---++ Creating your analysis package
Now it is time to create your own package. RootCore provides you a script that creates a skeleton package, and we highly recommend you use it. We'll call the package MyAnalysis, but you can name it anything you want. From your working directory (the one containing RootCoreBin/) execute:
<verbatim style="background: #e0ebf6;">
rc make_skeleton MyAnalysis
</verbatim>
If you list the contents of the package directory, you'll see:
* cmt directory: Where the package Makefile sits. You'll have to modify this Makefile if your package depends on other packages (we'll do that soon).
* MyAnalysis directory: This is where all of your C++ header files go.
* Root directory: This is where all of your C++ source files go. It also holds the LinkDef.h file that ROOT uses to build dictionaries.
Now let's just ask RootCore to pick up this new package:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
Technically you don't have to rebuild after every step, since you won't be running code until the very end of this tutorial, but it is good practice to check at regular intervals whether your code still compiles.
---++ Creating your Event Loop algorithm
EventLoop is a package maintained by the ASG group for looping over events. You first start out by making an algorithm class that will hold your analysis code. We'll call it MyxAODAnalysis, but you can name it anything you want. You could create this class by hand, but the EventLoop package provides a script to create the skeleton of the class for you inside the package you specify (here MyAnalysis). From your working directory (the one containing RootCoreBin/)
<verbatim style="background: #e0ebf6;">
$ROOTCOREBIN/user_scripts/EventLoop/make_skeleton MyAnalysis MyxAODAnalysis
</verbatim>
Now we should be able to build our package with our empty algorithm:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
We had to rerun find_packages here, because make_skeleton updated the dependencies for our package. Normally you can recompile using just =rc compile=.
Of course you don't have to use EventLoop to loop through your events, you can write your own analysis code to do that. But EventLoop has some nice features that will be shown later, and has full support from the ASG group (meaning this group will help you debug your EventLoop code, add extra features, etc.).
If you manually add a new class to your package, you will need to manually add that class to the MyAnalysis/Root/LinkDef.h file.
*Basic notes about EventLoop:* <br/>
* You will see this script has created skeleton EventLoop code for you, with source code =MyAnalysis/Root/MyxAODAnalysis.cxx= and header =MyAnalysis/MyAnalysis/MyxAODAnalysis.h=
* In this EventLoop algorithm you will have (among others) the following functions:
* =initialize()=: called once, before the first event is executed
* =execute()=: called once per event
* =finalize()=: called once, after the final event has completed
* You may notice there is a return code for each EventLoop function, to report whether the function completed successfully
*Creating member variables in EventLoop:* <br/>
As with all C++ code, if you want to create a member variable of your algorithm, you add that variable (with the appropriate include statement if necessary) to your header file =MyAnalysis/MyAnalysis/MyxAODAnalysis.h=.
What is different with EventLoop is the distinction between member variables initialized in the initialize() method of your algorithm and those initialized when the steering macro is executed. The notation for each case is shown below:
| member variable initialized in initialize() | add //! after member variable declaration | Ex: int m_counter; //! |
| member variable initialized in steering macro | nothing special | Ex: int m_counter; |
ALERT! Forgetting the //! is the most common problem when using EventLoop. It tends to manifest itself as a crash when you try to run the job. In effect what it does is to tell EventLoop that the corresponding variable will not be initialized until initialize gets called. So you should add this to all variables in your algorithm, except for configuration parameters.
HELP The exact meaning of //!: What happens internally is that EventLoop will store a copy of your algorithm object on disk, and then for every sample it reads back a copy, initializes it and processes the samples. Saving objects and reading them back happens through a so called streamer. These get automatically generated for you by ROOT. Adding //! to a variable tells ROOT to ignore that variable. If you were to forget the //! in this case, ROOT would try to store a copy of the event structure on disk, but since it hasn't been initialized yet it will (most likely) crash.
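To make the notation concrete, here is a minimal sketch of how such declarations look in a header (the names are illustrative only; a real EventLoop algorithm inherits from EL::Algorithm and carries a ClassDef macro, both omitted here). Note that =//!= is an ordinary C++ comment, so it changes nothing for the compiler; it only matters to ROOT's dictionary/streamer machinery:

```cpp
#include <string>

// Sketch of an EventLoop-style algorithm header (illustrative only;
// the real class inherits from EL::Algorithm and has a ClassDef macro).
class MyxAODAnalysis
{
public:
  // configuration parameter: set in the steering macro before submission,
  // so it MUST be streamed to disk -> no //!
  std::string m_outputName;

  // transient counter: set in initialize(), so tell ROOT to skip it
  // when streaming by appending //!
  int m_eventCounter; //!
};
```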
---+ 4. Accessing xAOD quantities
---++The xAODRootAccess package
In ROOT, if you want to do any sort of real analysis you will access the xAOD objects using a package called xAODRootAccess. The good news is that this is already included in the Analysis Release (so no need to check it out!). The most important class is xAOD::TEvent, which is used to read and write information from the xAOD (it acts similarly to the D3PDReader, for those who worked with D3PDs). We will also use the xAOD::Init function to handle any xAOD access "magic" that happens behind the scenes (basically so we can talk to the interface classes).
To include this package in our analysis, first we have to update the package dependencies by adding xAODRootAccess. So in MyAnalysis/cmt/Makefile.RootCore make sure the line PACKAGE_DEP looks something like:
%CODE{"cpp"}%
PACKAGE_DEP = EventLoop xAODRootAccess
%ENDCODE%
(it probably already has EventLoop listed there).
Now we are going to add the hooks needed for the xAODRootAccess classes to our algorithm. First we'll need to add the xAODRootAccess header files to the includes at the top of MyAnalysis/Root/MyxAODAnalysis.cxx:
%CODE{"cpp"}%
// Infrastructure include(s):
#include "xAODRootAccess/Init.h"
#include "xAODRootAccess/TEvent.h"
%ENDCODE%
---+++ Using xAODRootAccess in our EventLoop package
Now we will focus on the EventLoop-specific way of using the TEvent class. In every function of our algorithm where we need information from the xAOD, either reading or writing it, we need to define a variable of type xAOD::TEvent to access this information. In this tutorial we are using EventLoop, and it has a special way of connecting to this event store, so the syntax is:
%CODE{"cpp"}%
xAOD::TEvent* event = wk()->xaodEvent();
%ENDCODE%
To prepare ourselves for the rest of the tutorial (because I know where we're going...) add the line above to MyAnalysis/Root/MyxAODAnalysis.cxx at the top of the following three functions:
* =initialize()=
* =execute()=
* =finalize()=
Now we need to tell EventLoop that we actually want to use xAODRootAccess in our job. The easiest way to do that is to add the request in the =setupJob()= function in =MyAnalysis/Root/MyxAODAnalysis.cxx=:
%SYNTAX{ syntax="cpp" }%
EL::StatusCode MyxAODAnalysis :: setupJob (EL::Job& job)
{
// Here you put code that sets up the job on the submission object
// so that it is ready to work with your algorithm, e.g. you can
// request the D3PDReader service or add output files. Any code you
// put here could instead also go into the submission script. The
// sole advantage of putting it here is that it gets automatically
// activated/deactivated when you add/remove the algorithm from your
// job, which may or may not be of value to you.
// let's initialize the algorithm to use the xAODRootAccess package
job.useXAOD ();
xAOD::Init(); // call before opening first file
return EL::StatusCode::SUCCESS;
}
%ENDSYNTAX%
Now let's slightly modify the initialize() function in MyAnalysis/Root/MyxAODAnalysis.cxx to print out the number of events. Modify it so it looks like this (basically adding the Info line; everything else should already be there):
%SYNTAX{ syntax="cpp" }%
EL::StatusCode MyxAODAnalysis :: initialize ()
{
// Here you do everything that you need to do after the first input
// file has been connected and before the first event is processed,
// e.g. create additional histograms based on which variables are
// available in the input files. You can also create all of your
// histograms and trees in here, but be aware that this method
// doesn't get called if no events are processed. So any objects
// you create here won't be available in the output if you have no
// input events.
xAOD::TEvent* event = wk()->xaodEvent(); // you should have already added this as described before
// as a check, let's see the number of events in our xAOD
Info("initialize()", "Number of events = %lli", event->getEntries() ); // print long long int
return EL::StatusCode::SUCCESS;
}
%ENDSYNTAX%
We've updated our package dependencies, so from our working directory we have to rerun rc find_packages before compiling:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
More information about using EventLoop to access xAOD quantities can be read here: <br/>
---+++ Return codes
You may have noticed that the EventLoop functions return a code (a StatusCode, similar to the StatusCode in Athena). The options are failure, success, and recoverable. In fact, every time you use xAOD::TEvent (or xAOD::TStore) you need to check the returned code. You should always check these return codes; they verify that your code is doing what you want it to do. However, in ROOT, if you don't explicitly check them, your code will compile and run fine, and you will only be pinged with warnings like:
<verbatim style="background: #e0ebf6;">
Warning in <xAOD::TReturnCode>:
Warning in <xAOD::TReturnCode>: Unchecked return codes encountered during the job
Warning in <xAOD::TReturnCode>: Number of unchecked successes: 502
</verbatim>
Since there are many, many commands we will execute that have a return code, let's use a more central tool that will check the return codes for us (this works with any type of status code return). At the top of our source code MyAnalysis/Root/MyxAODAnalysis.cxx add this header:
%SYNTAX{ syntax="cpp" }%
// ASG status code check
#include <AsgTools/MessageCheck.h>
%ENDSYNTAX%
Since we are now relying on a new package we have to update our package dependencies in MyAnalysis/cmt/Makefile.RootCore to include:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] AsgTools
%ENDSYNTAX%
Let's start with this good practice right away and take advantage of this tool. In the last section we modified the setupJob() function in MyAnalysis/Root/MyxAODAnalysis.cxx to initialize using the xAOD; in fact this returns a status code that we should check. So in the setupJob() function in MyAnalysis/Root/MyxAODAnalysis.cxx replace this line:
%SYNTAX{ syntax="cpp" }%
xAOD::Init(); // call before opening first file
%ENDSYNTAX%
with this line:
%SYNTAX{ syntax="cpp" }%
ANA_CHECK_SET_TYPE (EL::StatusCode); // set type of return code you are expecting (add to top of each function once)
ANA_CHECK(xAOD::Init());
%ENDSYNTAX%
We will take advantage of this ANA_CHECK throughout the rest of the tutorial.
Add the following line:
%SYNTAX{ syntax="cpp" }%
ANA_CHECK_SET_TYPE (EL::StatusCode); // set type of return code you are expecting (add to top of each function once)
%ENDSYNTAX%
to the top of each of the following functions in MyAnalysis/Root/MyxAODAnalysis.cxx:
* initialize()
* execute()
* finalize()
(I already know we will need to check return codes in each of these functions later in this tutorial... I'd rather get us set up now than forget later.)
*Alternative:* <br/>
Alternatively you can check each return code 'by-hand', without using the macro we have created. For the event-level material we will cover later (see Section 6 below) that would look something like this:
%SYNTAX{ syntax="cpp" }%
if( ! event->retrieve( eventInfo, "EventInfo" ).isSuccess() ){
  Error("execute()", "Failed to retrieve event info collection. Exiting." );
  return EL::StatusCode::FAILURE;
}
%ENDSYNTAX%
instead of something like this using the macro:
%SYNTAX{ syntax="cpp" }%
ANA_CHECK(event->retrieve( eventInfo, "EventInfo") );
%ENDSYNTAX%
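To see what the macro buys you, here is a minimal plain-C++ sketch of the pattern behind a check macro like ANA_CHECK (this is not the actual AsgTools implementation, and StatusCode here is a toy stand-in): evaluate the expression once and, on failure, report where it happened and return from the calling function:

```cpp
#include <cstdio>

// Toy stand-in for EL::StatusCode (illustrative only)
enum class StatusCode { SUCCESS, FAILURE };

// Minimal check macro: evaluate EXP once; on failure, print the failed
// expression and location, then return FAILURE from the *calling* function.
#define MY_CHECK( EXP )                                        \
  do {                                                         \
    StatusCode sc = ( EXP );                                   \
    if( sc != StatusCode::SUCCESS ) {                          \
      std::printf( "Check failed: %s (%s:%d)\n",               \
                   #EXP, __FILE__, __LINE__ );                 \
      return StatusCode::FAILURE;                              \
    }                                                          \
  } while( false )

StatusCode failingStep() { return StatusCode::FAILURE; }

StatusCode execute()
{
  MY_CHECK( failingStep() ); // execute() returns FAILURE from here
  return StatusCode::SUCCESS; // never reached in this sketch
}
```

The real macro additionally integrates with the ASG messaging system, but the control flow is the same: one line per call, with the early return handled for you.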
---++ Knowing what information is in the xAOD
One question everyone will have is: how do I know what information/variables are actually stored in my xAOD for each container type? You can be sure for "particles" (inheriting from IParticle) you will have things like pt, eta, and phi. But what other variables are associated to the different containers? We'll try to answer that question...
---+++ Containers and key names
In order to "retrieve" the information stored in the xAOD containers we need to know the container type and the container key name. We will use a handy script called checkxAOD.py, which is available both in the Athena world and when using a RootCore Analysis Release 2.X.Y with Y>=29. If you have an xAOD file, say xAOD.pool.root, and want to know the containers and associated key names, do the following:
<verbatim style="background: #e0ebf6;">
checkxAOD.py xAOD.pool.root
</verbatim>
(Note: You need to replace the fake xAOD.pool.root with the full path to an xAOD sample, for example $ALRB_TutorialData/p2622/mc15_13TeV.410000.PowhegPythiaEvtGen_P2012_ttbar_hdamp172p5_nonallhad.merge.DAOD_SUSY1.e3698_s2608_s2183_r7725_r7676_p2622/DAOD_SUSY1.08377960._000012.pool.root.1)
The last column will show you the xAOD container names and types. When you are retrieving information you usually need to know the container type (for example xAOD::CaloCluster) and the key name for the particular instance of that container you are interested in (for example "egammaClusters").
In your analysis you can ignore the "Aux" containers (for Auxiliary store); these hold some behind-the-scenes magic. You can also "mostly" ignore versions like _v1.
Most information in the xAOD is stored and retrieved via the Auxiliary store. The user doesn't need to worry about this Auxiliary store, and only interacts with the interface (something called TEvent for ROOT standalone analysis). So now you should know the container type and key name. If you use the wrong key name the code will compile, but it will crash at run-time.
---+++ Variables inside the containers
Now to know what variables are associated to this container, the trick I use at the moment (again maybe something official will come along...) is to use interactive ROOT. So back into your Analysis Release shell (with ROOT automatically setup), you can simply do this:
<verbatim style="background: #e0ebf6;">
root -l xAOD.pool.root
root [1] CollectionTree->Print("egammaClusters*")
</verbatim>
You will get a printout of all variables you can access from that container (aka collection). Note that the variable name you will use in your code is the one that comes after the ".", so for example you might see:
<verbatim style="background: #e0ebf6;">
egammaClusters.rawEta : vector<float>
</verbatim>
So in your analysis code (after setting up the TEvent and interface magic), you can access this variable from the xAOD::CaloCluster object by calling rawEta.
If you try to request a variable of a particular xAOD object through a member function that does not exist, the code will fail at compile-time, with the compiler complaining that the xAOD object has no member named 'whatever'.
---+++ Accessing object variables
To access variables associated to objects in your analysis code there are often special accessor functions available to help. These are associated to the objects of the containers. At the moment the best place to find these accessors is by browsing the code. All of the xAOD EDM classes live in atlasoff/Event/xAOD, and the naming should be obvious to find the object type you are interested in. Alternatively you can access the variables directly without using the accessors, but this is slow as it depends on string comparisons.
Here is one example that might clarify these points (you don't have to copy and paste this anywhere; it's just a qualitative description). Let's say you have a pointer to an xAOD muon object for a particular event, called =(*muon_itr)= (we'll actually do this later on in complete detail), and now we want to access the ptcone20 isolation variable for this muon.

To access the variable with the help of the muon accessor you can do:
%SYNTAX{ syntax="cpp" }%
float muPtCone20 = 0.; // your variable that will be filled after calling the isolation function
(*muon_itr)->isolation(muPtCone20, xAOD::Iso::ptcone20); // second arg is an enum defined in xAODPrimitives/IsolationType.h
%ENDSYNTAX%
Alternatively you can access that same variable by simply doing:
%SYNTAX{ syntax="cpp" }%
(*muon_itr)->auxdata< float >("ptcone20");
%ENDSYNTAX%
For the muons you can find the complete list of accessors in the xAOD Muon class (version 1).
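The performance remark about string-based access can be understood with a deliberately simplified model of an auxiliary store (this is not the real xAOD EDM, which caches an integer id per variable name; it only illustrates that each string-keyed auxdata call pays for a name lookup):

```cpp
#include <map>
#include <string>
#include <vector>

// Toy auxiliary store: one branch of floats per variable name.
// Illustrative only -- the real xAOD store is more sophisticated
// and resolves each name to a cached integer "auxid".
struct ToyAuxStore
{
  std::map< std::string, std::vector< float > > m_branches;

  // String-keyed access: pays for a map lookup (string comparisons)
  // on every single call.
  float auxdata( const std::string& name, std::size_t index ) const
  {
    return m_branches.at( name ).at( index );
  }
};
```

A dedicated accessor avoids repeating that name lookup on every call, which is why accessors are preferred over raw auxdata in per-event loops.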
---+ 5. Creating and running our steering macro
To actually run this EventLoop algorithm we need some steering code. This can be a root macro in either C++ or python or some compiled C++ code. For this tutorial we will create a C++ macro.
We will use another ASG tool called SampleHandler, which allows for easy sample management. In this example we will create and configure a SampleHandler object. We will specify the path to the main directory, under which there could be several subdirectories (typically representing datasets) and, within those, the individual input files. Here we will tell SampleHandler we are only interested in one input xAOD file (specified by its exact name, though wildcards are accepted to pick up several specific inputs). More information and options for using SampleHandler to 'find' your data can be found on the dedicated SampleHandler wiki.
First let's create a Run directory for it. From your main working directory execute:
<verbatim style="background: #e0ebf6;">
mkdir Run
cd Run
</verbatim>
And in that directory we will create a new file called ATestRun.cxx. Fill this new file, ATestRun.cxx, with the following:
%SYNTAX{ syntax="cpp" }%
void ATestRun (const std::string& submitDir)
{
//===========================================
// FOR ROOT6 WE DO NOT PUT THIS LINE
// (ROOT6 uses Cling instead of CINT)
// Load the libraries for all packages
// gROOT->Macro("$ROOTCOREDIR/scripts/load_packages.C");
// Instead on command line do:
// > root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
// The above works for ROOT6 and ROOT5
//==========================================
// Set up the job for xAOD access:
xAOD::Init().ignore();
// create a new sample handler to describe the data files we use
SH::SampleHandler sh;
// scan for datasets in the given directory
// this works if you are on lxplus, otherwise you'd want to copy over files
// to your local machine and use a local path. if you do so, make sure
// that you copy all subdirectories and point this to the directory
// containing all the files, not the subdirectories.
// use SampleHandler to scan all of the subdirectories of a directory for particular MC single file:
const char* inputFilePath = gSystem->ExpandPathName ("$ALRB_TutorialData/p2622/");
SH::ScanDir().filePattern("DAOD_SUSY1.08377960._000012.pool.root.1").scan(sh,inputFilePath);
// set the name of the tree in our files
// in the xAOD the TTree containing the EDM containers is "CollectionTree"
sh.setMetaString ("nc_tree", "CollectionTree");
// further sample handler configuration may go here
// print out the samples we found
sh.print ();
// this is the basic description of our job
EL::Job job;
job.sampleHandler (sh); // use SampleHandler in this job
job.options()->setDouble (EL::Job::optMaxEvents, 500); // for testing purposes, limit to run over the first 500 events only!
// add our algorithm to the job
MyxAODAnalysis* alg = new MyxAODAnalysis;

// later on we'll add some configuration options for our algorithm that go here
job.algsAdd (alg);
// make the driver we want to use:
// this one works by running the algorithm directly:
EL::DirectDriver driver;
// we can use other drivers to run things on the Grid, with PROOF, etc.
// process the job using the driver
driver.submit (job, submitDir);
}
%ENDSYNTAX%
Read over the comments carefully to understand what is happening. Notice that we will only run over the first 500 events (for testing purposes). Obviously if you were doing a real analysis you would want to remove that statement to run over all events in a sample.
Ok, now the big moment has come. Within your Run directory execute your ATestRun.cxx macro with root:
<verbatim style="background: #e0ebf6;">
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
You can quit ROOT by typing ".q" at the ROOT prompt.
IDEA! Note that submitDir is the directory where the output of your job is stored. If you want to run again, you either have to remove that directory or pass a different name into ATestRun.cxx.
<blockquote>
*Please note:* For actual work you should give more thought to how you want to organize your submission directories. Since they contain the output of your jobs you probably want to keep old versions around, so that you can compare. Also, you may not want to keep them in your source directory tree, but instead put them into a separate data directory tree. However, try to avoid putting them inside a RootCore-managed package; doing so may result in them getting copied around when submitting to the grid or a batch system.
</blockquote>
---++ Alternative: Run the job from a compiled application (optional)
In order to debug problems with the code, it is often not practical to run the job from ROOT's interpreter. So, if you encounter any problems, you should create a directory like MyAnalysis/util/, and in there put an executable source file, for instance MyAnalysis/util/testRun.cxx, with the content:
%SYNTAX{ syntax="cpp" }%
#include "xAODRootAccess/Init.h"
#include "SampleHandler/SampleHandler.h"
#include "SampleHandler/ScanDir.h"
#include "SampleHandler/ToolsDiscovery.h"
#include "EventLoop/Job.h"
#include "EventLoop/DirectDriver.h"
#include "SampleHandler/DiskListLocal.h"
#include <TSystem.h>
#include "MyAnalysis/MyxAODAnalysis.h"
int main( int argc, char* argv[] ) {
// Take the submit directory from the input if provided:
std::string submitDir = "submitDir";
if( argc > 1 ) submitDir = argv[ 1 ];
// Set up the job for xAOD access:
xAOD::Init().ignore();
// Construct the samples to run on:
SH::SampleHandler sh;
// use SampleHandler to scan all of the subdirectories of a directory for particular MC single file:
const char* inputFilePath = gSystem->ExpandPathName ("$ALRB_TutorialData/p2622/");
SH::ScanDir().filePattern("DAOD_SUSY1.08377960._000012.pool.root.1").scan(sh,inputFilePath);
// Set the name of the input TTree. It's always "CollectionTree"
// for xAOD files.
sh.setMetaString( "nc_tree", "CollectionTree" );
// Print what we found:
sh.print();
// Create an EventLoop job:
EL::Job job;
job.sampleHandler( sh );
job.options()->setDouble (EL::Job::optMaxEvents, 500);
// Add our analysis to the job:
MyxAODAnalysis* alg = new MyxAODAnalysis();
job.algsAdd( alg );
// Run the job using the local/direct driver:
EL::DirectDriver driver;
driver.submit( job, submitDir );
return 0;
}
%ENDSYNTAX%
From your working directory recompile everything:
<verbatim style="background: #e0ebf6;">
rc compile
</verbatim>
And you can execute the compiled steering macro by doing:
<verbatim style="background: #e0ebf6;">
testRun submitDir
</verbatim>
This should give the same results as shown previously by running interactively in ROOT.
---+ 6. Objects and tools for analysis
This section covers the various reconstructed objects and associated tools that you may use in your analysis. Since tutorial time is limited, please try a handful of these, but be aware of the full content of this section, as it may come in handy in your own analysis.
---++ Event-level
---+++ General event information
Now let's get some event-level information, like how many events we've processed, and if the event is data or MC.
First add the xAOD EDM event package to our MyAnalysis/cmt/Makefile.RootCore file:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] xAODEventInfo
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
In our analysis header file MyAnalysis/MyAnalysis/MyxAODAnalysis.h, let's define a variable that we will use to count the number of events we have processed; you can put it under public:
%SYNTAX{ syntax="cpp" }%
int m_eventCounter; //!
%ENDSYNTAX%
(Again, remember the //!)
Now, to our analysis source code MyAnalysis/Root/MyxAODAnalysis.cxx let's include the corresponding xAOD EDM class header file (add it near the top with all the other include statements):
%SYNTAX{ syntax="cpp" }%
// EDM includes:
#include "xAODEventInfo/EventInfo.h"
%ENDSYNTAX%
And in the initialize() method let's initialize our event counting variable to zero:
%SYNTAX{ syntax="cpp" }%
// count number of events
m_eventCounter = 0;
%ENDSYNTAX%
And one last thing - let's actually do something event-by-event, which happens in the execute() method:
%SYNTAX{ syntax="cpp" }%
// print every 100 events, so we know where we are:
if( (m_eventCounter % 100) == 0 ) Info("execute()", "Event number = %i", m_eventCounter );
m_eventCounter++;
//----------------------------
// Event information
//---------------------------
const xAOD::EventInfo* eventInfo = 0;
ANA_CHECK(event->retrieve( eventInfo, "EventInfo"));
// check if the event is data or MC
// (many tools are applied either to data or MC)
bool isMC = false;
// check if the event is MC
if(eventInfo->eventType( xAOD::EventInfo::IS_SIMULATION ) ){
isMC = true; // can do something with this later
}
%ENDSYNTAX%
Since we've updated our package dependencies, we have to rerun rc find_packages from our working directory before compiling:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
---+++ The Good Runs List
The Good Runs List (GRL) is an xml file that selects good luminosity blocks from within the data runs (spanning 1-2 minutes of data-taking). Luminosity blocks which are not listed in this xml file are considered bad, and should not be used in your analysis.
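Conceptually, a GRL is just a lookup from run number to a list of good luminosity-block ranges. The following is a rough standalone sketch of that lookup in plain C++ (it is not the ATLAS implementation; the type names and run/LB numbers are invented for illustration — the real GoodRunsListSelectionTool parses this information from the GRL xml file):

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch: run number -> list of good (first LB, last LB)
// ranges, inclusive on both ends.
using LBRange = std::pair<unsigned, unsigned>;
using GRLMap  = std::map<unsigned, std::vector<LBRange>>;

bool passRunLB(const GRLMap& grl, unsigned run, unsigned lumiBlock) {
    auto it = grl.find(run);
    if (it == grl.end()) return false;           // run not in the GRL at all
    for (const LBRange& r : it->second)
        if (lumiBlock >= r.first && lumiBlock <= r.second)
            return true;                         // LB falls in a good range
    return false;                                // run listed, but this LB is bad
}
```

This is only meant to show why the tool answers "good or not" per (run, LB) pair; in your analysis you always go through the tool's passRunLB() interface.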
We will need to use a new tool that searches this xml file and returns a boolean as to whether the luminosity block provided is good or not. This tool lives in the GoodRunsLists package, so we must tell RootCore and our analysis where to find it. Add the following to your =MyAnalysis/cmt/Makefile.RootCore= file:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] GoodRunsLists
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
Going to our header file, MyAnalysis/MyAnalysis/MyxAODAnalysis.h, near the top we will add the GoodRunsLists package header:
%SYNTAX{ syntax="cpp" }%
// GRL
#include "GoodRunsLists/GoodRunsListSelectionTool.h"
%ENDSYNTAX%
And still in MyAnalysis/MyAnalysis/MyxAODAnalysis.h in our class definition (so after the lines class MyxAODAnalysis : public EL::Algorithm) add a new member variable:
%SYNTAX{ syntax="cpp" }%
GoodRunsListSelectionTool *m_grl; //!

%ENDSYNTAX%
Now let's move to our source code MyAnalysis/Root/MyxAODAnalysis.cxx, and initialize the tool for usage in our initialize() method:
%SYNTAX{ syntax="cpp" }%
// GRL
m_grl = new GoodRunsListSelectionTool("GoodRunsListSelectionTool");
const char* GRLFilePath = "$ALRB_TutorialData/data15_13TeV.periodAllYear_DetStatus-v73-pro19-08_DQDefects-00-01-02_PHYS_StandardGRL_All_Good_25ns.xml";
const char* fullGRLFilePath = gSystem->ExpandPathName (GRLFilePath);
std::vector<std::string> vecStringGRL;
vecStringGRL.push_back(fullGRLFilePath);
ANA_CHECK(m_grl->setProperty( "GoodRunsListVec", vecStringGRL));
ANA_CHECK(m_grl->setProperty("PassThrough", false)); // if true (default) will ignore result of GRL and will just pass all events
ANA_CHECK(m_grl->initialize());
%ENDSYNTAX%
At the top of your source code (with all the other includes) we have to include the appropriate header file to use the gSystem class:
%SYNTAX{ syntax="cpp" }%
#include <TSystem.h>
%ENDSYNTAX%
Next in the execute() method, let's check if the event is in fact a data event, and if it is, then retrieve the result of the GRL for this event. After you have retrieved the EventInfo object (done above) and after you have set the flag isMC (also shown above), add these lines:
%SYNTAX{ syntax="cpp" }%
// if data, check if event passes GRL
if(!isMC){ // it's data!
if(!m_grl->passRunLB(*eventInfo)){
return EL::StatusCode::SUCCESS; // go to next event
}
} // end if not MC
%ENDSYNTAX%
And last but not least, remember to clean up the memory in finalize():
%SYNTAX{ syntax="cpp" }%
// GRL
if (m_grl) {
delete m_grl;
m_grl = 0;
}
%ENDSYNTAX%
Finally let's compile! Note we have to run rc find_packages again, as we updated the package dependencies in our Makefile.
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
Choices of GRLs are described on this page: Good Run Lists for Analysis
To add: Determine the integrated luminosity of our data sample <br/>
---+++ Removing additional detector imperfections
The GRL helps remove detector problems that affect entire luminosity blocks (1-2 minutes of data-taking). Note that even one luminosity block can contain thousands of events. To avoid throwing away perfectly fine events there are some additional event-level detector flags, to help reject single events that are plagued with some detector problem. For the 2015 data-taking period there are four flags to remove problematic events:
* due to the liquid argon system
* due to the tile calorimeter system
* due to the SCT inner detector system
* due to incomplete events (event information missing after TTC restarts)
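The incomplete-event check below uses isEventFlagBitSet(xAOD::EventInfo::Core, 18), i.e. it asks whether bit 18 is set in the Core event-flag word. The bit test itself is ordinary bit arithmetic; as a standalone sketch (not the ATLAS code):

```cpp
#include <cstdint>

// Sketch of a bit-flag test like isEventFlagBitSet(Core, 18):
// return true if bit `bit` of the 32-bit flag word is set.
bool isBitSet(uint32_t flagWord, unsigned bit) {
    return ((flagWord >> bit) & 1u) != 0;
}
```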
Here we will show you how to remove these problematic events, assuming the EventInfo EDM classes have already been included and used following the instructions at the top of this page.
We can count the number of 'clean' events, by defining a member variable in our header file MyAnalysis/MyAnalysis/MyxAODAnalysis.h:
%SYNTAX{ syntax="cpp" }%
int m_numCleanEvents; //!
%ENDSYNTAX%
Now in our source code let's initialize this event count to zero, so in the initialize() method in MyAnalysis/Root/MyxAODAnalysis.cxx simply add this line:
%SYNTAX{ syntax="cpp" }%
m_numCleanEvents = 0;
%ENDSYNTAX%
Now for the cleaning! This is something done event-by-event (think execute() method!), and only for data:
%SYNTAX{ syntax="cpp" }%
//------------------------------------------------------------
// Apply event cleaning to remove events due to
// problematic regions of the detector
// or incomplete events.
// Apply to data.
//------------------------------------------------------------
// reject event if:
if(!isMC){
if( (eventInfo->errorState(xAOD::EventInfo::LAr)==xAOD::EventInfo::Error ) ||
(eventInfo->errorState(xAOD::EventInfo::Tile)==xAOD::EventInfo::Error ) ||
(eventInfo->errorState(xAOD::EventInfo::SCT)==xAOD::EventInfo::Error ) ||
(eventInfo->isEventFlagBitSet(xAOD::EventInfo::Core, 18) ) )
{
return EL::StatusCode::SUCCESS; // go to the next event
} // end if event flags check
} // end if the event is data
m_numCleanEvents++;
%ENDSYNTAX%
You can print out the final number of clean events from the finalize() method:
%SYNTAX{ syntax="cpp" }%
Info("finalize()", "Number of clean events = %i", m_numCleanEvents);
%ENDSYNTAX%
This will do nothing for MC files, but could actually remove some events from the 2015 data samples (you could try using the data xAOD file as input in your steering macro).
You can compile your package as before (we do not need to run rc find_packages again as we did not update the package dependencies), and you can run your job in ROOT from the Run/ directory as before. From your working directory:
<verbatim style="background: #e0ebf6;">
rc compile
cd Run/
rm -rf submitDir/
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
---++ Jets
Let's create a basic loop over a jet container, for the AntiKt4EMTopoJets jet collection:
First, let's add the jet xAOD EDM package to MyAnalysis/cmt/Makefile.RootCore:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] xAODJet
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
Now in our source code MyAnalysis/Root/MyxAODAnalysis.cxx, add an include statement for the jet xAOD EDM container class near the top:
%SYNTAX{ syntax="cpp" }%
#include "xAODJet/JetContainer.h"
%ENDSYNTAX%
Still in the source code, in execute() (run for every event) retrieve the jet container and loop over all jets for this event:
%SYNTAX{ syntax="cpp" }%
// get jet container of interest
const xAOD::JetContainer* jets = 0;
ANA_CHECK(event->retrieve( jets, "AntiKt4EMTopoJets" ));
Info("execute()", " number of jets = %lu", jets->size());
// loop over the jets in the container
xAOD::JetContainer::const_iterator jet_itr = jets->begin();
xAOD::JetContainer::const_iterator jet_end = jets->end();
for( ; jet_itr != jet_end; ++jet_itr ) {
Info("execute()", " jet pt = %.2f GeV", ((*jet_itr)->pt() * 0.001)); // just to print out something

} // end for loop over jets
%ENDSYNTAX%
If you are not sure of the container type and/or container name see above.
You can do the usual RootCore thing to update/find the (new) package dependencies and compile:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
---+++ Jet cleaning tool
Let's add the jet cleaning tool to our code. This jet selector tool applies the jet cleaning described on this page: How to clean jets 2015. It is a cleaning that you could apply to jets in both data and MC (see the dedicated wiki page for the recommended use).
Below we describe the steps necessary to include this jet cleaning in our analysis code:
First, add the package dependency to MyAnalysis/cmt/Makefile.RootCore:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] JetSelectorTools
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
Second, add the class dependency to our header file, at the top near where all the other "includes" are, we will add:
%SYNTAX{ syntax="cpp" }%
#include "JetSelectorTools/JetCleaningTool.h"
%ENDSYNTAX%
Still in the header file, let's create an instance of the tool. In the class definition (so somewhere within class MyxAODAnalysis : public EL::Algorithm{...}) :
%SYNTAX{ syntax="cpp" }%
JetCleaningTool *m_jetCleaning; //!
%ENDSYNTAX%
Now moving to our source code, =MyAnalysis/Root/MyxAODAnalysis.cxx=, we need to initialize this tool; in the initialize() method add these lines:
%SYNTAX{ syntax="cpp" }%
// initialize and configure the jet cleaning tool
m_jetCleaning = new JetCleaningTool("JetCleaning");
m_jetCleaning->msg().setLevel( MSG::DEBUG );
ANA_CHECK(m_jetCleaning->setProperty( "CutLevel", "LooseBad"));
ANA_CHECK(m_jetCleaning->setProperty("DoUgly", false));
ANA_CHECK(m_jetCleaning->initialize());
%ENDSYNTAX%
Then, in finalize(), don't forget to delete the tool from memory.
%SYNTAX{ syntax="cpp" }%
if( m_jetCleaning ) {
delete m_jetCleaning;
m_jetCleaning = 0;
}
%ENDSYNTAX%
In the above we have set a DEBUG level of text information to be printed out, and we have set the cut level to "LooseBad"; there are several other options, but the choice is usually analysis dependent.
Now in the execute() method, just before we loop over the jets, let's add an integer variable to count the number of good jets passing this cleaning:
%SYNTAX{ syntax="cpp" }%
int numGoodJets = 0;
%ENDSYNTAX%
And in the loop over the jets, check if the jet passes the cleaning criteria by adding:
%CODE{"cpp"}%
if( !m_jetCleaning->accept( **jet_itr )) continue; // only keep good clean jets

numGoodJets++;
%ENDCODE%
After the jet loop you could print out the number of good jets.
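The counting itself is ordinary C++. As a standalone illustration of the pattern (here a simple pt threshold stands in for the JetCleaningTool verdict, and the jet pt values are invented):

```cpp
#include <algorithm>
#include <vector>

// Count the jets passing some accept() decision; a pt cut (in GeV)
// stands in here for the cleaning verdict.
int countGoodJets(const std::vector<double>& jetPtGeV, double minPtGeV) {
    return std::count_if(jetPtGeV.begin(), jetPtGeV.end(),
                         [minPtGeV](double pt) { return pt > minPtGeV; });
}
```

In the analysis code, numGoodJets plays this role: it is incremented only for jets that survive the accept() call.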
<blockquote>
Apart from the setup and configuration, here is a quick comparison of how this tool was called for jets using D3PDs and how it is now called using xAODs:
*D3PDs* <br/>
We showed you how to do this in the D3PD-version of this tutorial (using the D3PDReader) here: SoftwareTutorialAnalyzingD3PDsInROOT: Using Object Selectors
%SYNTAX{ syntax="cpp" }%
my_JetCleaningTool->accept(event->jet_AntiKt4EMTopo[i].eta(),
event->jet_AntiKt4EMTopo[i].NegativeE(),
event->jet_AntiKt4EMTopo[i].hecf(),
event->jet_AntiKt4EMTopo[i].HECQuality(),
event->jet_AntiKt4EMTopo[i].emfrac(),
event->jet_AntiKt4EMTopo[i].sumPtTrk_pv0_500MeV(),
event->jet_AntiKt4EMTopo[i].LArQuality(),
event->jet_AntiKt4EMTopo[i].Timing(),
event->jet_AntiKt4EMTopo[i].fracSamplingMax(),
event->jet_AntiKt4EMTopo[i].AverageLArQF() ))
%ENDSYNTAX%
which was error prone (in fact, for a few editions of the tutorial we had one of the arguments wrong, but everything still ran!).
*xAODs* <br/>
%SYNTAX{ syntax="cpp" }%
m_jetCleaning->accept( *jet_itr );

%ENDSYNTAX%
Ta-da! Isn't that just SO much nicer :-)
</blockquote>
---+++ Jet energy resolution
This tool takes jets as inputs and smears the MC such that the energy resolution matches that of data (note in 2015 the data and MC agreed pretty well, so this smearing was not necessary). The tool also provides the uncertainty associated with the energy resolution. More official information about this tool (including configuration options) can be found on the following jet pages:
Here we will show you how to implement the tool. Like all tools, there are several steps:
*1.* Add the package to MyAnalysis/cmt/Makefile.RootCore:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] JetResolution
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
*2.* Create a member variable to our class. In MyAnalysis/MyAnalysis/MyxAODAnalysis.h add the following lines in the appropriate place (hopefully you know by now where that appropriate place is):
%SYNTAX{ syntax="cpp" }%
#include "JetResolution/JERTool.h"
...
// JER
JERTool *m_JERTool; //!

%ENDSYNTAX%
*3.* In the source code, MyAnalysis/Root/MyxAODAnalysis.cxx, initialize the tool, do something with the tool every event, and finally delete the new tool. Add these lines to the appropriate places, but if you're not sure please ask us!
%SYNTAX{ syntax="cpp" }%
#include <TSystem.h> // used to define JERTool calibration path (you may already have this from the GRL part)
...
// instantiate and initialize the JER (using default configurations)
m_JERTool = new JERTool("JERTool");
ANA_CHECK(m_JERTool->initialize());
...
// event-by-event (execute) and in a loop over jets:
// JER and uncert
if(isMC){ // assuming isMC flag has been set based on eventInfo->eventType( xAOD::EventInfo::IS_SIMULATION )
// Get the MC resolution
double mcRes = m_JERTool->getRelResolutionMC( *jet_itr );

// Get the resolution uncertainty
double uncert = m_JERTool->getUncertainty(*jet_itr); // you can provide a second argument to specify which nuisance parameter; the default is all
Info("execute()", "jet mcRes = %f , uncert = %f", mcRes, uncert);
} // end if MC
...
// in finalize, delete tool:
if(m_JERTool){
delete m_JERTool;
m_JERTool = 0;
}
%ENDSYNTAX%
We've done a lot of stuff - let's recompile the package and test that we didn't break anything:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
cd Run/
rm -rf submitDir/
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
---++ Muons
In this introductory part we will show you how to make a basic loop over muon objects. This is very much the same thing we did for jets already.
Add the muon xAOD EDM package to MyAnalysis/cmt/Makefile.RootCore:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] xAODMuon
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
Now in our source code MyAnalysis/Root/MyxAODAnalysis.cxx, add an include statement for the muon xAOD EDM container class near the top:
%SYNTAX{ syntax="cpp" }%
#include "xAODMuon/MuonContainer.h"
%ENDSYNTAX%
Still in the source code, in execute() (run for every event) retrieve the muon container and loop over all muons for this event:
%SYNTAX{ syntax="cpp" }%
//------------
// MUONS
//------------
// get muon container of interest
const xAOD::MuonContainer* muons = 0;
ANA_CHECK(event->retrieve( muons, "Muons" ));
// loop over the muons in the container
xAOD::MuonContainer::const_iterator muon_itr = muons->begin();
xAOD::MuonContainer::const_iterator muon_end = muons->end();
for( ; muon_itr != muon_end; ++muon_itr ) {
Info("execute()", " original muon pt = %.2f GeV", ((*muon_itr)->pt() * 0.001)); // just to print out something

} // end for loop over muons
%ENDSYNTAX%
If you are not sure of the container type and/or container name see above.
You can do the usual RootCore thing to update/find the (new) package dependencies and compile:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
More information about using muons for DC14 can be found here: <br/>
---+++ Muon calibration and smearing tool, and associated systematics
Now let's use the MuonCalibrationAndSmearingTool found in the MuonMomentumCorrections package which is used to correct the MC to look like the data.
As with all tools we have to do the basics:
*1.* Let RootCore know where to find the package that we will use, by adding it to =MyAnalysis/cmt/Makefile.RootCore=:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] MuonMomentumCorrections
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
*2.* Create our member variable in your class =MyAnalysis/MyAnalysis/MyxAODAnalysis.h= by adding the following lines in the appropriate place (hopefully you know by now where that appropriate place is):
%SYNTAX{ syntax="cpp" }%
#include "MuonMomentumCorrections/MuonCalibrationAndSmearingTool.h"
CP::MuonCalibrationAndSmearingTool *m_muonCalibrationAndSmearingTool; //!

%ENDSYNTAX%
*3.* In the source code, !MyAnalysis/Root/MyxAODAnalysis.cxx, initialize the tool, and finally delete the new tool (we'll show you in a second what we will do event-by-event). Add these lines to the appropriate places, but if you're not sure please ask us!
%SYNTAX{ syntax="cpp" }%
...
// initialize the muon calibration and smearing tool
m_muonCalibrationAndSmearingTool = new CP::MuonCalibrationAndSmearingTool( "MuonCorrectionTool" );
//m_muonCalibrationAndSmearingTool->msg().setLevel( MSG::DEBUG );
ANA_CHECK(m_muonCalibrationAndSmearingTool->initialize());
...
// in finalize, delete tool:
if(m_muonCalibrationAndSmearingTool){
delete m_muonCalibrationAndSmearingTool;
m_muonCalibrationAndSmearingTool = 0;
}
%ENDSYNTAX%
*Tool usage* <br/>
Ok, now we have the basics out of the way, so let's actually do something with this tool, event by event. Here we will show you a nice example of a tool that implements all the dual-use guidelines, from smearing the pt of MC muons, to applying systematic uncertainties associated to this correction.
In this example the tool will actually 'update' the muon's pt to the corrected value (leaving all other variables the same). We can't actually override values of the input (const) objects, so we must create a new (non-const) muon object that is a copy of the original muon.
There are two ways to do this:
1. Create a shallow copy of the original muon object and apply the correction directly to this copy, via the applyCorrection(xAOD::Muon& mu) method.
1. Create a new Muon object and pass this to the correctedCopy(const xAOD::Muon& input, xAOD::Muon*& output) method, which will automatically create a deep copy of the original muon and modify the copy.
Here we will show you the first method.
Near the top of MyAnalysis/Root/MyxAODAnalysis.cxx add an include statement so we can check the return status of the tool:
%SYNTAX{ syntax="cpp" }%
#include "PATInterfaces/CorrectionCode.h" // to check the return correction code status of tools
#include "xAODCore/ShallowAuxContainer.h"
#include "xAODCore/ShallowCopy.h"
%ENDSYNTAX%
Still in our source code, in execute(), somewhere after you have already retrieved xAOD::MuonContainer* muons from TEvent, let's create the shallow copy of the muon container and loop over this shallow copy.
%SYNTAX{ syntax="cpp" }%
// create a shallow copy of the muons container
std::pair< xAOD::MuonContainer*, xAOD::ShallowAuxContainer* > muons_shallowCopy = xAOD::shallowCopyContainer( *muons );

// iterate over our shallow copy
xAOD::MuonContainer::iterator muonSC_itr = (muons_shallowCopy.first)->begin();
xAOD::MuonContainer::iterator muonSC_end = (muons_shallowCopy.first)->end();
for( ; muonSC_itr != muonSC_end; ++muonSC_itr ) {
if(m_muonCalibrationAndSmearingTool->applyCorrection(**muonSC_itr) == CP::CorrectionCode::Error){ // apply correction and check return code
// Can have CorrectionCode values of Ok, OutOfValidityRange, or Error. Here only checking for Error.
// If OutOfValidityRange is returned no modification is made and the original muon values are taken.
Error("execute()", "MuonCalibrationAndSmearingTool returns Error CorrectionCode");
}
Info("execute()", " corrected muon pt = %.2f GeV", ((*muonSC_itr)->pt() * 0.001));
} // end for loop over shallow copied muons
delete muons_shallowCopy.first;
delete muons_shallowCopy.second;
%ENDSYNTAX%
*Tool systematic uncertainties* <br/>
The dual-use tools infrastructure has set in place a systematic 'registry', where all tools can list (by strings) the systematics associated with that particular tool. When a tool gets initialized for use in your analysis, you can setup this registry and get the list of all systematics recommended by the tool developers.
The ideal use case is that you:
* loop over the events
* loop over the systematics
* use the CP tool, which will be automatically configured for you for the systematic you are evaluating (or no systematic which defaults to the nominal tool use)
You can then decide what you actually do with the outcome of these systematics: use the results in event/object selections, write out individual ntuples or trees per systematic, etc.
First we will show you how to setup the systematic registry. In your header file, MyAnalysis/MyAnalysis/MyxAODAnalysis.h, add the following lines in the appropriate spots (of course ask if you're not sure):
%SYNTAX{ syntax="cpp" }%
// header for systematics:
#include "PATInterfaces/SystematicRegistry.h"
...
// list of systematics
std::vector<CP::SystematicSet> m_sysList; //!
%ENDSYNTAX%
Now to the source code, =MyAnalysis/Root/MyxAODAnalysis.cxx, let's add the following with the other headers near the top:
%SYNTAX{ syntax="cpp" }%
// header files for systematics:
#include "PATInterfaces/SystematicVariation.h"
#include "PATInterfaces/SystematicsUtil.h"
%ENDSYNTAX%
And in initialize() let's get the list of systematics from the systematic registry associated to our initialized tools. This must be done after all the other tools have been initialized.
%SYNTAX{ syntax="cpp" }%
// get the systematics registry and add the recommended systematics into our list of systematics to run over (+/-1 sigma):
const CP::SystematicRegistry& registry = CP::SystematicRegistry::getInstance();
const CP::SystematicSet& recommendedSystematics = registry.recommendedSystematics(); // get list of recommended systematics
m_sysList = CP::make_systematics_vector(recommendedSystematics);
%ENDSYNTAX%
In execute() we should modify the part where we loop over the muons and apply the muon calibration and smearing tool to look something like this instead (where first we loop over the systematics, apply the systematic to the appropriate tool, and then loop over the muon objects that are smeared according to this systematic):
%SYNTAX{ syntax="cpp" }%
std::vector<CP::SystematicSet>::const_iterator sysListItr;
// loop over recommended systematics
for (sysListItr = m_sysList.begin(); sysListItr != m_sysList.end(); ++sysListItr){
if((*sysListItr).name()=="") std::cout << "Nominal (no syst) " << std::endl;

else std::cout << "Systematic: " << (*sysListItr).name() << std::endl;
// apply recommended systematic for muonCalibrationAndSmearingTool
if( m_muonCalibrationAndSmearingTool->applySystematicVariation( *sysListItr ) != CP::SystematicCode::Ok ) {
Error("execute()", "Cannot configure muon calibration tool for systematic" );
continue; // go to next systematic
} // end check that systematic applied ok
// create a shallow copy of the muons container
std::pair< xAOD::MuonContainer*, xAOD::ShallowAuxContainer* > muons_shallowCopy = xAOD::shallowCopyContainer( *muons );

// iterate over our shallow copy
xAOD::MuonContainer::iterator muonSC_itr = (muons_shallowCopy.first)->begin();
xAOD::MuonContainer::iterator muonSC_end = (muons_shallowCopy.first)->end();
for( ; muonSC_itr != muonSC_end; ++muonSC_itr ) {
m_muonCalibrationAndSmearingTool->applyCorrection(**muonSC_itr);
Info("execute()", "corrected muon pt = %.2f GeV", ((*muonSC_itr)->pt() * 0.001));
} // end for loop over shallow copied muons
delete muons_shallowCopy.first;
delete muons_shallowCopy.second;
} // end for loop over systematics
%ENDSYNTAX%
Now you can recompile to see this new tool in action:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
cd Run/
rm -rf submitDir/
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
---++ Taus
More information about looping over taus and accessing tau xAOD quantities can be found on this page: <br/>
Information about tau-related tools for:
* TauSelectionTool: generic tool to apply a set of requirements on tau candidates
* TauSmearingTool: currently supports tau energy corrections
* TauEfficiencyCorrectionsTool: currently provides jet identification scale factors and the associated uncertainties
can be read in this very nice documentation included in the package: <br/>
https://svnweb.cern.ch/trac/atlasoff/browser/PhysicsAnalysis/TauID/TauAnalysisTools/tags/TauAnalysisTools-00-00-04/README.rst <br/>
(Note I have linked to TauAnalysisTools-00-00-04 as this is the tagged version in AnalysisBase 2.0.6, but this can/will change with release)
---++ MC Truth
Lots of goodies about how to extract MC truth information in this talk: Monte Carlo truth in the xAOD
---++ Trigger
Here we will show you how to do some simple things with the trigger (namely if the trigger passed/failed and the final trigger prescale factor). First let me mention that the trigger is a complicated beast... It relies on meta-data, which is "data about the data", like which trigger menu was used which contains info about the thresholds, chains, and prescales. And it relies on event-by-event level information, like which triggers passed/failed and the associated trigger objects that fired these triggers (like the object that was identified as an electron at the trigger level that fired a particular electron trigger). Because of this level of complication you can't just browse the trigger decision in a TBrowser in ROOT.
The primary tool you must use to interact with the trigger is called the TrigDecisionTool. Note that if you want to use the TrigDecisionTool in EventLoop (which is what we are doing here in this tutorial), you must also make use of another tool: the TrigConf::xAODConfigTool. Both must be created on the "heap" (i.e., as pointer member variables declared in your header file). But we'll lead you through all that below!
Let's do the basics :
*1.* Let RootCore know where to find the packages containing the tools that we will use, by adding the following lines to =MyAnalysis/cmt/Makefile.RootCore=:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] TrigConfxAOD TrigDecisionTool
%ENDSYNTAX%
where [...] is any other package dependencies you may already have included.
*2.* Create our member variables in =MyAnalysis/MyAnalysis/MyxAODAnalysis.h= by adding the following lines in the appropriate place (hopefully you know by now where that appropriate place is):
%SYNTAX{ syntax="cpp" }%
// include files for using the trigger tools
#include "TrigConfxAOD/xAODConfigTool.h"
#include "TrigDecisionTool/TrigDecisionTool.h"
// trigger tools member variables
Trig::TrigDecisionTool *m_trigDecisionTool; //!

TrigConf::xAODConfigTool *m_trigConfigTool; //!
%ENDSYNTAX%
*3.* In the source code, MyAnalysis/Root/MyxAODAnalysis.cxx, we will initialize the tool in the initialize() function. This requires a few more steps than other tools, as we are setting up both tools: TrigDecisionTool and TrigConf::xAODConfigTool.
%SYNTAX{ syntax="cpp" }%
// Initialize and configure trigger tools
m_trigConfigTool = new TrigConf::xAODConfigTool("xAODConfigTool"); // gives us access to the meta-data
ANA_CHECK( m_trigConfigTool->initialize() );
ToolHandle< TrigConf::ITrigConfigTool > trigConfigHandle( m_trigConfigTool );
m_trigDecisionTool = new Trig::TrigDecisionTool("TrigDecisionTool");
ANA_CHECK(m_trigDecisionTool->setProperty( "ConfigTool", trigConfigHandle ) ); // connect the TrigDecisionTool to the ConfigTool
ANA_CHECK(m_trigDecisionTool->setProperty( "TrigDecisionKey", "xTrigDecision" ) );
ANA_CHECK(m_trigDecisionTool->initialize() );
%ENDSYNTAX%
*4.* Still in the source code, MyAnalysis/Root/MyxAODAnalysis.cxx, let's delete the variables (and associated memory allocation) in the finalize() method:
%SYNTAX{ syntax="cpp" }%
// cleaning up trigger tools
if( m_trigConfigTool ) {
delete m_trigConfigTool;
m_trigConfigTool = 0;
}
if( m_trigDecisionTool ) {
delete m_trigDecisionTool;
m_trigDecisionTool = 0;
}
%ENDSYNTAX%
*5.* Let's actually do something now, event-by-event. Here we will get all triggers matching the string "HLT_xe80*", and for each of those we will get the associated trigger chain (the L1 input). And for each chain group we will ask if the chain passed or failed the trigger selection (so like an AND of all parts of that chain, L1 and HLT), and also the total chain prescale (so like the multiplication of the L1 and HLT trigger prescales). For MC the prescales are 1, but for data they are very often not equal to 1. So in the execute() function in MyAnalysis/Root/MyxAODAnalysis.cxx (after you have defined an xAOD::TEvent object) add the following lines:
%SYNTAX{ syntax="cpp" }%
// examine the HLT_xe80* chains, see if they passed/failed and their total prescale
auto chainGroup = m_trigDecisionTool->getChainGroup("HLT_xe80.*");
std::map<std::string,int> triggerCounts;
for(auto &trig : chainGroup->getListOfTriggers()) {
auto cg = m_trigDecisionTool->getChainGroup(trig);
std::string thisTrig = trig;
Info( "execute()", "%30s chain passed(1)/failed(0): %d total chain prescale (L1*HLT): %.1f", thisTrig.c_str(), cg->isPassed(), cg->getPrescale() );
} // end for loop (c++11 style) over chain group matching "HLT_xe80*"
%ENDSYNTAX%
Now you can recompile to see this new tool in action:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
cd Run/
rm -rf submitDir/
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
Other references:
* xAOD trigger analysis in ROOT (January 2015) (Note that at the time of that tutorial the trigger information was not accessible from within ROOT in the primary xAODs (the xAODs used in this tutorial), so this dedicated trigger tutorial used a special xAOD that had a fix applied so that trigger information can be accessed.)
* TriggerAnalysisTools: a wiki page with more complex examples in both RootCore and Athena
---++ Metadata
Metadata is basically "data about data" and includes things like the trigger menu, MC channel number, number of events processed in a file, and luminosity information. The (D)xAOD is being improved to include more and more of this information. And more tools are becoming available to be able to access this information from within your analysis code. More information (and some examples) about accessing this metadata can be found here: Analysis Metadata
---++ More tools and their status
You can find more documentation about each xAOD object and associated tools in the appropriate workbook:
The status of CP tool migration to the xAOD can be found on the Tools xAOD Migration page. Please check here to see if the tool you need has been migrated. You should be able to configure and use these tools in a similar way to those shown above.
---+ 7. Creating and saving histograms
Usually you will want to create and save histograms to a root file. With EventLoop this is really easy to do. EventLoop handles much of the heavy lifting, and as the user you just need to define what histogram you want (type, range, etc.), and fill it appropriately. When working with PROOF (feature to come), EventLoop will take care of collecting the histograms from all the worker nodes, merging them and saving them on the submission node.
In this simple example (to demonstrate how it is done) we will plot the pt of one jet collection. In the header file MyAnalysis/MyAnalysis/MyxAODAnalysis.h add an include for the TH1 class file (before the class statement):
%SYNTAX{ syntax="cpp" }%
#include <TH1.h>
%ENDSYNTAX%
Add a histogram pointer as a member to our MyxAODAnalysis algorithm class (in MyxAODAnalysis.h). You can make it either public or private; it doesn't matter. Please note that the //! is important:
%SYNTAX{ syntax="cpp" }%
TH1 *h_jetPt; //!

%ENDSYNTAX%
Now inside the source file MyAnalysis/Root/MyxAODAnalysis.cxx, we need to book the histogram and add it to the output. That happens in the histInitialize function:
%SYNTAX{ syntax="cpp" }%
EL::StatusCode MyxAODAnalysis :: histInitialize ()
{
// Here you do everything that needs to be done at the very
// beginning on each worker node, e.g. create histograms and output
// trees. This method gets called before any input files are
// connected.
h_jetPt = new TH1F("h_jetPt", "h_jetPt", 100, 0, 500); // jet pt [GeV]
wk()->addOutput (h_jetPt);
return EL::StatusCode::SUCCESS;
}
%ENDSYNTAX%
This method is called before processing any events. Note that the wk()->addOutput call is a mechanism EventLoop uses for delivering the results of an algorithm to the outside world. When running in PROOF, ROOT will merge all of the objects in this list. In principle you can place any object that inherits from TObject in there, but if you put anything other than histograms there you need to take special precautions for the merging step.
Now that we have the histogram, we also need to fill it. For that we can use the execute() method (called for every event). You probably already have a loop over the AntiKt4EMTopoJets collection (as described above); let's fill this histogram in that jet loop, so it looks something like:
%SYNTAX{ syntax="cpp" }%
// loop over the jets in the container
xAOD::JetContainer::const_iterator jet_itr = jets->begin();
xAOD::JetContainer::const_iterator jet_end = jets->end();
for( ; jet_itr != jet_end; ++jet_itr ) {
Info("execute()", " jet pt = %.2f GeV", ((*jet_itr)->pt() * 0.001)); // just to print out something
h_jetPt->Fill( ( (*jet_itr)->pt()) * 0.001); // GeV
} // end for loop over jets
%ENDSYNTAX%
Now we should be able to build our package with the full algorithm:
<verbatim style="background: #e0ebf6;">
rc compile
</verbatim>
And rerun from the Run/ directory:
<verbatim style="background: #e0ebf6;">
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
*Outputs* <br/>
Once your job is finished you can find the histogram(s) inside the unique directory you created at job submission ( submitDir/). There is a different file for each sample you submitted ( hist-label.root), so in our case we have only one: submitDir/hist-AOD.05352803._000031.pool.root.1.root. Please note that the second part of the histogram file name corresponds directly to the name of the sample in SampleHandler, while the first part ( hist-) is set by EventLoop and cannot be changed.
A second output is made in the submitDir/hist/ directory such that you can access these histograms through SampleHandler, e.g.:
%SYNTAX{ syntax="cpp" }%
SH::SampleHandler sh;
sh.load ("submitDir/");
sh.get ("hist")->readHist ("h_jetPt");
%ENDSYNTAX%
If you want, at the very end of your steering macro, Run/ATestRun.cxx (after the job has been submitted with the driver), you can ask SampleHandler to plot this histogram (a canvas with the histogram will pop up) by adding the following lines:
%SYNTAX{ syntax="cpp" }%
// Fetch and plot our histogram
SH::SampleHandler sh_hist;
sh_hist.load (submitDir + "/hist");
TH1 *hist = (TH1*) sh_hist.get ("AOD.05352803._000031.pool.root.1")->readHist ("h_jetPt");
hist->Draw ();
%ENDSYNTAX%
Note that if you submit your job to the Grid (shown later) you should remove these lines.
---+ 8. Creating and saving ntuples with trees
In this section we'll build a simple example of writing a new ntuple. You may want to write out an ntuple (with a tree and branches connected to that tree) after you have finished working within the xAOD format, for example after you have applied the object tools and systematic uncertainties, as input to subsequent steps of your analysis (such as a statistical package like HistFitter).
For this example we will use EventLoop to define an output stream and we will use the EventLoop NTupleSvc to create an ntuple (root file). This will be a trivial example showing you how to fill one tree with one very simple quantity (the event number), but you can extend this example to produce multiple trees in one root file.
First let's open our header file, MyAnalysis/MyAnalysis/MyxAODAnalysis.h, and add the following lines which will define the name of output file (which will be supplied when the macro is executed), the TTree, and the branch we will fill in the ntuple (just the event number, not very exciting):
%SYNTAX{ syntax="cpp" }%
// defining the output file name and tree that we will put in the output ntuple, also the one branch that will be in that tree
std::string outputName;
TTree *tree; //!

int EventNumber; //!
%ENDSYNTAX%
Notice outputName does not have a //! beside it; that's because we will define it at run time (in the macro). Before closing this header file we also need to include the TTree header file near the top of our code:
%SYNTAX{ syntax="cpp" }%
#include <TTree.h>
%ENDSYNTAX%
Now in our source code, MyAnalysis/Root/MyxAODAnalysis.cxx, let's do the actual implementation. First in the histInitialize() method, add these lines:
%SYNTAX{ syntax="cpp" }%
// get the output file, create a new TTree and connect it to that output
// define what branches will go in that tree
TFile *outputFile = wk()->getOutputFile (outputName);
tree = new TTree ("tree", "tree");
tree->SetDirectory (outputFile);
tree->Branch("EventNumber", &EventNumber);
%ENDSYNTAX%
Now in the execute() method let's actually fill the branch. You should do this somewhere after you have retrieved the EventInfo object:
%SYNTAX{ syntax="cpp" }%
// fill the branches of our trees
EventNumber = eventInfo->eventNumber();
%ENDSYNTAX%
And then somewhere near the end of the execute() method be sure to fill the tree:
%SYNTAX{ syntax="cpp" }%
tree->Fill();
%ENDSYNTAX%
Finally near the top of the source code add the include statement for the TFile object we are using:
%SYNTAX{ syntax="cpp" }%
#include <TFile.h>
%ENDSYNTAX%
To MyAnalysis/cmt/Makefile.RootCore add the dependency on EventLoopAlgs, which is where the NTupleSvc lives:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] EventLoopAlgs
%ENDSYNTAX%
After these modifications we should re-compile our package and make sure it works:
<verbatim style="background: #e0ebf6;">
rc find_packages
rc compile
</verbatim>
Moving now to our Run/ directory and our macro ATestRun.cxx, after you have defined the job but before you have added your algorithm to the job, add these lines which will create an output and an ntuple in that output stream:
%SYNTAX{ syntax="cpp" }%
// define an output and an ntuple associated to that output
EL::OutputStream output ("myOutput");
job.outputAdd (output);
EL::NTupleSvc *ntuple = new EL::NTupleSvc ("myOutput");
job.algsAdd (ntuple);
%ENDSYNTAX%
After the line job.algsAdd (alg); add the following line, which lets your algorithm know the name of the output stream:
%SYNTAX{ syntax="cpp" }%
alg->outputName = "myOutput"; // give the name of the output to our algorithm
%ENDSYNTAX%
(If you are using the compiled macro you will need to add two include statements, for the NTupleSvc and the output stream: #include <EventLoopAlgs/NTupleSvc.h> and #include <EventLoop/OutputStream.h>.)
And that's it. You can rerun your macro again:
<verbatim style="background: #e0ebf6;">
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
Your new ntuple will appear in the directory submitDir/data-myOutput/.
---+ 9. Playing with xAOD containers
Here we will show you how to modify or create new xAOD containers/objects, and then how to write these to another xAOD in ROOT.
Note that if you do it this way you will not be able to read the output xAOD back into Athena. We will make use of some methods in the xAODRootAccess package to create our new xAOD.
*IMPORTANT NOTE:* If you are creating a new smaller xAOD file that is used by your analysis group, consider setting up a derivation in the Derivation Reduction Framework, which will automatically generate this smaller xAOD in the production system for you. See the Derivation Framework for more information.
---++ Creating a new xAOD output in EventLoop
Since we are using EventLoop we need to tell it we are writing a new output. In MyAnalysis/Root/MyxAODAnalysis.cxx go into the setupJob (EL::Job& job) function and let it know about our new output:
%SYNTAX{ syntax="cpp" }%
// tell EventLoop about our output xAOD:
EL::OutputStream out ("outputLabel", "xAOD");
job.outputAdd (out);
%ENDSYNTAX%
And put this class into our list of included header files:
%SYNTAX{ syntax="cpp" }%
#include "EventLoop/OutputStream.h"
%ENDSYNTAX%
The second argument to the OutputStream constructor is needed to correctly merge xAOD outputs (including the metadata) on the grid. AnalysisBase-2.3.15 (or AnalysisBase 2.1.35 for the DC14 version) or later is needed to use this option.
Now in initialize() let's tell our instance of TEvent that we are writing an output file, and to be ready to set some output containers/branches active:
%SYNTAX{ syntax="cpp" }%
// output xAOD
TFile *file = wk()->getOutputFile ("outputLabel");

ANA_CHECK(event->writeTo(file));
%ENDSYNTAX%
---++ Copying full container(s) to a new xAOD
Here we will show you how to copy the contents of a full container, un-modified, for every event. We assume you have followed the instructions above to define a new output xAOD in EventLoop.
We will create this copy in the event loop, so in MyAnalysis/Root/MyxAODAnalysis.cxx in the execute() method add the following line to copy the full container for AntiKt4EMTopoJets:
%SYNTAX{ syntax="cpp" }%
// copy full container(s) to new xAOD
// without modifying the contents of it:
ANA_CHECK(event->copy("AntiKt4EMTopoJets"));
%ENDSYNTAX%
At the *end* of execute() add this line to fill the xAOD with the content we have specified in the event loop:
%SYNTAX{ syntax="cpp" }%
// Save the event:
event->fill();
%ENDSYNTAX%
Finally, in finalize() let's tell the job to close up the output xAOD by adding:
%SYNTAX{ syntax="cpp" }%
// finalize and close our output xAOD file:
TFile *file = wk()->getOutputFile ("outputLabel");

ANA_CHECK(event->finishWritingTo( file ));
%ENDSYNTAX%
Compile like usual and test your code:
<verbatim style="background: #e0ebf6;">
rc compile
cd Run
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
(Remember that if you already have a submitDir directory the job will crash.)
If you have followed the instructions above you will find your output xAOD in submitDir/data-outputLabel/.
Note that you can only copy xAOD objects and/or containers. You can determine if the container is of xAOD type by running checkxAOD.py on the xAOD and seeing which objects are of type xAOD::x (where x is the container of interest).
---++ Creating modified container(s)
Sometimes you don't want to just blindly copy the entire contents of a container for every event into a new xAOD; sometimes you want to modify the (content of the) containers themselves. You could imagine you might want to do some of the following:
* *deep copy*: create a new container with the same variables as an existing container, but only for a subset of objects/events passing some selection criteria (example: create a new jet collection that only contains jets with a pt greater than 50 GeV)
* *shallow copy*: create a light-weight container that only "updates" some variables from an original container to new values (example: apply some energy correction and override the existing reconstructed energy)
* *adding new variables*: add variables/attributes to objects (example: adding a variable that identifies the jet as a b-jet)
Each of these are described in more detail below.
Many of the CP tools take advantage of one of these features to either apply corrections to existing containers or copied containers, or to decorate objects with new information.
---+++ Deep copy
Deep copying will create new objects/containers that have all the attributes (aka variables) of the original container. This is useful when you are only interested in objects that pass certain criteria.
You previously saw how to write out the entire original jet container to an output file, and how you can select good jets; let's go one step further and create a new jet container of selected, good jets. First, you need to create the new jet container in your execute() function. For this, remember what you heard about the xAOD EDM in the tutorial: an xAOD container is always described by two objects, an "interface container" that you interact with, and an auxiliary store object that holds all the data. Since we want to create a new container, you need to create both of these objects.
The naming convention for the auxiliary stores is that if an interface container is called xAOD::BlaContainer, then its auxiliary store type is called xAOD::BlaAuxContainer, where Bla is the object of interest.
So for jets this will look like:
%SYNTAX{ syntax="cpp" }%
#include "xAODJet/JetContainer.h"
#include "xAODJet/JetAuxContainer.h"
%ENDSYNTAX%
However, instead of using the JetAuxContainer we will use the generic AuxContainerBase. This is because, if we were using a derivation as input, some of the original (so-called "auxiliary") variables may have been slimmed away (removed to make the container smaller); if we were to do a deep copy of the full JetAuxContainer we would make our container larger than necessary (by creating a bunch of variables that were not even in the original input DxAOD). Instead, add these lines:
%SYNTAX{ syntax="cpp" }%
#include "xAODJet/JetContainer.h"
#include "xAODCore/AuxContainerBase.h"
...
EL::StatusCode MyxAODAnalysis::execute() {
...
// Create the new container and its auxiliary store.
xAOD::JetContainer* goodJets = new xAOD::JetContainer();
xAOD::AuxContainerBase* goodJetsAux = new xAOD::AuxContainerBase();
goodJets->setStore( goodJetsAux ); //< Connect the two
...
for( ; jet_itr != jet_end; ++jet_itr ) {
if( ! m_jetCleaning->accept( **jet_itr ) ) continue;

// Copy this jet to the output container:
xAOD::Jet *jet = new xAOD::Jet();
goodJets->push_back( jet ); // jet acquires the goodJets auxstore
*jet = **jet_itr; // copies aux data from one aux store to the other

}
...
}
%ENDSYNTAX%
Also make sure you have updated your package dependencies in MyAnalysis/cmt/Makefile.RootCore to include:
%SYNTAX{ syntax="cpp" }%
PACKAGE_DEP = EventLoop [...] xAODJet xAODCore
%ENDSYNTAX%
where [...] represents any other package dependencies you may already have included.
Now, to write this new container to a new xAOD output, we assume you have first set up the new xAOD output in EventLoop as described above. Then, still in execute() in the source code, after you have created and filled the new goodJets container, add these lines:
%SYNTAX{ syntax="cpp" }%
// Record the objects into the output xAOD:
ANA_CHECK(event->record( goodJets, "GoodJets" ));
ANA_CHECK(event->record( goodJetsAux, "GoodJetsAux." ));
%ENDSYNTAX%
Of course all of this needs to happen somewhere before the event->fill() call. Also note that you can call the interface container whatever you want (within reason; it should be an alphanumeric name), but the accompanying auxiliary store object must always have the same name, postfixed by "Aux.". So for any "Bla" interface container you would record a "BlaAux." auxiliary container.
Notice that we create the output objects on the heap (with new). This mimics the behaviour of StoreGate. Just like StoreGate, TEvent takes ownership of the objects recorded in it, so you must not delete containers that you have successfully recorded into TEvent.
You can compile and test your code:
<verbatim style="background: #e0ebf6;">
rc compile
cd Run
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
The new output xAOD in submitDir/data-outputLabel/ will now contain our new "GoodJets" container. You can check that it has fewer jets than the original AntiKt4EMTopoJets from which we deep copied.
---+++ Shallow copy
Another way of making copies of containers is called the shallow copy. If you want to apply some sort of modification/calibration to all objects in an input container, but you don't want to perform an object selection at the same time, then the best idea is to make a "shallow copy" of the input container. This is done with the help of xAOD::shallowCopyContainer. Note that you cannot add or remove objects to/from a shallow copy container (but you can decorate it with new variables associated to each object; see decorations in the next section). However, it is absolutely possible to make deep copies of selected shallow copies later on in your code, and put those deep copies into a container of selected objects.
This creates a copy which only overrides default values with new ones that you set, while keeping all other unaffected quantities unchanged from the original. The shallowCopyContainer will return a pair of xAOD objects (one for the interface and one for the auxiliary store).
Let's create a shallow copy of the AntiKt4EMTopoJets and shift the pt of all jets up by 5%. In our source code, MyAnalysis/Root/MyxAODAnalysis.cxx, near the top add the include statement to do this shallow copy:
%SYNTAX{ syntax="cpp" }%
#include "xAODCore/ShallowCopy.h"
%ENDSYNTAX%
Now in execute(), we assume you already have some lines like:
%SYNTAX{ syntax="cpp" }%
const xAOD::JetContainer* jets = 0;
ANA_CHECK(event->retrieve( jets, "AntiKt4EMTopoJets" ));
%ENDSYNTAX%
So, now somewhere below these lines let's create our shallow copy:
%SYNTAX{ syntax="cpp" }%
//--------------
// shallow copy
//--------------
// "jets" jet container already defined above
std::pair< xAOD::JetContainer*, xAOD::ShallowAuxContainer* > jets_shallowCopy = xAOD::shallowCopyContainer( jets );

// iterate over the shallow copy
xAOD::JetContainer::iterator jetSC_itr = (jets_shallowCopy.first)->begin();
xAOD::JetContainer::iterator jetSC_end = (jets_shallowCopy.first)->end();
for( ; jetSC_itr != jetSC_end; ++jetSC_itr ) {
// apply a shift in pt, up by 5%
double newPt = (*jetSC_itr)->pt() * (1 + 0.05);
xAOD::JetFourMom_t newp4(newPt, (*jetSC_itr)->eta(), (*jetSC_itr)->phi(), (*jetSC_itr)->m());
(*jetSC_itr)->setJetP4( newp4); // we've overwritten the 4-momentum
} // end iterator over jet shallow copy
%ENDSYNTAX%
By default when you create a shallowCopyContainer you take ownership of the pair (meaning you need to take care to delete them both). You can give ownership to either the TStore or TEvent, which will then handle deletion for you. If you want to work with the pair internally to your algorithm or pass it around to different algorithms without writing it out to an output xAOD, you should give it to the TStore. If you plan to write it out to an output xAOD you can give ownership to TEvent. Each of these cases is described below:
*Shallow copy: record to output xAOD* <br/>
There are two options here, dictated by how you set the flag setShallowIO:
1. Save a true shallow copy, writing to the output xAOD only the variables you have overwritten, while still pointing to the original container for all other variables. In this case you must also write the original container as previously described ( setShallowIO is true).
1. Save an actual deep copy; in this case you do not need to also write the original container to the xAOD ( setShallowIO is false).
For either case, add these lines below our loop over the shallow copied jets:
%SYNTAX{ syntax="cpp" }%
jets_shallowCopy.second->setShallowIO( false ); // true = shallow copy, false = deep copy
// if true should have something like this line somewhere:
// event->copy("AntiKt4EMTopoJets");
ANA_CHECK(event->record( jets_shallowCopy.first, "ShallowCopiedJets" ));
ANA_CHECK(event->record( jets_shallowCopy.second, "ShallowCopiedJetsAux." ));
%ENDSYNTAX%
*Shallow copy: record to TStore* <br/>
You can record the shallow copy to TStore in a very similar way to how we stored it to TEvent above. First, in the source code in execute() you will need to define an instance of a TStore object (making use of the EventLoop worker object):
%SYNTAX{ syntax="cpp" }%
xAOD::TStore* store = wk()->xaodStore();
%ENDSYNTAX%
Then simply record your shallow copied jet container (and aux container) to the store:
%SYNTAX{ syntax="cpp" }%
ANA_CHECK(store->record( jets_shallowCopy.first, "ShallowCopiedJets" ));
ANA_CHECK(store->record( jets_shallowCopy.second, "ShallowCopiedJetsAux." ));
%ENDSYNTAX%
!EventLoop takes care of clearing the memory for you.
(Tip: At any point you can see what is stored to your TStore by doing store->print().)
Compile like usual and test your code:
<verbatim style="background: #e0ebf6;">
rc compile
cd Run
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
And depending on how you set setShallowIO you will have more or fewer variables in your new xAOD associated to the ShallowCopiedJets. You can try changing the flag, recompiling, and checking the alternative content of the xAOD.
More examples of how this code works can be found here.
---+++ New variables
You might want to modify an object by adding a new attribute (or variable) to its container. There are two ways to do this, depending on the "const" state of the container you want to modify:
* for const objects (for example when adding variables to containers from the input xAOD) you will "decorate" the container using the auxdecor function
* for nonconst objects (for example a shallow or deep copied container) you will add auxiliary data to the container using the auxdata function
Now we will show you a simple example of each below.
*auxdecor* <br/>
Recall we looped over the "AntiKt4EMTopoJets" jet container and simply wrote to screen the jet pt (pretty boring). But now let's decorate this (const) jet container by adding a variable called "mySignal" that just checks if the jet pt is greater than 40 GeV. Modify the loop over the jets so it looks like:
%SYNTAX{ syntax="cpp" }%
// loop over the jets in the container
xAOD::JetContainer::const_iterator jet_itr = jets->begin();
xAOD::JetContainer::const_iterator jet_end = jets->end();
for( ; jet_itr != jet_end; ++jet_itr ) {
Info("execute()", " jet pt = %.2f GeV", ((*jet_itr)->pt() * 0.001)); // just to print out something

if((*jet_itr)->pt() > 40000 ){
( *jet_itr )->auxdecor< int >( "mySignal" ) = 1; // 1 = yes, it's a signal jet!
}
else{
( *jet_itr )->auxdecor< int >( "mySignal" ) = 0; // 0 = nope, not a signal jet
}
} // end for loop over jets
%ENDSYNTAX%
*auxdata* <br/>
Here we will add a variable to the (nonconst) jet container we did the shallow copy of above. In the loop over the shallow copied jets, after we have shifted the pt up by 5% you can add the following line:
%SYNTAX{ syntax="cpp" }%
// iterate over the shallow copy
xAOD::JetContainer::iterator jetSC_itr = (jets_shallowCopy.first)->begin();
xAOD::JetContainer::iterator jetSC_end = (jets_shallowCopy.first)->end();
for( ; jetSC_itr != jetSC_end; ++jetSC_itr ) {
...
// adding a (silly) variable: checking if the shallow copied pt is greater than 40 GeV, after the 5% shift up (classify as signal or not)
if( (*jetSC_itr)->pt() > 40000 ){

( *jetSC_itr )->auxdata< int >( "mySignal" ) = 1; // 1 = yes, it's a signal jet!
}
else{
( *jetSC_itr )->auxdata< int >( "mySignal" ) = 0; // 0 = nope, not a signal jet
}
} // end iterator over jet shallow copy
%ENDSYNTAX%
Here we have added an integer variable called 'mySignal' to the shallow copied jets.
---+++ Future feature: Easy container slimming
As a final ingredient to writing out modified objects, you can select which of their properties should be written to the output file. The xAOD design was based around the idea that objects/containers may be slimmed during the analysis easily.
As you should know, all the data payload of xAOD objects/containers is in their auxiliary store objects. Because of this, the way to specify which variables should be written out is to set a property for the auxiliary store in question, using the xAOD::TEvent::setAuxItemList function. Put something like this into your algorithm's initialize() function:
%SYNTAX{ syntax="cpp" }%
// Set which variables not to write out:
event->setAuxItemList( "AntiKt4EMTopoJetsAux.", "-NumTrkPt1000.-NumTrkPt500" );
// Set which variables to write out:
event->setAuxItemList( "GoodJetsAux.", "JetGhostArea.TrackCount" );
%ENDSYNTAX%
Unfortunately at the time of writing this still has some issues when using multiple input files (the code crashes a few files into the job), but this is how the formalism will look once the code works as intended.
---+ 10. More EventLoop features
!EventLoop is an ASG supported ROOT tool for handling optimized event looping and management. Below are some nice features of using EventLoop in your analysis. One nice feature is the different drivers available to easily switch between running your code locally, on the Grid, using PROOF, or on a batch system.
A more complete description of the tool can be found on the dedicated EventLoop wiki page.
---++ Running on the grid
In this section of the tutorial we will teach you how to run on the grid. There are two main advantages to running on the grid: First, you have access to the vast computing power of the grid, which means even very lengthy jobs can finish within hours, instead of days or weeks. Second, you have direct access to all the datasets available on the grid, which means you don't need to download them to your site first, which can save you hours if not days of time. There are also two main disadvantages: First, for all but the simplest jobs your turnaround time will be measured in hours, if not days. And this time can vary depending on the load at the various grid sites. Second, there are more things beyond your control that can go wrong, e.g. the only grid site with your samples may experience problems and go offline for a day or two thereby delaying the execution of your jobs.
As a first step, set up the Panda client, which will be needed for running on the grid. You should set this up before setting up ROOT, so it's probably best to *start from a clean shell* and issue the commands:
%SYNTAX{ syntax="sh" }%
setupATLAS
lsetup panda
%ENDSYNTAX%
Now navigate to your working area and set up your Analysis Release, following the recommendations above in "What to do every time you log in".
The nice thing about using EventLoop is that you don't have to change any of your algorithm code; you simply change the driver in the steering macro. It is recommended that you use a separate submit script when running on the grid.
Let's copy the content of ATestRun.cxx all the way up to, and including, the driver.submit statement into a new file ATestSubmit.cxx. Don't forget to change the name of the macro at the beginning of the file to ATestSubmit:
%SYNTAX{ syntax="cpp" }%
void ATestSubmit (const std::string& submitDir)
%ENDSYNTAX%
If you did the section on FAX, the configuration for SampleHandler should already be set; otherwise open ATestSubmit.cxx, comment out the directory scan, and instead scan using Rucio (shown below). Note that since we are just testing this functionality we will use a very small input dataset (a SUSY signal point) so your job will run quickly and you can have quick feedback regarding the success (let's hope it's a success) of your job.
%SYNTAX{ syntax="cpp" }%
//const char* inputFilePath = gSystem->ExpandPathName ("$ALRB_TutorialData/p2622/");
//SH::ScanDir().filePattern("DAOD_SUSY1.08377960._000012.pool.root.1").scan(sh,inputFilePath);
SH::scanRucio (sh, "mc15_13TeV.370900.MadGraphPythia8EvtGen_A14NNPDF23LO_GG_direct_200_0.merge.DAOD_SUSY1.e4008_a766_a821_r7676_p2666/");
%ENDSYNTAX%
Next, replace the driver with the PrunDriver:
%SYNTAX{ syntax="cpp" }%
//EL::DirectDriver driver;
EL::PrunDriver driver;
%ENDSYNTAX%
We need to specify a structure for the output dataset name, as our input sample has a very long name, and by default the output dataset name will contain (among other strings) this input dataset name, which is too long for the Grid to handle. So after you've defined the PrunDriver add:
%SYNTAX{ syntax="cpp" }%
driver.options()->setString("nc_outputSampleName", "user.lheelan.test.%in:name[2]%.%in:name[6]%");
%ENDSYNTAX%
where you should replace lheelan with your Grid nickname (usually the same as your lxplus username); %in:name[2]% will insert the dataset ID (MC channel number or run number) and %in:name[6]% the AMI tags (basically we are removing the "physics short" text). Note that if you want to rerun with the same input but a slightly different analysis setting, you will need to come up with a different output dataset name (otherwise you will get an error telling you that the output dataset name already exists): output dataset names must be unique.
The PrunDriver supports a number of optional configuration options that you might recognize from the prun program. If you want detailed control over how jobs are submitted, please consult this page for a list of options: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/EventLoop#Grid_Driver
Finally, submit the jobs as before:
%SYNTAX{ syntax="sh" }%
root -l -b -q '$ROOTCOREDIR/scripts/load_packages.C' 'ATestSubmit.cxx ("myGridJob")'
%ENDSYNTAX%
This job submission process may take a while to complete; do not interrupt it! You will be prompted to enter your Grid certificate password. When all jobs are submitted to panda, a monitoring loop will start. Output histograms for each input dataset will be downloaded as soon as processing of that dataset is completed. You can stop the monitoring loop with ctrl+c and restart it later by calling
%SYNTAX{ syntax="cpp" }%
EL::Driver::wait("myGridJob");
%ENDSYNTAX%
or do a single check on the status of the jobs and retrieve any new output:
%SYNTAX{ syntax="cpp" }%
EL::Driver::retrieve("myGridJob");
%ENDSYNTAX%
See here for more info on how to create a separate retrieve script which could contain any post-processing code that you want to run once the jobs are finished. If you do not want to enter the monitoring loop in the first place you can, at the end of ATestSubmit.cxx, replace driver.submit with:
%SYNTAX{ syntax="cpp" }%
// submit the job, returning control immediately if the driver supports it
driver.submitOnly (job, submitDir);
}
%ENDSYNTAX%
If you need to log out from your computer but you still want output to be continuously downloaded so that it is immediately available when you come back, a somewhat more advanced GridDriver exists which will use Ganga and GangaService to keep running in the background, see the EventLoop twiki page for more info.
You can follow the evolution of your jobs by going to http://bigpanda.cern.ch, clicking on users and then finding yourself in the list.
IDEA! Specifying paths to local files: When running on the grid, your local file system is of course inaccessible. All axuiliary files needed by your job must be placed in the share/ directory in one of your RootCore packages and accessed in your source code using paths of the form $ROOTCOREBIN/data/PackageName/FileName.root. When the package is compiled (via rc compile) RootCore will create a symbolic link in $ROOTCOREBIN/data/PackageName/ to the actual file in the package.
(take a look around inside the RootCore directory after compiling to see how symlinks on this form are automatically created). It is a good practice to always use this system, as it works with all drivers. For example, with the Good Runs List we showed you earlier, copy the GRL from it's afs area to your area: <br/>
=MyAnalysis/share/= <br/>
( $ROOTCOREBIN should be defined after you set up the Analysis Release; you may need to manually create the share/ directory). Then in your source code, in the initialization of the GRL tool, specify the path to this file with something like (put these lines in the appropriate places):
%SYNTAX{ syntax="cpp" }%
const char* grlFilePath = "$ROOTCOREBIN/data/MyAnalysis/data15_13TeV.periodAllYear_DetStatus-v73-pro19-08_DQDefects-00-01-02_PHYS_StandardGRL_All_Good_25ns.xml";
const char* fullGRLFilePath = gSystem->ExpandPathName (grlFilePath);
vecStringGRL.push_back(fullGRLFilePath);
ANA_CHECK(m_grl->setProperty( "GoodRunsListVec", vecStringGRL));
%ENDSYNTAX%
Don't forget to recompile!
*Note: If you are using the compiled application to run your macro, you need to add two things:*
* to MyAnalysis/cmt/Makefile.RootCore add EventLoopGrid to the list of PACKAGE_DEP
* in your compiled steering macro MyAnalysis/util/testRun.cxx add this near the top with the other header include statements: =#include "EventLoopGrid/PrunDriver.h"=
Don't forget to run rc find_packages before compiling since you've just updated the package dependencies!
If you need more information on options available for running on the grid, check out the grid driver documentation.
---++ Using TTreeCache
In most cases the speed of your jobs will be dominated by how fast you can read your input data, so it is often worthwhile to try to maximize that read speed.
One way to improve read performance is to use TTreeCache, which will predict the data that you are likely to access and preload it in big chunks. This is mostly important when reading files over the network, and it is virtually mandatory in case you read the files directly from the grid (see the next section on FAX).
Using TTreeCache with EventLoop is very straightforward: just specify the size of the cache you would like for your job before submitting it (in this case 10MB):
%SYNTAX{ syntax="cpp" }%
job.options()->setDouble (EL::Job::optCacheSize, 10*1024*1024);
%ENDSYNTAX%
The default way in which TTreeCache predicts what data your job will access is by looking at the first n events and assuming that the pattern will hold for the rest of your job. If you want to, you can change the number of events read during this learning phase to suit your needs:
%SYNTAX{ syntax="cpp" }%
job.options()->setDouble (EL::Job::optCacheLearnEntries, 20);
%ENDSYNTAX%
You may have to play around a little to determine which number works best for you. There is a definite tradeoff: too large a number means that you read too many events without the benefit of the cache, while too small a number means that variables you do not access during the learning phase will never be cached.
---++ Using FAX
FAX is a mechanism that allows you to read data directly from the grid without downloading it first. Depending on your usage pattern this may not only be more convenient, but also faster than downloading the files first: if you download a file, you have to download it in its entirety, whereas if you read the file directly from the grid you may read only a small fraction of it. So if you are reading the file only once (or only a few times) this may be an option to improve your analysis workflow.
%W% Warning: While not strictly necessary, it is strongly recommended that you use TTreeCache when using FAX, otherwise your performance is likely to be very poor. So if you haven't done so already, you should work through the section above on TTreeCache first.
*Exit your shell and start a new session on lxplus (or your working environment).* As always, we have to set up the ATLAS environment:
%SYNTAX{ syntax="sh" }%
setupATLAS
%ENDSYNTAX%
Now in your new shell source the environments to use the FAX tools and establish a voms proxy:
%SYNTAX{ syntax="sh" }%
lsetup fax
voms-proxy-init -voms atlas
%ENDSYNTAX%
When prompted, enter your grid certificate password. The first line here takes care of setting up FAX and other necessary tools. If you do not work on lxplus you may need to do something different. The second line will initialize the VOMS proxy that holds your grid certificate. If you have problems with that command you should ask your resident grid expert - there are too many things that can go wrong with it to discuss them here.
Navigate to your working area, and from there set up your Analysis Release, following the recommendations above in What to do every time you log in.
Now it's time to actually use FAX. For that, in =ATestRun.cxx= comment out the part where we scan the local directory for samples, and instead scan Rucio:
%SYNTAX{ syntax="cpp" }%
//const char* inputFilePath = gSystem->ExpandPathName ("$ALRB_TutorialData/p2622/");
//SH::ScanDir().filePattern("DAOD_SUSY1.08377960._000012.pool.root.1").scan(sh,inputFilePath);
SH::scanRucio (sh, "mc15_13TeV.370900.MadGraphPythia8EvtGen_A14NNPDF23LO_GG_direct_200_0.merge.DAOD_SUSY1.e4008_a766_a821_r7676_p2666/");
%ENDSYNTAX%
(Note we are using a small SUSY input dataset so your test job can run quickly.)
That should do it. You can now run your script the same way you did before, and with a little luck you will get the same result as before. The initialization will take a little longer, as this will actually query rucio to find the datasets matching your request, and then again for each dataset to locate the actual files. However, compared to the overall run-time this overhead should be small, and the power you gain is most likely worth it.
---++ Using PROOF Lite
Warning: PROOF-Lite is not currently working with ROOT6. A jira ticket has been filed here:
https://its.cern.ch/jira/browse/ATLASG-26
PROOF Lite is a fairly effective and easy way of improving the performance of your analysis code. PROOF Lite will execute your code in parallel on every CPU core available on your local machine, so if your machine has 16 cores, your analysis will run up to 16 times as fast. The actual speed-up will depend on a variety of things: the number of cores in your machine, the speed of your hard drive, how you read your data, the amount of processing you do for each event, etc.
Using PROOF Lite is very straightforward: you just change the driver statement in ATestRun.cxx to:
%SYNTAX{ syntax="cpp" }%
EL::ProofDriver driver;
%ENDSYNTAX%
That's it. And run as usual:
<verbatim style="background: #e0ebf6;">
root -l '$ROOTCOREDIR/scripts/load_packages.C' 'ATestRun.cxx ("submitDir")'
</verbatim>
Note you may get an error like:
<verbatim style="background: #e0ebf6;">
Function SETUP_e89fa50d() busy. loaded after "/cvmfs/atlas.cern.ch/repo/sw/ASG/AnalysisBase/2.0.14/RootCore/scripts/load_packages.C"
Error: G__unloadfile() Can not unload "/cvmfs/atlas.cern.ch/repo/sw/ASG/AnalysisBase/2.0.14/RootCore/scripts/load_packages.C", file busy ...
Note: File "/cvmfs/atlas.cern.ch/repo/sw/ASG/AnalysisBase/2.0.14/RootCore/scripts/load_packages.C" already loaded
</verbatim>
But it does not cause any problems.
There are a couple of practical changes though. The main one is that any messages your job prints while processing events no longer appear on the screen. Instead they go into a log file in the ~/.proof directory hierarchy. In general it is a little hard to debug jobs in PROOF, so if something goes wrong there is an advantage to running with DirectDriver instead.
For PROOF farm support see here. (Note I have not tested this with the xAOD ... yet.)
---++ Using Lxbatch and Other Batch Systems
If you work on lxplus, you can run your job on the local batch system: lxbatch. This allows you to run on many nodes in parallel, improving the turnaround time of your job drastically. If you work at your institution's Tier-3 site you may be able to do something similar there by using the correct driver for your batch system. For lxbatch just change the driver statement in ATestRun.cxx to:
%SYNTAX{ syntax="cpp" }%
EL::LSFDriver driver;
driver.options()->setString (EL::Job::optSubmitFlags, "-L /bin/bash"); // or whatever shell you are using
driver.shellInit = "export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase && source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh";
%ENDSYNTAX%
Note the shellInit parameter, which is used to run the ATLAS local setup on each of the worker nodes. If you run somewhere other than lxplus you will have to update that. Other supported batch drivers can be found here: <br/>
*Note: If you are using the compiled application to run your macro, you need to do one thing:*
* in your compiled steering macro MyAnalysis/util/testRun.cxx add this near the top with the other header include statements: =#include "EventLoop/LSFDriver.h"=
---++ Variable usage statistics
If you would like to know how often each of the variables inside the xAOD are accessed by your analysis code, you can simply add the following lines to your steering macro (ATestRun.cxx) after you have defined the =EL::Job= object:
%SYNTAX{ syntax="cpp" }%
// print to screen information about xAOD variables used
job.options()->setDouble (EL::Job::optXAODPerfStats, 1);
job.options()->setDouble (EL::Job::optPrintPerFileStats, 1);
%ENDSYNTAX%
When you run the code you will get a long printout listing all the variables inside the xAOD and how many times you've accessed each of them. You probably don't want to run with this enabled by default, but it might be useful if you are trying to determine which variables to save to a derived xAOD.
---+ 11. More SampleHandler features
This section describes some nice features of using SampleHandler in your ROOT analysis. SampleHandler is used to manage the large numbers of data and MC samples typically used in an analysis. It can assign metadata to the samples (luminosity, colour schemes, etc.) that is easily retrieved by simply referring to the sample.
A complete description of this tool can be found on the dedicated SampleHandler wiki.
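As a rough illustration of the metadata mechanism described above, you can attach values to a sample when you set up your sample list and read them back later. The sample name, field names, and values below are purely hypothetical; see the SampleHandler wiki for the authoritative interface:
%SYNTAX{ syntax="cpp" }%
// in your steering macro, after filling the SampleHandler "sh":
// look up a sample by name and attach some metadata (names/values made up)
SH::Sample *sample = sh.get ("mc15_13TeV.370900.MadGraphPythia8EvtGen_A14NNPDF23LO_GG_direct_200_0.merge.DAOD_SUSY1.e4008_a766_a821_r7676_p2666");
sample->setMetaDouble ("lumi", 3.2);            // e.g. integrated luminosity
sample->setMetaString ("title", "GG direct");   // e.g. a legend label

// ... later, read it back by simply referring to the sample:
double lumi = sample->getMetaDouble ("lumi");
%ENDSYNTAX%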
---+ 12. More example code
* Example xAOD example code: CPAnalysisExamples
* group-dependent analysis frameworks: RunIIAnalysisFrameworks
---+ 13. Notes and caveats
---++ Systematics
This is a brief summary of how working with systematics and CP tools is expected to work.
All CP tools that implement some sort of systematic variation(s) will be required to implement a common interface. A first version of this is available here. This will first of all make it simpler to hard-code the usage of systematics into the analysis code if the user wants to do that, since all CP tools will provide the same interface for setting them up to use different systematics.
Then, the tools will be required to "register themselves" in a central registry. This registry will know about all the CP tools that have been created by the analysis code, and will hold a central list of all the possible systematic variations that can be applied to them. The user will then be able to interact with this singleton registry alone to set up their code for applying different systematics, without having to interact with each CP tool one by one. This registry is being developed here, but is not ready for prime time just yet.
An example of a tool setting up the systematics registry, and user code using this tool and associated systematic is found in the package PhysicsAnalysis/AnalysisCommon/CPAnalysisExamples and is described on this wiki: https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CPAnalysisExamples
---++ Checklist: Accessing EDM objects
If you want to access and loop over EDM containers (electrons, taus, etc.), there is a general prescription, similar to what has been shown above:
* Source code =MyAnalysis/Root/MyxAODAnalysis.cxx=
* in the execute() method (that loops event-by-event) retrieve the container type and key name, and iterate over the contents of that container
* at the very top, include the header file to the xAOD EDM container class of interest
* Makefile =MyAnalysis/cmt/Makefile.RootCore=
* add the appropriate package to the PACKAGE_DEP list to point the compilation to the right location of the xAOD EDM package (usually somewhere in =Event/xAOD=)
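The prescription above can be sketched for muons (the container key "Muons", the xAODMuon package, and the =event= pointer - assumed here to be your xAOD::TEvent pointer - are the pieces you would swap for your object of choice):
%SYNTAX{ syntax="cpp" }%
// at the top of MyAnalysis/Root/MyxAODAnalysis.cxx:
#include "xAODMuon/MuonContainer.h"

// inside execute(), retrieve the container by type and key, then loop:
const xAOD::MuonContainer* muons = nullptr;
ANA_CHECK (event->retrieve (muons, "Muons"));
for (const xAOD::Muon* muon : *muons) {
  Info ("execute()", "  muon pt = %.2f GeV", muon->pt() * 0.001);
}
%ENDSYNTAX%
To make this compile you would also add xAODMuon to the PACKAGE_DEP list in MyAnalysis/cmt/Makefile.RootCore and rerun rc find_packages.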
---++ Note: xAODVertex
The xAODVertex package is deprecated. You can access xAOD::Vertex through the xAODTracking package:
%SYNTAX{ syntax="cpp" }%
#include "xAODTracking/VertexContainer.h"
%ENDSYNTAX%
---++ Note: Including xAOD headers
ROOT CINT has some trouble with the xAOD code.
If you include an xAOD header file in your algorithms header file, like adding the following to your MyAnalysis/MyAnalysis/MyxAODAnalysis.h:
%SYNTAX{ syntax="cpp" }%
#include "xAODJet/JetContainer.h"
%ENDSYNTAX%
You will receive an extremely cryptic error message upon compiling (rc compile); something to the effect of:
<verbatim style="background: #e0ebf6;">
Error: class,struct,union or type __int128 not defined /afs/cern.ch/user/l/lheelan/public/ROOT/MT_July2014_2.0.3/RootCoreBin/include/boost/config/suffix.hpp:496:
...
Error: Symbol hash_node<unsigned long,0> is not defined in current scope /afs/cern.ch/user/l/lheelan/public/ROOT/MT_July2014_2.0.3/RootCoreBin/include/CxxUtils/hashtable.h:347:
</verbatim>
This is ROOT CINT attempting to make a dictionary for your algorithm but having trouble dealing with the xAODJet EDM.
The moral of this story is: do not include the xAOD EDM header files in any header that you want to use for CINT dictionary generation (such as your algorithm's header). You can include them in your source (.cxx) files.
If you absolutely need access to these classes in your header file, you should hide these includes from CINT. With something like this:
%SYNTAX{ syntax="cpp" }%
#ifndef __MAKECINT__
# include "xAODJet/Jet.h"
#endif // not __MAKECINT__
...
#ifndef __MAKECINT__
void someFunction( const xAOD::Jet& jet ) const;
#endif // not __MAKECINT__
%ENDSYNTAX%
Note that this way you can have functions that take or return xAOD types, but you can't really declare an xAOD member variable in your class. (At least, it can only be done very carefully.) In any case, declaring an xAOD member variable in an EventLoop algorithm doesn't seem like a smart idea anyway.
Forward declaring xAOD types could in principle be another way to go, but that's not any simpler than hiding the includes. And that also results in your code explicitly depending on a certain version of the xAOD EDM.
A final note: this will not be a problem with ROOT6. Hopefully soon (timescale?) the Analysis Release will switch to a ROOT6-based release and this problem will be solved (as dictionary generation will then be done with clang instead of CINT).
---++ Note: TEvent and TStore
For a container to be in xAOD::TEvent it must either come from an input file or be headed for an output file; TEvent does not act as a generic whiteboard the way StoreGate does in Athena. If you want to work with containers in your analysis that are not in xAOD::TEvent (i.e. not associated with the input or output file - transient data) you can record them to xAOD::TStore. When objects/containers are recorded to xAOD::TStore it takes ownership of them and handles the memory management (the deletion) for you.
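For example, a new container created during execute() can be handed over to TStore, which then owns it. This is only a sketch: =m_store= is assumed here to be a pointer to your algorithm's xAOD::TStore object, and the record keys are arbitrary:
%SYNTAX{ syntax="cpp" }%
// create a new container plus its auxiliary store
xAOD::JetContainer* goodJets = new xAOD::JetContainer();
xAOD::JetAuxContainer* goodJetsAux = new xAOD::JetAuxContainer();
goodJets->setStore (goodJetsAux); // connect the two

// record both to TStore, which takes ownership - no manual delete needed
ANA_CHECK (m_store->record (goodJets, "MyGoodJets"));
ANA_CHECK (m_store->record (goodJetsAux, "MyGoodJetsAux."));
%ENDSYNTAX%
Note the convention of recording the auxiliary container under the same key with an "Aux." suffix.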
---+ Getting Help
If you have questions about this tutorial or doing analysis on an xAOD please post your questions to the following mailing list: <br/>
hn-atlas-PATHelp@cern.ch
Sometimes it is best to browse the code to figure out what's going on. There are two places I find useful to search for code:
* LXR which allows you to browse code which is in some official Athena version. This can be useful as most (all?) of the CP tools are dual-use, meaning they exist in Athena-land too.
* svn offline browser from which you can navigate through the packages to find the code of interest. The challenge here is knowing in which package the code you are interested in lives. After setting up your analysis release you can type rc version to see the package layout (and tags) of all the packages included in the release. This svn offline browser holds the code for both Athena and the Analysis Releases.
---+ If you've got this far...
... Congratulations! :-)
And if you have got this far, feel free to start migrating your own analysis code, and take advantage of the experts around to help you!!!
---+ Older updates to this tutorial
05.08.2015:
* fixed typo in arguments of JER tool getUncertainty (thanks O.Brandt)
28.07.2015:
* updated to 2.3.21
* updated commands necessary to submit to the Grid
14.07.2015:
* updated implementation of systematics to use make_systematics_vector to create list of systematics to run over
* updated to 2.3.18
* this required updating the instantiation of the jet energy resolution tool
* this required updating the initialization of properties in the jet cleaning tool to set the property "DoUgly" and use one of the two CutLevels to either "LooseBad" or "TightBad" (previously had "MediumBad" which is not an option anymore)
25.06.2015:
* using Analysis Base 2.3.15
08.05.2015:
* updated to MC15 input samples, and correspondingly to Analysis Base 2.3.11 (some hints on updating your analysis code from DC14/Release19 samples to MC15/Release20 can be found here)
07.05.2015:
* removed the forward declarations to just including the regular old xAOD headers, as we are using ROOT6 there are no more CINT (ROOT5) problems.
26.03.2015:
* updating to use 2.1.29
* using checkxAOD.py script in the Analysis Release (2.X.Y, where Y>=29) to find the container key names and types (instead of checkSG.py in Athena)
19.03.2015:
* changed the xAOD::TEvent pointer, previously was member variable m_event, now there is one created in each method where it is used (better coding practice, improves flexibility) (details)
* added EL_CHECK_RETURN macro, instead of the mix-and-match status code check previously implemented (details)
27.02.2015:
* Using 2.1.27 (ROOT6!!!); a description of some of the differences can be found here
17.02.2015:
* Using 2.1.25 (ROOT6!!!); a description of some of the differences can be found here
* Currently the instructions below will work in both ROOT5 (Analysis Release 2.0.X) and ROOT6 (Analysis Release 2.1.X) - this may change shortly.
* New xAOD samples: Data with AMI tag p1814 and DC14 MC 13 TeV with AMI tag r5787. These samples should have the correct trigger information.
* Use EventLoop to only run over the first 500 events.
* FYI: The last stable revision of this tutorial (2.0.22 and older xAODs was r112)
---
Major updates:
-- LouiseHeelan - 21 May 2014
%RESPONSIBLE% AmalVaidya
%REVIEW% *Never reviewed*
Topic revision: r1 - 2016-12-06 - AmalVaidya
 