Running a Grid Job using GANGA
Ganga is a frontend tool for job definition and management with access to all grid infrastructure supported by ATLAS. Detailed information about GANGA can be found at http://documentation.hepcg.org/res/ap3/w_301106.pdf
How to setup GANGA
(a) Setup Grid environment

In order to use GANGA you should run the following commands in a clean shell. Set up the grid environment and create a new proxy:
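The exact commands are site dependent; a minimal sketch, assuming the standard CERN AFS grid environment script and an ATLAS VOMS proxy:

# set up the grid middleware (CERN AFS path shown as an example; use your site's equivalent)
source /afs/cern.ch/project/gd/LCG-share/current/etc/profile.d/grid-env.sh
# create a fresh proxy with the ATLAS VOMS extension
voms-proxy-init -voms atlas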
(b) Setup Athena
Set up Athena as you would normally (for me this is):
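What this looks like depends on your own CMT configuration; an illustrative sketch, based on the cmthome-style setup used in an earlier version of this page, might be:

# illustrative only - use your own cmthome setup, release tag and test area
source ~/cmthome/setup.sh -tag=15.6.7
cd ~/athena/15.6.7/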
(c) Setup GANGA

Now source the GANGA setup:
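Where the setup script lives depends on the installation; a sketch assuming the ATLAS GANGA installation on CERN AFS:

# source the ATLAS flavour of the GANGA setup (AFS path is an assumption; adjust to your installation)
source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh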
Running GANGA
Change directory to the run directory of whichever package you are working on and then start ganga with:
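ganga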
The ganga command line is a Python shell which can be used to submit jobs. Sample job scripts are shown below. These can either be typed line by line into the GANGA shell or saved as a file and executed from the GANGA shell with:
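For example, if the script is saved as myjob.py (the file name is just an illustration):

execfile('myjob.py')

A script can also be run non-interactively by passing it to ganga when you start it, e.g. ganga myjob.py.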
Using GANGA with the LCG backend
config['LCG']['MatchBeforeSubmit'] = True
j = Job()
j.application = Athena()
j.name = 'JES.ESD.J1'
j.application.option_file = [ '/home/robinson/athena/DiJets/15.6.7/PhysicsAnalysis/DiJets/share/jobOptions.ForwardJES.py' ]
j.application.athena_compile = True
j.application.atlas_release = '15.6.7'
j.application.prepare()
j.inputdata = DQ2Dataset()
j.inputdata.dataset = [ 'mc09_7TeV.105010.J1_pythia_jetjet.recon.ESD.e468_s766_s767_r1206/' ]
j.outputdata = DQ2OutputDataset()
j.outputdata.outputdata = [ 'ForwardJES.root' ]
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 500
j.backend = LCG()
j.backend.requirements.cloud = 'IT'
j.submit()
The most important options here are the job options file (option_file), the Athena release (atlas_release), the input dataset (inputdata.dataset), the output files (outputdata.outputdata), the number of subjobs (splitter.numsubjobs) and the cloud the job is sent to (backend.requirements.cloud).
Using the Panda backend

The Panda backend has to be used for jobs sent to US sites. It requires a slightly different form of job submission script. A sample job script is shown here:

j = Job()
j.application = Athena()
j.name='PTResolution.LowPT.SmallEta.J5'
j.application.option_file=[ '/home/robinson/athena/15.6.0/PhysicsAnalysis/ForwardJets/run/jobOptions.PTResolution.LowPT.SmallEta.py' ]
j.application.athena_compile = True
j.application.atlas_release='15.6.0'
j.application.prepare()
j.inputdata=DQ2Dataset()
j.inputdata.dataset=[ 'mc08.105014.J5_pythia_jetjet.merge.AOD.e344_s479_s520_r809_r838/' ]
j.outputdata=DQ2OutputDataset()
j.splitter=DQ2JobSplitter()
j.splitter.numsubjobs=500
j.backend=Panda()
j.submit()

Using the NorduGrid backend

If submitting jobs to NorduGrid in the Netherlands there is only one cloud. Therefore your script needs to be of the following form, changing the backend to 'NG':

j = Job()
j.application = Athena()
j.name='PTResolution.LowPT.SmallEta.J5'
j.application.option_file=[ '/home/robinson/athena/15.6.0/PhysicsAnalysis/ForwardJets/run/jobOptions.PTResolution.LowPT.SmallEta.py' ]
j.application.athena_compile = True
j.application.atlas_release='15.6.0'
j.application.prepare()
j.inputdata=DQ2Dataset()
j.inputdata.dataset=[ 'mc08.105014.J5_pythia_jetjet.merge.AOD.e344_s479_s520_r809_r838/' ]
j.outputdata=DQ2OutputDataset()
j.outputdata.outputdata=['PTResolution.root']
j.splitter=DQ2JobSplitter()
j.splitter.numsubjobs=500
j.backend=NG()
j.submit()
Useful GANGA python shell commands
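A few commands that tend to be useful in the GANGA shell (this is a general sketch of standard GANGA usage; the job id 42 is just an example):

jobs                # list all of your jobs and their statuses
j = jobs(42)        # retrieve the job with id 42
j.status            # show the current status of that job
print j.subjobs     # inspect the subjobs created by the splitter
j.kill()            # kill a running job
j.resubmit()        # resubmit a failed job
j.remove()          # delete the job from the repository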
Common GANGA Problems

The datasets belonging to the container that you want to run on must all be present on the same cloud (although not necessarily at the same site). You can check where datasets are available by running:
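One way to do this is with the DQ2 end-user tools (a sketch, assuming the dq2 clients are set up in your shell; the dataset name is the one from the LCG example above):

# list the sites holding replicas of the dataset, and whether each replica is complete
dq2-ls -r mc09_7TeV.105010.J1_pythia_jetjet.recon.ESD.e468_s766_s767_r1206/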
If the data is not present, you can go here to request replication: http://panda.cern.ch:25980/server/pandamon/query?mode=ddm_req
The Athena version that you request must be present at all sites that your job is sent to. You can check which versions are available at which sites by running:
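One option is the gLite information system client (a sketch; the release tag shown matches the 15.6.7 example above):

# list computing elements publishing the ATLAS 15.6.7 release tag
lcg-info --vo atlas --list-ce --query 'Tag=VO-atlas-release-15.6.7'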
OLD BUT MAY STILL BE RELEVANT
Sandbox fun

* Input Sandbox:
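For reference, sandboxes are set per job in GANGA; a general sketch (the file names are just examples):

j.inputsandbox = [ 'extraSteeringFile.txt' ]   # small local files shipped alongside the job
j.outputsandbox = [ 'analysis.log' ]           # small files returned directly with the job output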
Making your jobs actually work

Most likely when you first try to submit grid jobs you will encounter lots of problems with your job being sent to a site where the dataset you asked for is empty. In general this is related to the fact that the resource broker doesn't really understand the concept of incomplete datasets and handles them badly. On the LCG your best bet is to try to find out where your dataset is available and send the job there yourself. The first port of call here is AMI, the ATLAS Metadata Interface: http://lpsc1168x.in2p3.fr:8080/opencms/opencms/AMI/www/index.html

The grid information system can also tell you which storage elements are close to each site:

lcg-infosites --vo atlas closeSE

Update: the GANGA people now have this wiki page, which looks extremely helpful: https://twiki.cern.ch/twiki/bin/view/Atlas/DAGangaFAQ

What's easier than all this is to use NorduGrid instead, which allows data to travel to the node where your job is (to some extent). On NorduGrid the splitter element of the job setup seems to be important, and setting the number of subjobs to the number of files in the dataset seems to produce the best results (fewest failed subjobs).