DQ2 Things

dq2 is handy software for copying files (things like AODs, ESDs, EVNTs, TAGs...) off the grid. To set it up, do this:
source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
voms-proxy-init -voms atlas
export DQ2_LOCAL_SITE_ID=UKI-LT2-UCL-CENTRAL_LOCALGROUPDISK
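Once the setup script has been sourced, a quick sanity check of the environment can save confusion later. A minimal sketch (the voms-proxy-info call is commented out since it only works on a machine with the grid middleware installed):

```shell
# Sanity-check the DQ2 environment before running any dq2 commands.
# The DQ2_LOCAL_SITE_ID value is the one set above.
export DQ2_LOCAL_SITE_ID=UKI-LT2-UCL-CENTRAL_LOCALGROUPDISK
echo "Local site: $DQ2_LOCAL_SITE_ID"
# voms-proxy-info -timeleft   # uncomment on a grid UI: seconds left on your proxy
```

If the proxy has expired, dq2 commands tend to fail with unhelpful errors, so checking the remaining lifetime first is worth the extra line.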
Then there are two main commands you'll probably feel like using:
dq2-ls -g <dataset name>
The dq2-ls command lists the files in a dataset; dq2-get fetches them. The -g flag tells it to look in the global catalogue, -r says look remotely, and -v says tell me what the hell you're doing.
Dataset names are either official things like mc12.005200.T1_McAtNlo_Jimmy.evgen.EVNT.v12000401 or your own job output like user.adamdavison.005667.ntuples.v6. You also sometimes want a whole dataset rather than just one or two files for testing; then you just do:
dq2-get -r -v <dataset name>
And hope you've got enough free disk space.
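Rather than hoping, you can check first. A small sketch using plain POSIX tools (the dataset name in the comment is just the example from above):

```shell
# Report free space in the current (destination) directory before a
# whole-dataset dq2-get. df -Pk gives POSIX-format output in kB, and
# awk picks the "Available" column from the single data line.
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
echo "Free space here: ${free_kb} kB"
# If that looks big enough, then e.g.:
# dq2-get -r -v user.adamdavison.005667.ntuples.v6
```

Comparing that number against the dataset size from dq2-ls before starting a multi-hundred-file transfer beats finding out halfway through.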
dq2-register-location users.jamesmonk.test IN2P3-CC_DATADISK
(you can find out where the original dataset was present by using dq2-list-dataset-replicas)
Since that's quite a lot to do if you plan on using data from very many datasets (just one FDR run was over 700 files), I have a little script that creates a new combined dataset for you. You just create a file called users.$USERNAME.whatever.you.want and list in it all the datasets you'd like. Then run ./user_dataset.sh users.$USERNAME.whatever.you.want and it creates a new dataset with the same name, containing all of the data from the datasets listed in that file.
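For reference, a minimal dry-run sketch of what such a combining script might do. It only prints the steps it would take rather than running any dq2 command, and dq2-register-dataset is an assumption on my part; only dq2-register-location and dq2-list-dataset-replicas are shown above:

```shell
# Dry-run sketch of a user_dataset.sh-style combiner (hypothetical).
# Prints the steps instead of executing them; dq2-register-dataset is
# assumed to exist alongside the dq2-register-location command above.
combine_datasets() {
  list="$1"                      # e.g. users.$USERNAME.whatever.you.want
  new_ds=$(basename "$list")     # the new dataset takes the file's name
  echo "dq2-register-dataset $new_ds"
  while IFS= read -r ds; do
    [ -n "$ds" ] || continue     # skip blank lines in the list file
    echo "copy file entries of $ds into $new_ds"
  done < "$list"
}
```

Running combine_datasets on a list file prints one line per dataset; the real script replaces the echoes with the actual DQ2 registration calls.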