The aims of my work are:
Julian
28/02/02
$ mkdir ${HOME}/GCC
$ mv gcc-alt-2.95.2.tar.gz ${HOME}/GCC
$ cd ${HOME}/GCC ; gunzip gcc-alt-2.95.2.tar.gz ; tar xf gcc-alt-2.95.2.tar
$ gmake
If you get an error like this, then chances are your system's glibc version necessitates the application of the aforementioned patch, which is applied as follows:
$ cd ${HOME}/GCC
$ cat glibc-2.2.patch | patch -p0
After which you can retry the gmake step. Then install the compiler and clean up.
$ gmake install
$ cd .. ; rm -r gcc-2.95.2 gcc-alt-2.95.2.tar glibc-2.2.patch
After which you should have a working compiler taking up about 43MB. Now you just need somewhere for the matching version of the standard C++ library, so download this file and untar it in your $HOME directory.
If you haven't got cvs available locally, download this source distribution from cvshome.org. into your ${HOME} and do:
$ gunzip cvs-1.11.1p1.tar.gz
$ tar xf cvs-1.11.1p1.tar
$ cd cvs-1.11.1p1
$ mkdir ${HOME}/CVS
$ ./configure --prefix=${HOME}/CVS
$ gmake ; gmake install
This sets up cvs in ${HOME}/CVS, but cvs is rather lightweight and you may as well just copy the executable into your ${HOME}/bin directory, which should then be in your path, and get rid of the ~/CVS directory if you like.
First, create a file ${HOME}/.cvsrc containing the following lines:
cvs -z9
diff -u
update -d
Next, create a file ${HOME}/.cvsup/auth containing:
atlas-sw.cern.ch:atlasclient@anywhere.:insider:
Next, put the following lines in your login script, so that you can guarantee that the environment variables mentioned are always set in your shell. Which file you use is shell dependent.
# Atlas Settings
export SITEROOT=/home/jpp/MyAtlas
export CMTVERS=v1r10p20011126
export CMTBASE=${HOME}/CMT
export CMTROOT=${CMTBASE}/${CMTVERS}
export CMTATLAS=${SITEROOT}/Atlas/2.4.1
export CMTGAUDI=${SITEROOT}/Gaudi
source ${CMTROOT}/mgr/setup.sh
cmt config
source ${HOME}/setup.sh
export CVSROOT=:pserver:ATLASUSERID@atlas-sw.cern.ch:/atlascvs
Anyone with an Atlas account should be able to use the kserver, but that requires a computer with a correctly installed kauth executable. To use the pserver, as I did, you need to follow the instructions on this page to get access.
You may notice that the CMTROOT variable is now being defined twice, once at login and again when sourcing ${CMTROOT}/mgr/setup.sh. I prefer to comment out the latter definition and keep only the login one, to avoid any possible confusion in the future.
The SITEROOT variable should correspond to a directory on a disk with several GB available as this is where all of the software will sit.
You now need to execute
$ cvs login
and enter your password, which will then be stored in scrambled form in a private local file (${HOME}/.cvspass). If you don't get an error message here, it has worked.
Finally, put a copy of this requirements file into your home directory. This file will be read when cmt config is executed in your .bashrc or equivalent file. Edit it to correspond to where you have chosen for $SITEROOT and the compiler. The variable $CMTTEST defines where your local check-outs of Atlas packages for local recompilation will be put. For simplicity I have it under $SITEROOT, but obviously it could be anywhere.
Now comment out the line in requirements specifying CVSROOT. This refers to your local copy of the Atlas CVS repository which you don't have yet, so make it read like:
# Put this at the end...
# set CVSROOT=/home/jpp/MyAtlas/AtlasCVS
set CVSROOT=:pserver:ATLASUSERID@atlas-sw.cern.ch:/atlascvs
At this point you should start a new window to make sure the setup files created so far are read in. You may even need to start a whole new session depending on how your shell treats login shells and which setup file you use.
$ cd ${SITEROOT}
$ mkdir Install ; cd Install
and download the following cvsup steering files to that directory: atlas.sup, ext.sup, gaudi.sup, sw.sup. Edit these files to change from /home/jpp/MyAtlas to whatever your chosen ${SITEROOT} is. Make sure all of the directories which are specified by prefix or base in these files exist, then execute the following:
$ cvsup -g atlas.sup >& atlas.txt
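The edit of the .sup files mentioned above can be done with sed. Here is a sketch, shown on a scratch copy with a made-up ${SITEROOT} of /scratch/MyAtlas; substitute your own location and run the same loop over atlas.sup, ext.sup, gaudi.sup and sw.sup:

```shell
# Rewrite the hard-coded /home/jpp/MyAtlas prefix in the .sup files
SITEROOT=/scratch/MyAtlas                 # assumption: your chosen area
mkdir -p /tmp/supdemo && cd /tmp/supdemo  # sandbox for the demo
echo 'base=/home/jpp/MyAtlas/sw' > atlas.sup
for f in *.sup ; do
    sed "s#/home/jpp/MyAtlas#${SITEROOT}#g" $f > $f.new && mv $f.new $f
done
cat atlas.sup                             # base=/scratch/MyAtlas/sw
```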
You should get output similar to this, but the details will change as the Atlas cvs repository changes. You can now start using your own copy of the cvs repository by changing the comment lines in your ${HOME}/requirements:
# Put this at the end...
set CVSROOT=/home/jpp/MyAtlas/AtlasCVS
# set CVSROOT=:pserver:ATLASUSERID@atlas-sw.cern.ch:/atlascvs
with $CVSROOT set for your environment. Now you can complete the cvs/cmt setup by getting the cvs plugin for cmt, according to the instructions on the cmt page. Basically, you are creating an entry in your copy of the Atlas CVS repository which specifies where the plugin is, to stop it looking in /afs/cern.ch/sw/contrib/CMT/cmtcvs/v1/i386_linux22/cmtcvs.exe. As Jim points out, this bit needs to be redone every time you re-check-out the Atlas CVS repository, and he discusses a potential solution. There is another problem with this procedure: the permissions set in your local copy of the Atlas CVS repository will not allow you to commit your changes, instead giving you an unhelpful "Insufficient Karma" error. You can get around this by editing the $SITEROOT/AtlasCVS/CVSROOT/commitavail file to include yourself, with a line like this:
# These need to access everything and/or CVS administrative files
avail | alibrari, helge, poulard, carnault, schaffer, dquarrie, jpp
With this change you should be able to commit to your local CVS.
Downloading the Software
You're now ready to start downloading the software with cvsup. This stage relies on a cvsupd server running on certain computers at CERN. The server which mirrors the cvs repository runs on atlas-sw.cern.ch and seems to be fairly reliable; this is the one used in atlas.sup. The remaining steps rely on Jim's private cvsupd server, which can crash, so it's worth knowing how to start your own by reading this aside. You can then just change the server specified in your sup file to correspond to the machine at CERN where you started the server.
Run the following:
$ cvsup -g ext.sup >& ext.txt
$ cvsup -g gaudi.sup >& gaudi.txt
$ cvsup -g sw.sup >& sw.txt
My logfiles looked like this: ext.txt, gaudi.txt, sw.txt, but yours will be different if the files change at CERN.
Downloading Future Versions of the Software
It's worth noting here that if cvsupd continues to be used at CERN, it would be a very good idea to make the sup files specifying what is needed to make a remote installation part of each official release. Until that is done, you can fairly easily change these files for yourself to get new versions from your own server.
First, make sure that you have write access to all of your files with the following:
$ cd $SITEROOT
$ find . -exec chmod u+w {} \;
Next, you need the Objectivity libraries to link against when compiling. This should (hopefully) not be needed for releases 3.0.0 and beyond. For the moment, you can use this tar file I made and extract it into your /home/jpp/MyAtlas/Anaphe directory.
Next, link everything in your version of /home/jpp/MyAtlas/Anaphe into /home/jpp/MyAtlas/sw/lhcxx/specific/redhat61/gcc-2.95.2:
$ mkdir -p /home/jpp/MyAtlas/sw/lhcxx/specific/redhat61/gcc-2.95.2
$ ln -s /home/jpp/MyAtlas/Anaphe/* /home/jpp/MyAtlas/sw/lhcxx/specific/redhat61/gcc-2.95.2
Next create a link from your version of /home/jpp/MyAtlas/external to /home/jpp/MyAtlas/sw/contrib:
$ ln -s /home/jpp/MyAtlas/external /home/jpp/MyAtlas/sw/contrib
Next, you need to add the AIDA package if you want to compile any of AtlfastCode. AIDA should be added to one of the sup definitions on the cvsupd server at CERN, but I just created this tarfile on a machine with AFS:
[cern] $ tar cf /tmp/AIDA.tar /afs/cern.ch/sw/contrib/AIDA
and unpacked it into /home/jpp/MyAtlas/external.
Next, make your equivalent of this directory:
$ mkdir -p /home/jpp/MyAtlas/external/Python/ExternalTools/Atlas-2.1/i386/Linux/lib
and copy in the following files:
/afs/cern.ch/atlas/offline/external/Python/ExternalTools/Atlas-2.1/i386/Linux/lib/libtcl8.0.so
/afs/cern.ch/atlas/offline/external/Python/ExternalTools/Atlas-2.1/i386/Linux/lib/libtk8.0.so
Next, you need to remove some symbolic links which point to /afs/cern.ch. Jim has written a script to remove some of them, which I've extended a bit to cover more of the links. Edit my script to remove any reference to /home/jpp and execute it as follows:
$ find $SITEROOT/ -type l | jpp_link_fix.pl >& link.txt
The log file will contain some links which haven't been fixed. The log file I got is here, and the missing links in it didn't cause me a problem for the limited tests of the code which I did.
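For reference, here is a hypothetical sketch of what such a link-fixing script does (this is not jpp_link_fix.pl itself, and the mapping from /afs/cern.ch paths to ${SITEROOT} is my assumption): each symlink aimed at /afs/cern.ch is re-pointed at the matching local path when that path exists, and the ones that cannot be fixed are reported. It is demonstrated here in a sandbox:

```shell
# Hypothetical sketch (not jpp_link_fix.pl): re-point /afs/cern.ch
# symlinks at the matching path under ${SITEROOT}, if it exists.
SITEROOT=/tmp/linkdemo                    # sandbox stand-in for your area
rm -rf $SITEROOT ; mkdir -p $SITEROOT/sw
touch $SITEROOT/sw/libFoo.so              # a local copy of the real file
ln -s /afs/cern.ch/sw/libFoo.so $SITEROOT/broken.so
find $SITEROOT/ -type l | while read l ; do
    t=$(readlink "$l")
    case "$t" in
        /afs/cern.ch/*)
            new="$SITEROOT/${t#/afs/cern.ch/}"
            if [ -e "$new" ] ; then
                rm "$l" && ln -s "$new" "$l"   # fix the link in place
            else
                echo "unfixed: $l -> $t"       # left for manual repair
            fi ;;
    esac
done
readlink $SITEROOT/broken.so              # now points at the local copy
```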
If all of the above went OK, you should now be able to compile and run new versions of Athena/Atlfast locally without any reference to the network. Execute these steps:
$ cd $CMTTEST
$ cmt co TestRelease
$ cmt co Simulation/Atlfast/AtlfastCode
Edit $CMTTEST/TestRelease/TestRelease-XX-YY-ZZ/cmt/requirements to add a line which makes explicit reference to whatever version of AtlfastCode you were blessed with by CMT, which in my case required adding the line:
use AtlfastCode AtlfastCode-01-03-02 Simulation/Atlfast
Then you can issue the following commands to rebuild athena and run:
$ cd $CMTTEST/TestRelease/TestRelease-XX-YY-ZZ/cmt
$ cmt broadcast cmt config
$ cmt broadcast gmake
$ source setup.sh
$ cd ../run
$ cp $CMTTEST/Simulation/Atlfast/AtlfastCode/AtlfastCode-XX-YY-ZZ/share/* .
$ athena
Athena should now run and produce some output like this. Note that at the time of writing, the interaction between the latest AtlfastCode release (01-03-02) and release 2.4.1 was such that I had to change the following line in the jobOptions.txt file to get the job to run:
// ApplicationMgr.DLLs += { "PythiaGenerator"};
ApplicationMgr.DLLs += { "GeneratorModules"};
However, the distribution described in the previous section is clearly much more than should be needed to simply run the code.
Running athena immediately highlights potential problems for deploying a lightweight run-time environment for, e.g., parallel processing over a Grid. A large and arbitrary number of scripts are searched for in different directories, often with an immediate core dump if they are not found. The shared libraries which will be needed by the application seem to be determinable only at runtime. I considered two possible methods for creating a runtime environment.
Turning to the second approach, I investigated how easily a tar file could be created. The fact that athena determines only at runtime which shared libraries are required prevented me from quickly finding a way of constructing an entirely statically linked executable. So I adopted more of a brute-force approach to see what was necessary to get a standalone dynamic environment.
Copy all the libraries from the development environment into ~/AtlasCage/lib:
$ find ${SITEROOT}/ -type f -name \*.so -exec cp {} ~/AtlasCage/lib \;
This picks up most of the libraries, except for some Objectivity ones where the actual files end in .so.X but athena looks for the .so links. Copying in the last few by hand gives this list of 292 libraries and a running environment totalling about 500MB.
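Creating those missing .so names can be sketched as follows, assuming the versioned files differ from the names athena wants only by a trailing version suffix (demonstrated here on a scratch directory with a made-up library name):

```shell
# Create foo.so names as links to the versioned foo.so.X files
L=/tmp/libdemo ; rm -rf $L ; mkdir -p $L ; cd $L
touch libOo.so.5.2                        # stand-in for a versioned lib
for f in *.so.* ; do
    base=${f%%.so.*}.so                   # strip the version suffix
    [ -e "$base" ] || ln -s "$f" "$base"
done
ls                                        # the .so link now sits alongside
```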
Next I investigated how many of these libraries were actually needed, as 500MB seemed somewhat excessive. By copying in by hand only those libraries explicitly required, I got this list of 72 libraries, totalling about 170MB. Hence the smallest runtime environment I could make is a 183MB tar file, which gzips down to 38MB, and which you can download and run with:
$ env -i ./run_athena.sh
$ ls lib/* | ./check_libs.pl > check_libs.txt
Checking this log for multiple instances of the same libraries highlights some potential problems, for example with multiple HTL libraries. This could lead to problems in finding the correct one to bundle with athena.exe. The procedure for building a runtime bundle needs to be integrated with the Atlas framework to find out which versions of all the different libraries are needed.
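One crude way of spotting such clashes, independent of check_libs.pl, is to list the basenames of all .so files under ${SITEROOT} and print those that occur more than once (shown here in a sandbox; note that -printf is a GNU find extension):

```shell
# List .so basenames that occur more than once under a tree
S=/tmp/dupdemo ; rm -rf $S ; mkdir -p $S/a $S/b
touch $S/a/libHTL.so $S/b/libHTL.so $S/a/libOnly.so
find $S/ -type f -name '*.so' -printf '%f\n' | sort | uniq -d   # prints libHTL.so
```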
Trying to run this tarfile on other platforms did reveal problems. In particular, the fact that some parts of the code link against pthread makes it less portable than most executables.
Big thanks to UCL for letting me work here for a few weeks: it was a great place to leave HEP from. In particular, thanks to John Butterworth and John Couchman.