How to run ATLFAST via Globus and Condor

Download the script file globus-condor-atlfast.sh and edit it slightly. In particular, be sure to substitute YOUR email address on the notify_user line, because I don't want to receive notification that YOUR Condor job has finished! :-)
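For reference, the relevant lines in the Condor submit description inside the script should look something like the following (the address below is obviously just a placeholder, and whether the script also sets the notification level is an assumption on my part):

    # Condor submit description - put your own address here
    notify_user  = your.name@your.site.edu
    notification = Complete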

You can also change the number of generated events via the NEVENT line, currently set to 1000. At the moment the output partition has about 3 GB free, and 1000 events seem to use about 1 MB and take about 3 min.
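The edit itself is a one-liner; assuming NEVENT is an ordinary shell variable inside the script, it looks like:

    # events per job: 1000 events ~ 1 MB of output, ~ 3 min of running
    NEVENT=1000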

Then just run globus-condor-atlfast.sh, with no arguments. This will submit four ATLFAST jobs to the Condor queue, each in its own directory.
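A typical session - assuming the script is executable and sitting in the current directory - is just:

    ./globus-condor-atlfast.sh    # submits the 4 jobs, one per directory
    condor_q                      # standard Condor command to watch the queue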

Afterwards, you can pick up the ntuples (atlfast.ntup) and the Pythia log files (demo.out) via gsiftp from ouhep1, in the subdirectories atlfast/job[1234].
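One way to fetch them is with globus-url-copy, the usual gsiftp command-line client; the paths below assume the atlfast directory lives under your home directory on ouhep1, so adjust host and paths as needed:

    # copy job1's ntuple and log file back to the local machine
    globus-url-copy gsiftp://ouhep1/~/atlfast/job1/atlfast.ntup file:///tmp/atlfast.ntup
    globus-url-copy gsiftp://ouhep1/~/atlfast/job1/demo.out file:///tmp/demo.out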

Please give it a shot and let me know if it works, or doesn't, or if you have any suggestions for improvement. This is just a first attempt, and it's very clunky and jury-rigged, but it SHOULD run - it does for me, anyway.

At this point it is only possible to submit four ATLFAST jobs at the same time - even though Globus and Condor could in principle handle more - since I hardwired each job to run in its own directory (see the sketch below); otherwise all the ATLFAST executables would be writing to the same output files, demo.out and atlfast.ntup.
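The per-directory part of the script is roughly the following - a sketch, not the actual contents, and the submit file name condor-atlfast.sub is made up:

    # one directory per job, so the output files can't collide
    for i in 1 2 3 4; do
        mkdir -p job$i
        ( cd job$i && condor_submit ../condor-atlfast.sub )
    done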

I thought I had everything figured out with a generic multi-job submit file that redirects output through Condor, but the redirection seems to be applied only at the end of each job, so while running, the ATLFAST executables still try to access the original file names, which crashes the concurrent jobs (see the sketch below).
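For the record, the generic approach that does NOT work here is the usual queue-N submit file; the sketch below is hedged (the wrapper name run-atlfast.sh is made up), but it illustrates the problem:

    # BROKEN for this case: the redirection to demo.out.$(Process) is
    # only applied when a job finishes; while running, every job still
    # writes demo.out and atlfast.ntup in the same directory, so
    # concurrent jobs clobber each other.
    executable = run-atlfast.sh
    output     = demo.out.$(Process)
    queue 4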


Horst Severini <hs@mail.nhn.ou.edu>
Last modified: Tue Apr 3 20:15:18 CDT 2001