You can also change the number of generated events on the NEVENT line, currently set to 1000. At the moment that partition has about 3 GB free, and 1000 events seem to use about 1 MB and take about 3 minutes.
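For example, you could bump the event count with a one-line sed edit. This is just a sketch: the file name (atlfast.params) and the exact format of the NEVENT line are assumptions for illustration, so check the real file first.

```shell
# Stand-in parameter file so this sketch is self-contained;
# the real file name and line format may differ.
echo "NEVENT 1000" > atlfast.params

# Replace whatever value follows NEVENT with 2000.
sed -i 's/^NEVENT .*/NEVENT 2000/' atlfast.params

cat atlfast.params
```

At 1 MB per 1000 events, even a few thousand events per job stays well within the 3 GB of free space.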
Then just run globus-condor-atlfast.sh with no arguments. This will submit 4 atlfast jobs to the Condor queue, each in its own directory.
Afterwards, you can pick up the ntuples (atlfast.ntup) and the Pythia log files (demo.out) via gsiftp from ouhep1, in the atlfast/job subdirectory.
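A retrieval loop might look like the sketch below, using globus-url-copy (the standard GridFTP client). The per-job directory names (job1 through job4) and destination paths are assumptions based on the description above, not the actual layout; the snippet records each command in fetch_commands.txt and only runs the copy if the Globus client is installed.

```shell
: > fetch_commands.txt
for i in 1 2 3 4; do
  # Assumed layout: one subdirectory per job under atlfast/.
  src="gsiftp://ouhep1/atlfast/job$i/atlfast.ntup"
  dst="file://$PWD/atlfast.$i.ntup"
  echo "globus-url-copy $src $dst" >> fetch_commands.txt
  # Actually perform the transfer only where the Globus tools exist.
  if command -v globus-url-copy >/dev/null 2>&1; then
    globus-url-copy "$src" "$dst"
  fi
done
```

The same loop with demo.out in place of atlfast.ntup would grab the Pythia logs.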
Please give it a shot and let me know if it works, or doesn't, or if you have any suggestions for improvements. This is just a first attempt, and it's very clunky and rigged, but it SHOULD run - it does for me, anyway.
At this point it is only possible to submit 4 atlfast jobs at the same time, even though Globus and Condor could in principle handle more: I hardwired each job to run in a different directory, because otherwise all the atlfast executables would write to the same output files, demo.out and atlfast.ntup.
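One way to avoid the hardwiring might be HTCondor's $(Process) macro, which expands to 0, 1, 2, ... for each queued job, so each job gets its own working directory. This is a hedged sketch, not the actual submit file used here; the executable name and directory names are assumptions, and the directories would still need to exist before submission.

```
# Hypothetical submit description: each of the 4 jobs runs in its
# own directory (job0 ... job3), so demo.out and atlfast.ntup
# from different jobs never collide.
universe   = vanilla
executable = atlfast
initialdir = job$(Process)
output     = demo.out
error      = demo.err
log        = atlfast.log
queue 4
```

Since output and error paths are resolved relative to initialdir, each job's files land in its own subdirectory without any renaming.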
I thought I had everything figured out with generic multiple jobs redirecting output through Condor, but the redirection seems to be handled only at the end of the job, so the atlfast executables would still try to open the original file names while running, and multiple jobs would crash each other.