The requirements file will need to be in the home directory of the (remote) account where your globus job is going to run, so you'll need to copy it there:
$ globus-rcp -p ~/requirements ouhep1.nhn.ou.edu:

(Or just uncomment the line in run-atlfast-globus. Also, it's already in the grid account here on ouhep1, so you don't have to worry about it if you're running here.)
Then, in run-atlfast-globus, you'll need to set the gatekeeper, the working directory in which you'd like your atlfast job to run (relative to the remote $HOME directory -- be sure that directory actually exists on the gatekeeper!), your BNL AFS userid, and the Release.
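For concreteness, the settings might look something like the following; the actual variable names in run-atlfast-globus may well differ, so treat these as purely illustrative:

GATEKEEPER=ouhep1.nhn.ou.edu
WORKDIR=atlfast                # relative to the remote $HOME -- must already exist there
AFSUSER=yourbnlid              # your BNL AFS userid
RELEASE=...                    # the Athena release you want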
And finally, in atlfast-globus, you may have to twiddle with the klog and unlog paths, if those commands aren't found on the machine you're trying to run on. They're usually in either /usr/bin/ or /usr/afsws/bin/, but ...
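If you're not sure where they live on a given machine, a quick check (just a suggestion) is:

$ which klog unlog
$ ls /usr/bin/klog /usr/afsws/bin/klog 2>/dev/null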
In that script you'll also need to customize what kind of job you'd like to run (Pythia or Isajet), how many events, ...
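Purely as an illustration -- the real names in atlfast-globus are probably different -- those choices might look like:

GENERATOR=pythia               # or isajet
NEVENTS=100                    # number of events to generate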
Then you're ready to roll: just run run-atlfast-globus. It will prompt you for your BNL AFS password and then copy atlfast-globus (which has to be in the same directory, of course) to the gatekeeper and run it there, where it checks out, compiles, and runs athena-atlfast.
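Assuming both scripts are executable and sitting in your current directory, that boils down to:

$ chmod +x run-atlfast-globus atlfast-globus
$ ./run-atlfast-globus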
If you'd like to submit the job rather than run it interactively, just change globus-job-run to globus-job-submit and add /jobmanager-condor (or -pbs or -lsf, ..., depending on what job scheduler you have available; here at OU we only have condor installed) to the gatekeeper in run-atlfast-globus.
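Schematically -- the exact line in run-atlfast-globus will look a bit different -- the change is from something like

globus-job-run ouhep1.nhn.ou.edu ...

to

globus-job-submit ouhep1.nhn.ou.edu/jobmanager-condor ...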
Then you can get the status of your job with globus-job-status <jobID>, and the output and stderr with globus-job-get-output <jobID> and globus-job-get-output -err <jobID>.
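Putting that together (globus-job-submit prints a job contact string when you submit; that's what goes in place of <jobID>):

$ globus-job-status <jobID>
$ globus-job-get-output <jobID>
$ globus-job-get-output -err <jobID>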
If you have any problems or suggestions, please let me know.
If you want to run on other testbed sites, you need to be aware of several things: Athena for some reason requires the existence of libXm.so.1, which is an old version (LessTif/Motif 1.2, as opposed to 2.0). You may be able to get away with making a soft link to libXm.so.2; I haven't tried that.
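If you want to try it, the link would presumably be something along these lines (untested, and the library name on your system may differ):

# ln -s libXm.so.2 /usr/lib/libXm.so.1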
Also, it wants libstdc++-libc6.1-2.so.3, a version that's not in the standard RedHat distribution as far as I know. The way to get around that is to make a soft link:

# ln -s libstdc++-2-libc6.1-1-2.9.0.so /usr/lib/libstdc++-libc6.1-2.so.3

That does work; I've done it here.
Also, for the time being, the BNL scripts require access to both the /afs/usatlas.bnl.gov/ and /afs/rhic/ trees, until all references to rhic have been removed from them.
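A quick way to check that a machine can actually see both trees (assuming an AFS client is running there):

$ ls -d /afs/usatlas.bnl.gov /afs/rhic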