July 11th 2005 OSG 0.2.1 installation notes - Karthik

Installation instructions followed from:
http://giis.ivdgl.org/twiki/bin/view/Provisioning/AtlasDeployment#Introduction
http://osg.ivdgl.org/twiki/bin/view/Provisioning/OsgCEInstallGuide0dot2dot1

Logged into ouhep1 as root.

Stop the running grid services:

/etc/init.d/edg-crl-upgraded stop
/etc/init.d/edg-gridmapfile-upgraded stop
/etc/init.d/gris stop
/etc/init.d/monalisa stop

Back up the current configuration:

cp -p /etc/xinetd.d/globus-gatekeeper /etc/xinetd.d/gsiftp /etc/xinetd.d/gsiftp2 /etc/xinetd.d/save
cp -p /etc/services /etc/services.save
cp -p /etc/init.d/gris /etc/init.d/gris.save
cp -p /etc/init.d/monalisa /etc/init.d/monalisa.save

Move the old grid-security directory aside and carry over the host credentials and grid-mapfile:

mv /etc/grid-security /etc/grid-security-osg-0.1.6
mkdir /etc/grid-security
cp -p /etc/grid-security-osg-0.1.6/hostkey.pem /etc/grid-security
cp -p /etc/grid-security-osg-0.1.6/hostcert.pem /etc/grid-security
cp -p /etc/grid-security-osg-0.1.6/grid-mapfile /etc/grid-security

Create the installation directory, point /opt/osg at it, set up the VDT/Condor environment, and install the CE package with pacman:

mkdir /atlas2/software/osg
ln -s /atlas2/software/osg /opt/osg
export VDTSETUP_CONDOR_LOCATION=/usr/local/condor
export VDTSETUP_CONDOR_CONFIG=/usr/local/condor/etc/condor_config
export VDT_LOCATION=/opt/osg
cd $VDT_LOCATION
cd /opt/pacman/current
. setup.sh
cd /opt/osg
pacman -get OSG:ce

Installer dialogue (answered y to enable the Condor jobmanager):

If you would like, we can set up a Globus jobmanager interface to Condor.
This will allow Globus Gatekeeper to run jobs on Condor.
Would you like to enable Globus jobmanager for Condor?
Choices: y (yes), n (no), s (skip this question)
y
............................
Downloading [site-verify.tar.gz] from [software.grid.iu.edu]...
26/26 kB downloaded...
Untarring [site-verify.tar.gz]...
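The `cp -p FILE FILE.save` backup pattern used throughout these notes can be condensed into a small helper. This is only a sketch (the `backup_cfg` name and the example path are mine, not from the notes or the OSG tools); it adds one safeguard the bare `cp` lacks, refusing to clobber a backup that already exists:

```shell
#!/bin/sh
# Sketch of the "cp -p FILE FILE.save" backup pattern used in these notes.
# backup_cfg is a hypothetical helper name, not part of the OSG tools.
backup_cfg() {
    src=$1
    dst=$1.save
    if [ ! -f "$src" ]; then
        echo "backup_cfg: $src missing" >&2
        return 1
    fi
    if [ -e "$dst" ]; then
        # An existing .save is most likely the pre-upgrade copy; keep it.
        echo "backup_cfg: $dst already exists, not overwriting" >&2
        return 1
    fi
    cp -p "$src" "$dst"    # -p preserves mode, ownership, timestamps
}

# Example (hypothetical): backup_cfg /etc/services
```

The refuse-to-overwrite check matters during a reinstall: the first `.save` made before the 0.1.6-to-0.2.1 upgrade is the one worth keeping.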
=> installation done

source setup.sh

mv /etc/grid-security globus; ln -s /opt/osg/globus/grid-security /etc
ln -s ../grid-security/certificates globus/share
rm globus/TRUSTED_CA; ln -s grid-security/certificates globus/TRUSTED_CA
cp -a /atlas2/software/osg-itb/globus/etc/ldap globus/etc

# Add to /etc/xinetd.d/globus-gatekeeper and /etc/xinetd.d/gsiftp[2]:
env = GLOBUS_TCP_PORT_RANGE=63001,65000
env += GLOBUS_TCP_SOURCE_RANGE=63001,65000   # (only in gsiftp2)

# Add to globus/etc/globus-job-manager.conf:
-globus-tcp-port-range 63001,65000

# Add to globus/etc/globus-user-env.[c]sh:
export GLOBUS_TCP_PORT_RANGE="63001,65000"   # (or setenv ...)

mv /etc/xinetd.d/*vdtsave /etc/xinetd.d/save/
/etc/init.d/xinetd restart
/etc/init.d/edg-crl-upgraded start

cd /atlas2/software/grid3appdata/app/etc
cp -p grid3-locations.txt grid3-locations.txt.save
cd -

$VDT_LOCATION/vdt/setup/setup-cert-request
q
# (host and ldap certs were already there; otherwise request them with
#  ./globus/bin/grid-cert-request -host ouhep1.nhn.ou.edu
#  ./globus/bin/grid-cert-request -host ouhep1.nhn.ou.edu -service ldap)

cd monitoring/
./configure-osg.sh

Please specify your OSG SITE NAME [CHANGE-site-name]: OUHEP_OSG
Please specify your OSG BASE_DIR [/atlas2/software/osg]: /opt/osg
Please specify your GRID3 APP_DIR [/app]: /atlas2/software/grid3appdata/app
Please specify your GRID3 DATA_DIR [/data]: /atlas2/software/grid3appdata/data
Please specify your GRID3 TMP_DIR [/scratch]: /atlas2/software/grid3appdata/shared-tmp
Please specify your GRID3 TMP_WN_DIR [/tmp]: /myhome1/atlas/grid3/tmp
Please specify the VO sponsor of this site [iVDGL]: usatlas:50 ivdgl:10 uscms:10 local:30
Please specify the Policy URL [POLICY_URL]:
Please specify the Batch Queuing to be used [condor]:

Please review the information:
Grid Site Name: OUHEP_OSG
Grid3 Location: /opt/osg
Application: /atlas2/software/grid3appdata/app
Data: /atlas2/software/grid3appdata/data
Shared Temp: /atlas2/software/grid3appdata/shared-tmp
WorkerNode Temp: /myhome1/atlas/grid3/tmp
VO sponsor: usatlas:50 ivdgl:10 uscms:10 local:30
JOB Manager: condor
Is this information correct (y/n)? [n]: y
cd -

cd MIS-CI/
./configure-misci.sh
---------
Would you like to set up MIS-CI cron now ? (y/n) y
At what frequency (in minutes) would you like to run MIS-CI ? [10] 30
Frequency 30
User ivdgl
00 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
30 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
INFO removing existing MIS-CI crontab
No crontab found in /atlas2/software/osg/MIS-CI/tmp/misci
Doing nothing
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/atlas2/software/osg-itb/MIS-CI/tmp/misci/crontab_ivdgl installed on Wed May 25 16:56:01 2005)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
00 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
30 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
17 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci-diskinfo-user.sh >& /dev/null
crontab exists
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/atlas2/software/osg-itb/MIS-CI/tmp/misci/crontab_ivdgl installed on Wed May 25 16:56:01 2005)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
00 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
30 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci.sh >& /dev/null
17 * * * * /opt/osg-itb/MIS-CI/sbin/run-mis-ci-diskinfo-user.sh >& /dev/null
Would you like to add MIS-CI crontab to this ? (y/n) y

Note: need to fix /var/spool/cron/[ivdgl,root] and remove multiple entries, if left over from previous install.

cd -
$VDT_LOCATION/vdt/setup/configure_monalisa.sh

Please specify user account to run MonaLisa daemons as: [monalisa]:
This is the name you will be seen by the world, so please choose a name that represents you. Make sure this name is unique in the MonaLisa environment.
Please specify the farm name [ouhep1.nhn.ou.edu]: OUHEP_OSG
Your Monitor Group name is important to group your site correctly in the global site list. OSG users should enter "OSG".
Please enter your monitor group name [ouhep1.nhn.ou.edu]: OSG
Please enter your contact name (your name) [root]: Horst Severini
Contact email (your email) [root@ouhep1.nhn.ou.edu]: hs@nhn.ou.edu
City (server's location) []: Norman, OK
Country []: USA
You can find some approximate values for your geographic location from: http://geotags.com/ or you can search your location on Google
Location latitude (-90..90) [0]: 35.2070
Location longitude (-180..180) [0]: -97.4465
MonALISA can use various monitoring tools to present status. Enable support for the appropriate tool(s).
Will you connect to a Ganglia instance (y/n)? [n] y
On which host is Ganglia Running? [localhost]: ouhep0
On which port is Ganglia running? [8649]:
Do you want to run VO_Modules (y/n)? [y]:
Please specify Globus location: [/atlas2/software/osg/globus]: /opt/osg/globus
Please specify CONDOR location: [/usr/local/condor]:
Please specify PBS location: []:
Please specify LSF location: []:
Do you want to enable the MonALISA auto-update feature (y/n)? [n]
Do you want to automatically startup the MonaLisa daemon (y/n)? [y]

Please review the information:
MonaLisa user: monalisa
Farm name: OUHEP_OSG
Monitor group: OSG
Name: Horst Severini
Email: hs@nhn.ou.edu
City: Norman, OK
Country: USA
Latitude: 35.2070
Longitude: -97.4465
Ganglia: y (ouhep0:8649)
VO_Modules: y
Globus location: /opt/osg/globus
Condor location: /usr/local/condor
LSF location:
PBS location:
Auto-update: n
Startup automatically: y
Is this information correct (y/n)? [n]: y

edit $VDT_LOCATION/MonaLisa/Service/CMD/site_env (fix PATH names)
edit $VDT_LOCATION/MonaLisa/Service/VDTFarm/vdtFarm.conf (add *Tracepath{monTracepath, localhost, " "})
/etc/init.d/MLD start

chown -R daemon globus/etc/ldap
/etc/init.d/gris start

cp -p /etc/init.d/edg-gridmapfile-upgraded /etc/init.d/edg-gridmapfile-upgraded.save
cp -p post-install/edg-gridmapfile-upgraded /etc/init.d/
echo gmf_local /opt/osg/edg/etc/grid-mapfile-local >> edg/etc/edg-mkgridmap.conf
cp -p /atlas2/software/osg-itb/edg/etc/grid-mapfile-local edg/etc/
# (add the GridEx DN mapped to user grid in edg/etc/grid-mapfile-local, if not there yet)
# in edg/etc/edg-mkgridmap.conf, move MIS before GRASE and fix USATLAS
/etc/init.d/edg-gridmapfile-upgraded start
/sbin/chkconfig edg-gridmapfile-upgraded on

As regular user:

cd /opt/osg-itb/
. setup.sh
cd -
grid-proxy-init
perl $VDT_LOCATION/verify/site_verify.pl --host=ouhep1.nhn.ou.edu

Below is the output of the above command:

===============================================================================
Info: Site verification initiated at Mon Jul 11 22:58:14 2005 GMT.
===============================================================================
-------------------------------------------------------------------------------
----------- Begin ouhep1.nhn.ou.edu at Mon Jul 11 22:58:14 2005 GMT -----------
-------------------------------------------------------------------------------
Checking prerequisites needed for testing: PASS
Checking for a valid proxy for karunach@ouhep1.nhn.ou.edu: PASS
Checking if remote host is reachable: PASS
Checking for a running gatekeeper: YES; port 2119
Checking authentication: PASS
Checking 'Hello, World' application: PASS
Checking remote host uptime: PASS
  5:58pm up 41 days, 4:12, 4 users, load average: 3.37, 2.79, 2.62
Checking remote Internet network services list: PASS
Checking remote Internet servers database configuration: PASS
Checking for GLOBUS_LOCATION: /atlas2/software/osg/globus
Checking expiration date of remote host certificate: Nov 9 15:57:48 2005 GMT
Checking for gatekeeper configuration file: YES
  /atlas2/software/osg/globus/etc/globus-gatekeeper.conf
Checking for a running gsiftp server: YES; port 2811
Checking gsiftp (local client, local host -> remote host): PASS
Checking gsiftp (local client, remote host -> local host): PASS
Checking that no differences exist between gsiftp'd files: PASS
Checking users in grid-mapfile: cdf,fermilab,fmri,gadu,grase,grid,hegarty,hs,ivdgl,karunach,mcfarm,mis,sdss,star,usatlas1,uscms01
Checking for remote globus-sh-tools-vars.sh: YES
Checking configured grid services: PASS
  jobmanager,jobmanager-condor,jobmanager-fork,jobmanager-mis
Checking scheduler types associated with remote jobmanagers: PASS
  jobmanager is of type fork
  jobmanager-condor is of type condor
  jobmanager-fork is of type fork
  jobmanager-mis is of type mis
Checking for paths to binaries of remote schedulers: PASS
  Path to condor binaries is /usr/local/condor/bin
  Path to mis binaries is /atlas2/software/osg/MIS-CI/bin
Checking remote scheduler status: PASS
  condor: 18 jobs running, 6 jobs idle/pending
Checking for a running MDS service: YES; port 2135
Checking if Globus is deployed from the VDT: YES; version 1.3.6
Checking for Grid3 grid3-info.conf: YES
Checking for Grid3 grid3-user-vo-map.txt: YES
  cdf users: cdf
  mis users: mis
  fermilab users: fermilab
  fmri users: fmri
  gadu users: gadu
  grase users: grase
  star users: star
  gridex users: grid
  usatlas users: usatlas1
  ivdgl users: ivdgl
  uscms users: uscms01
  sdss users: sdss
Checking for Grid3 site name: OUHEP_OSG
Checking for Grid3 $GRID3 definition: /opt/osg
Checking for Grid3 $APP definition: /atlas2/software/grid3appdata/app
Checking for Grid3 $DATA definition: /atlas2/software/grid3appdata/data
Checking for Grid3 $TMP definition: /atlas2/software/grid3appdata/shared-tmp
Checking for Grid3 $WNTMP definition: /myhome1/atlas/grid3/tmp
Checking for Grid3 $APP existence: PASS
Checking for Grid3 $DATA existence: PASS
Checking for Grid3 $TMP existence: PASS
Checking for Grid3 $APP writability: FAIL
Checking for Grid3 $DATA writability: FAIL
Checking for Grid3 $TMP writability: FAIL
Checking for Grid3 $APP available space: 17.883 GB
Checking for Grid3 $DATA available space: 17.883 GB
Checking for Grid3 $TMP available space: 16.283 GB
Checking for Grid3 additional site-specific variable definitions: YES
  MountPoints
  ATLAS_APP prod /atlas2/software/grid3appdata/app/atlas_app
  ATLAS_DATA prod /atlas2/software/grid3appdata/app/atlas_data
  ATLAS_LOC_1001 10.0.1 /atlas2/software/grid3appdata/app/atlas_app/atlas_rel/10.0.1
  ATLAS_LOC_903 9.0.3 /atlas2/software/grid3appdata/app/atlas_app/atlas_rel/9.0.3
  ATLAS_LOC_904 9.0.4 /atlas2/software/grid3appdata/app/atlas_app/atlas_rel/9.0.4
  ATLAS_LOC_940 9.4.0 /atlas2/software/grid3appdata/app/atlas_app/atlas_rel/9.4.0
  ATLAS_LOC_GCC 3.2 /atlas2/software/grid3appdata/app/atlas_app/gcc32
  ATLAS_LOC_GCE prod /atlas2/software/grid3appdata/app/atlas_app/GCE-Server/gce-server
  ATLAS_LOC_KitVal prod /atlas2/software/grid3appdata/app/atlas_app/atlas_rel/kitval/KitValidation
  ATLAS_LOC_Trfs prod /atlas2/software/grid3appdata/app/atlas_app/Atlas-Trfs/atlas-trfs
  ATLAS_STAGE prod /atlas2/software/grid3appdata/app/atlas_data
  SAMPLE_LOCATION default /SAMPLE-path
  SAMPLE_SCRATCH devel /SAMPLE-path
Checking for Grid3 execution jobmanager(s): ouhep1.nhn.ou.edu/jobmanager-condor
Checking for Grid3 utility jobmanager(s): ouhep1.nhn.ou.edu/jobmanager
Checking for Grid3 sponsoring VO: usatlas:50 ivdgl:10 uscms:10 local:30
Checking for Grid3 policy expression: NONE
Checking for Grid3 setup.sh: YES
Checking for Grid3 $Monalisa_HOME definition: /atlas2/software/osg/MonaLisa
Checking for MonALISA configuration: PASS
  key ml_env vars:
    FARM_NAME = OUHEP_OSG
    FARM_HOME = /atlas2/software/osg/MonaLisa/Service/VDTFarm
    FARM_CONF_FILE = /atlas2/software/osg/MonaLisa/Service/VDTFarm/vdtFarm.conf
    SHOULD_UPDATE = false
    URL_LIST_UPDATE = http://monalisa.cacr.caltech.edu/FARM_ML,http://monalisa.cern.ch/MONALISA/FARM_ML
  key ml_properties vars:
    lia.Monitor.group = OSG
    lia.Monitor.useIPaddress = undef
    MonaLisa.ContactEmail = "hs@nhn.ou.edu"
Checking for a running MonALISA: PASS
  MonALISA is ALIVE (pid 6836)
  MonALISA_Version = 1.2.34-200505181146
  MonALISA_VDate = 2005-05-18
  VoModulesDir = VoModules-v0.17
  tcpServer_Port = 9002
  storeType = emysqldb
Checking for a running GANGLIA gmond daemon: PASS
  (pid 1538 1537 1536 ...)
  /usr/sbin/gmond
  name "OUHEP"
  owner "OU-HEP"
Checking for a running GANGLIA gmetad daemon: NO
  gmetad does not appear to be running
-------------------------------------------------------------------------------
------------ End ouhep1.nhn.ou.edu at Mon Jul 11 22:59:21 2005 GMT ------------
-------------------------------------------------------------------------------
===============================================================================
Info: Site verification completed at Mon Jul 11 22:59:21 2005 GMT.

END of installation notes (installation of OSG 0.2.1 successful)
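Follow-up on the three writability FAILs in the site_verify output above: a quick way to re-check the Grid3 directories is a loop like the sketch below. The `check_writable` helper name is mine, not part of the OSG tools; run it as the account the gatekeeper maps jobs to, with the $APP/$DATA/$TMP paths chosen during configure-osg.sh.

```shell
#!/bin/sh
# Sketch: report whether each given Grid3 directory exists and is
# writable by the invoking user. check_writable is a hypothetical
# helper name; the commented paths below are the ones configured above.
check_writable() {
    status=0
    for d in "$@"; do
        if [ -d "$d" ] && [ -w "$d" ]; then
            echo "OK: $d"
        else
            echo "FAIL: $d"
            status=1    # remember that at least one directory failed
        fi
    done
    return $status
}

# Example, using the directories configured above:
# check_writable /atlas2/software/grid3appdata/app \
#                /atlas2/software/grid3appdata/data \
#                /atlas2/software/grid3appdata/shared-tmp
```

Note that root passes the -w test on most directories regardless of mode, so the check is only meaningful when run as an unprivileged (grid-mapped) account.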