Difference: CRABforHIDilepton (1 vs. 6)

Revision 6: 2011-09-09 - HyunChulKim

 
META TOPICPARENT name="HyunChulsLog"

CRAB for CMS-HI Dilepton group

Prerequisites

  • DongHo summarized the prerequisites; please visit here.

Preparation

Real CRAB

  • Step 1 : Store data at castor (without publishing on DBS)
(Notice: if you access the data in castor via crab, please read this.)

  • Step 2 : Store data at MIT and publish on DBS

Tips for effective work

  • If you want to store data at castor, do the following before submitting with CRAB. (ex: storage directory: /castor/cern.ch/user/h/hckim/JulyExercise10/JulyExercise10_HardEnriched_Dilepton_skim0)
    • rfrm -r /castor/cern.ch/user/h/hckim/JulyExercise10/JulyExercise10_HardEnriched_Dilepton_skim0
    • rfmkdir /castor/cern.ch/user/h/hckim/JulyExercise10/JulyExercise10_HardEnriched_Dilepton_skim0
    • rfchmod 775 /castor/cern.ch/user/h/hckim/JulyExercise10/JulyExercise10_HardEnriched_Dilepton_skim0
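The three preparation steps above can be sketched in Python. This is a minimal illustration that only assembles the command lines; the helper name and the argument-list form are my own, not part of CRAB or castor:

```python
# Hypothetical helper: assemble the three castor preparation commands
# shown above. The rfrm/rfmkdir/rfchmod tools exist only on machines
# with the castor client tools (e.g. lxplus); here we just build the
# argument lists, which could later be run with subprocess.run.
def castor_prep_commands(directory):
    return [
        ["rfrm", "-r", directory],      # remove any stale copy of the directory
        ["rfmkdir", directory],         # recreate it empty
        ["rfchmod", "775", directory],  # grant group write, avoiding exit code 60307
    ]

base = "/castor/cern.ch/user/h/hckim/JulyExercise10"
for cmd in castor_prep_commands(base + "/JulyExercise10_HardEnriched_Dilepton_skim0"):
    print(" ".join(cmd))
```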
(If the directory does not have group write permission, you will face exit code 60307, as below.)

crab:
ID    END STATUS            ACTION       ExeExitCode JobExitCode E_HOST
----- --- ----------------- ------------ ----------- ----------- ----------------------
1     Y   Retrieved         Cleared      0           60307       node74.datagrid.cea.fr

crab: ExitCodes Summary >>>>>>>>> 1 Jobs with Wrapper Exit Code : 60307 List of jobs: 1 See https://twiki.cern.ch/twiki/bin/view/CMS/JobExitCodes for Exit Code meaning

The directory listing

drwxr-xr-x   0 hckim    zh                          0 Sep 06 15:44 StoreResults-PyquenEvtGen_bJpsiMuMu_JPsiPt912_RECO_3111_v5

should change to

drwxrwxr-x   0 hckim    zh                          0 Sep 06 15:44 StoreResults-PyquenEvtGen_bJpsiMuMu_JPsiPt912_RECO_3111_v5
 
  • At MIT, the maximum capacity is about 2000 jobs, so it is good to keep the number of jobs per task low. In my experience, fewer than 200 is OK.
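As a rough aid, the events-per-job setting that keeps a task under a given job count can be computed like this. The helper name is mine and the 200-job cap comes from the experience above; this is a sketch, not a CRAB feature:

```python
import math

# Hypothetical helper: choose events_per_job for crab.cfg so that the
# task splits into at most max_jobs jobs (fewer than 200 per the tip above).
def pick_events_per_job(total_events, max_jobs=200):
    # Round up so the resulting job count never exceeds max_jobs.
    return math.ceil(total_events / max_jobs)

print(pick_events_per_job(1_000_000))  # -> 5000, i.e. exactly 200 jobs
```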

  • For reasons unknown, crab submission can sometimes fail with exit code 60307, meaning the destination folder does not have group write permission. In that case, remove the folder, kill the job, then create and submit a new job. (It may depend on which machine runs the job, I guess.)
  • If you face exit code 8018, do the following (ex: troubled jobs 13 and 119):
    • crab -get (or -getoutput) 13,119
    • crab -resubmit 13,119
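The two-step recovery above can be scripted. This minimal sketch only builds the command strings; the helper name is mine, and the comma-separated job list follows the usage shown above:

```python
# Hypothetical helper: build the recovery commands for jobs that ended
# with exit code 8018, following the two steps above.
def recover_8018(job_ids):
    joblist = ",".join(str(j) for j in job_ids)
    return [
        "crab -getoutput " + joblist,  # retrieve the output first
        "crab -resubmit " + joblist,   # then resubmit the same jobs
    ]

print(recover_8018([13, 119]))  # -> ['crab -getoutput 13,119', 'crab -resubmit 13,119']
```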

  • When a dataset you accessed with CRAB is extended (that is, more events are added to the same dataset), do the following:
    • crab -extend : creates the extended CRAB jobs and adds them to the already created ones
    • crab -submit <extended job numbers> : from this step on, same as usual jobs
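The extension workflow above, sketched as command strings. The helper name is mine, and the exact job-list syntax accepted by crab -submit may vary, so treat this as illustrative:

```python
# Hypothetical sketch of the dataset-extension workflow: after
# `crab -extend` creates jobs for the newly added events, only those
# new job numbers are submitted.
def extend_dataset_commands(new_job_numbers):
    jobs = ",".join(str(j) for j in new_job_numbers)
    return [
        "crab -extend",          # create the extended jobs
        "crab -submit " + jobs,  # submit only the newly created jobs
    ]

print(extend_dataset_commands([201, 202, 203]))
```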

Check whether all the data is stored and published successfully

  • Check the storage at castor
  • Check the storage at MIT
    • stored at /net/pstore01/d00/scratch/hckim/PrimaryDataset/User_Dataset_Name/PsetHash/
      • ex. /pnfs/cmsaf.mit.edu/t2bat/cms/store/user/hckim/MinimumBiasHI/JulyExercise10_MinimumBiasHI_dilepton_skim0/bde1f0b06d4fc6b6d44b106f0c9b396a
  • Check the publication on DBS
  • If you want to submit a job that produces multiple root files, add the following in crab.cfg:
    • comment out output_files = test1.root
    • add get_edm_output = 1
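Put together, the relevant fragment of crab.cfg would look roughly like this (key names as used by CRAB2; test1.root is the example file name from the list above):

```ini
[CMSSW]
# comment out ("remark") the single-file output line:
# output_files = test1.root
# and let CRAB collect every EDM output file instead:
get_edm_output = 1
```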
  • When you are using "VarParsing" in your pset python file, add pycfg_params = noprint under the [CMSSW] section. An example pset is below.

import FWCore.ParameterSet.Config as cms
process = cms.Process('Slurp')

process.source = cms.Source("PoolSource", fileNames = cms.untracked.vstring())
process.maxEvents = cms.untracked.PSet( input       = cms.untracked.int32(10) )
process.options   = cms.untracked.PSet( wantSummary = cms.untracked.bool(True) )

process.output = cms.OutputModule("PoolOutputModule",
    outputCommands = cms.untracked.vstring("keep *"),
    fileName = cms.untracked.string('outfile.root'),
)
process.out_step = cms.EndPath(process.output)
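The crab.cfg counterpart of this tip would look roughly like this (the pset file name is hypothetical; pycfg_params is the value CRAB passes on the pset's command line, which VarParsing then parses):

```ini
[CMSSW]
# hypothetical pset file name
pset = slurp_cfg.py
# passed to the pset as command-line arguments;
# "noprint" suppresses VarParsing's parameter printout
pycfg_params = noprint
```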

  • If a job is aborted, test it with crab -postMortem [jobid]. This command creates the file CMSSW_[jobid].LoggingInfo, in which you can find the reason for the abort. Reference is here. The result looks like below.
[lxplus422] ~/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab $ crab -c crab_0_110906_185936 -postMortem 1
crab:  Version 2.7.8 running on Fri Sep  9 16:38:32 2011 CET (14:38:32 UTC)

crab. Working options:
   scheduler           glite
   job type            CMSSW
   server              OFF
   working directory   /afs/cern.ch/user/h/hckim/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab/crab_0_110906_185936/

crab:  Logging info for job 1: 
      written to /afs/cern.ch/user/h/hckim/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab/crab_0_110906_185936/job/CMSSW_1.LoggingInfo 
...

[lxplus422] ~/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab $ cd crab_0_110906_185936/job/
[lxplus422] ~/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab/crab_0_110906_185936/job $ ls
CMSSW.py  CMSSW.py.pkl  CMSSW.sh  CMSSW_1.LoggingInfo
[lxplus422] ~/scratch0/CMSSW_4_1_3/src/HiAnalysis/HiOnia/crab/crab_0_110906_185936/job $ vi CMSSW_1.LoggingInfo

Event: Abort
- Arrived                    =    Tue Sep  6 19:07:44 2011 CEST
- Host                       =    wms212.cern.ch
- Level                      =    SYSTEM
- Priority                   =    asynchronous
- Reason                     =    The job cannot be submitted because the blparser service is not alive

...

 

Useful link

  -- HyunChulKim - 07 Jul 2010

 