Deployed 31b4be9 with MkDocs version: 1.6.0
kandrosov committed May 23, 2024
1 parent ea5694c commit 2753e1e
Showing 2 changed files with 64 additions and 1 deletion.
63 changes: 63 additions & 0 deletions index.html
@@ -314,6 +314,33 @@
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-nanoaod-nanoaod-skims-production" class="md-nav__link">
<span class="md-ellipsis">
How to run nanoAOD-&gt;nanoAOD skims production
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-hhbtag-training-skim-ntuple-production" class="md-nav__link">
<span class="md-ellipsis">
How to run HHbtag training skim ntuple production
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-histogram-production" class="md-nav__link">
<span class="md-ellipsis">
How to run Histogram production
</span>
</a>

</li>

</ul>
@@ -375,6 +402,33 @@
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-nanoaod-nanoaod-skims-production" class="md-nav__link">
<span class="md-ellipsis">
How to run nanoAOD-&gt;nanoAOD skims production
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-hhbtag-training-skim-ntuple-production" class="md-nav__link">
<span class="md-ellipsis">
How to run HHbtag training skim ntuple production
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#how-to-run-histogram-production" class="md-nav__link">
<span class="md-ellipsis">
How to run Histogram production
</span>
</a>

</li>

</ul>
@@ -461,6 +515,15 @@ <h2 id="how-to-run-limits">How to run limits<a class="headerlink" href="#how-to-run-limits" title="Permanent link">&para;</a></h2>
</ul>
</li>
</ol>
<h2 id="how-to-run-nanoaod-nanoaod-skims-production">How to run nanoAOD-&gt;nanoAOD skims production<a class="headerlink" href="#how-to-run-nanoaod-nanoaod-skims-production" title="Permanent link">&para;</a></h2>
<div class="highlight"><pre><span></span><code>law<span class="w"> </span>run<span class="w"> </span>CreateNanoSkims<span class="w"> </span>--version<span class="w"> </span>prod_v1<span class="w"> </span>--periods<span class="w"> </span><span class="m">2016</span>,2016APV,2017,2018<span class="w"> </span>--ignore-missing-samples<span class="w"> </span>True
</code></pre></div>
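The period list can be tailored per campaign. As a minimal sketch (the version string and period subset are carried over from the command above, not additional framework options), the invocation can be assembled from variables before launching, which makes it easy to rerun for a single era:

```shell
# Build the CreateNanoSkims invocation from variables so the same snippet
# can be reused for other campaigns; only echoed here, not executed.
VERSION="prod_v1"
PERIODS="2016,2016APV,2017,2018"
CMD="law run CreateNanoSkims --version ${VERSION} --periods ${PERIODS} --ignore-missing-samples True"
echo "${CMD}"
```

For example, setting `PERIODS="2018"` restricts the skim production to 2018 samples only.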
<h2 id="how-to-run-hhbtag-training-skim-ntuple-production">How to run HHbtag training skim ntuple production<a class="headerlink" href="#how-to-run-hhbtag-training-skim-ntuple-production" title="Permanent link">&para;</a></h2>
<div class="highlight"><pre><span></span><code>python<span class="w"> </span>Studies/HHBTag/CreateTrainingSkim.py<span class="w"> </span>--inFile<span class="w"> </span><span class="nv">$CENTRAL_STORAGE</span>/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root<span class="w"> </span>--outFile<span class="w"> </span>output/skim.root<span class="w"> </span>--mass<span class="w"> </span><span class="m">350</span><span class="w"> </span>--sample<span class="w"> </span>GluGluToBulkGraviton<span class="w"> </span>--year<span class="w"> </span><span class="m">2018</span><span class="w"> </span>&gt;<span class="p">&amp;</span><span class="w"> </span>EventInfo.txt
python<span class="w"> </span>Common/SaveHisto.txt<span class="w"> </span>--inFile<span class="w"> </span><span class="nv">$CENTRAL_STORAGE</span>/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root<span class="w"> </span>--outFile<span class="w"> </span>output/skim.root
</code></pre></div>
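When training skims are needed for several mass points, the first command above can be looped. This is a hedged sketch, not part of the framework: the mass list and per-mass output naming are assumptions, and `echo` is used so the loop only prints the commands instead of running them.

```shell
# Print (rather than run) CreateTrainingSkim.py for a few hypothetical
# resonance masses; remove the echo to actually execute the commands.
for MASS in 300 350 400; do
  IN="$CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-${MASS}.root"
  echo "python Studies/HHBTag/CreateTrainingSkim.py --inFile ${IN}" \
       "--outFile output/skim_M${MASS}.root --mass ${MASS}" \
       "--sample GluGluToBulkGraviton --year 2018"
done
```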
<h2 id="how-to-run-histogram-production">How to run Histogram production<a class="headerlink" href="#how-to-run-histogram-production" title="Permanent link">&para;</a></h2>
<p>Please see the file all_commands.txt (to be updated).</p>



2 changes: 1 addition & 1 deletion search/search_index.json
@@ -1 +1 @@
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"FLAF","text":"<p>FLAF - Flexible LAW-based Analysis Framework. Task workflow managed is done via LAW (Luigi Analysis Framework).</p>"},{"location":"#how-to-install","title":"How to install","text":"<ol> <li>Setup ssh keys:</li> <li>On GitHub settings/keys</li> <li> <p>On CERN GitLab profile/keys</p> </li> <li> <p>Clone the repository: <pre><code>git clone --recursive [email protected]:cms-flaf/Framework.git FLAF\n</code></pre></p> </li> </ol>"},{"location":"#how-to-load-environment","title":"How to load environment","text":"<p>Following command activates the framework environment: <pre><code>source env.sh\n</code></pre></p>"},{"location":"#how-to-run-limits","title":"How to run limits","text":"<ol> <li> <p>As a temporary workaround, if you want to run multiplie commands, to avoid delays to load environment each time run: <pre><code>cmbEnv /bin/zsh # or /bin/bash\n</code></pre> Alternatively add <code>cmbEnv</code> in front of each command. E.g. <pre><code>cmbEnv python3 -c 'print(\"hello\")'\n</code></pre></p> </li> <li> <p>Create datacards. <pre><code>python3 StatInference/dc_make/create_datacards.py --input PATH_TO_SHAPES --output PATH_TO_CARDS --config PATH_TO_CONFIG\n</code></pre> Available configurations:</p> <ul> <li>For X-&gt;HH&gt;bbtautau Run 2: StatInference/config/x_hh_bbtautau_run2.yaml</li> <li>For X-&gt;HH-&gt;bbWW Run 3: StatInference/config/x_hh_bbww_run3.yaml</li> </ul> </li> <li> <p>Run limits. <pre><code>law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log\n</code></pre> Hints:</p> <ul> <li>use <code>--workflow htcondor</code> to submit on HTCondor (by default it runs locally)</li> <li>add <code>--remove-output 4,a,y</code> to remove previous output files</li> <li>add <code>--print-status 0</code> to get status of the workflow (where <code>0</code> is a depth). 
Useful to get the output file name.</li> <li>for more details see cms-hh inference documentation</li> </ul> </li> </ol>"}]}
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"FLAF","text":"<p>FLAF - Flexible LAW-based Analysis Framework. Task workflow managed is done via LAW (Luigi Analysis Framework).</p>"},{"location":"#how-to-install","title":"How to install","text":"<ol> <li>Setup ssh keys:</li> <li>On GitHub settings/keys</li> <li> <p>On CERN GitLab profile/keys</p> </li> <li> <p>Clone the repository: <pre><code>git clone --recursive [email protected]:cms-flaf/Framework.git FLAF\n</code></pre></p> </li> </ol>"},{"location":"#how-to-load-environment","title":"How to load environment","text":"<p>Following command activates the framework environment: <pre><code>source env.sh\n</code></pre></p>"},{"location":"#how-to-run-limits","title":"How to run limits","text":"<ol> <li> <p>As a temporary workaround, if you want to run multiplie commands, to avoid delays to load environment each time run: <pre><code>cmbEnv /bin/zsh # or /bin/bash\n</code></pre> Alternatively add <code>cmbEnv</code> in front of each command. E.g. <pre><code>cmbEnv python3 -c 'print(\"hello\")'\n</code></pre></p> </li> <li> <p>Create datacards. <pre><code>python3 StatInference/dc_make/create_datacards.py --input PATH_TO_SHAPES --output PATH_TO_CARDS --config PATH_TO_CONFIG\n</code></pre> Available configurations:</p> <ul> <li>For X-&gt;HH&gt;bbtautau Run 2: StatInference/config/x_hh_bbtautau_run2.yaml</li> <li>For X-&gt;HH-&gt;bbWW Run 3: StatInference/config/x_hh_bbww_run3.yaml</li> </ul> </li> <li> <p>Run limits. <pre><code>law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log\n</code></pre> Hints:</p> <ul> <li>use <code>--workflow htcondor</code> to submit on HTCondor (by default it runs locally)</li> <li>add <code>--remove-output 4,a,y</code> to remove previous output files</li> <li>add <code>--print-status 0</code> to get status of the workflow (where <code>0</code> is a depth). 
Useful to get the output file name.</li> <li>for more details see cms-hh inference documentation</li> </ul> </li> </ol>"},{"location":"#how-to-run-nanoaod-nanoaod-skims-production","title":"How to run nanoAOD-&gt;nanoAOD skims production","text":"<pre><code>law run CreateNanoSkims --version prod_v1 --periods 2016,2016APV,2017,2018 --ignore-missing-samples True\n</code></pre>"},{"location":"#how-to-run-hhbtag-training-skim-ntuple-production","title":"How to run HHbtag training skim ntuple production","text":"<pre><code>python Studies/HHBTag/CreateTrainingSkim.py --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root --mass 350 --sample GluGluToBulkGraviton --year 2018 &gt;&amp; EventInfo.txt\npython Common/SaveHisto.txt --inFile $CENTRAL_STORAGE/prod_v1/nanoAOD/2018/GluGluToBulkGravitonToHHTo2B2Tau_M-350.root --outFile output/skim.root\n</code></pre>"},{"location":"#how-to-run-histogram-production","title":"How to run Histogram production","text":"<p>Please, see the file all_commands.txt (to be updated)</p>"}]}
