%//------ Section 03 -------------------------------------------------------------------------------------------------
\chapter{Mass measurements of multi-strange baryons in pp collisions at \sqrtS = 13 TeV}
\label{chap:CPTAnalysis}
%//-----------------------------------------------------------------------//
The first analysis conducted in this thesis aims at measuring the masses of multi-strange baryons and the mass differences between particle and anti-particle. The focus is on \rmXiM, \rmAxiP, \rmOmegaM and \rmAomegaP. This chapter provides a description of the different elements needed to achieve this goal. \\
Once the context has been introduced (\Sec\ref{sec:IntroductionCPT}), the exploited data samples are presented in \Sec\ref{sec:DataSampleCPT}. This is followed by a detailed description and discussion of the various ingredients involved in the analysis (\Sec\ref{sec:AnalysisOfHyperonMasses}): the track, V0 and cascade candidate selections, and finally, the principles of the mass measurement. By design, such a measurement depends on the different elements of the analysis. Therefore, each of them must be studied in order to identify those affecting the mass extraction and account for them in the final results. This review of the analysis is at the heart of \Sec\ref{sec:SystStudy}. Finally, this chapter closes with a summary of the different systematic biases and associated uncertainties, and a discussion of the final results in \Sec\ref{sec:FinalResultsCPT}.
\section{Introduction}
\label{sec:IntroductionCPT}
As discussed in \Sec\ref{subsec:Theory}, the Standard Model is built upon a set of symmetries, each being either discrete -- such as the combination of the charge conjugation (C), parity (P) and time reversal (T), known as the CPT transformation -- or continuous -- for example, the Lorentz transformations that include rotations and boosts. In particular, the Lorentz and CPT symmetries are connected by the so-called CPT theorem, which establishes that any unitary, local, Lorentz-invariant quantum field theory must be CPT invariant \cite{kosteleckyStatusCPT1998}. Consequently, a violation of CPT would imply the breaking of the Lorentz symmetry, and vice versa\footnote{In fact, another option exists: to allow for CPT violation, either the Lorentz symmetry must be broken -- as in the case of string theory \cite{kosteleckySpontaneousBreakingLorentz1989} or the Standard-Model Extension \cite{colladayLorentzviolatingExtensionStandard1998} -- or some of the other additional assumptions of the CPT theorem must be dropped, namely the energy positivity \cite{abersDiseasesInfiniteComponentField1967}, local interactions \cite{carruthersIsospinSymmetryTCP1968}, finite spin \cite{oksakInvalidityTCPtheoremInfinitecomponent1968}, etc \cite{lehnertCPTSymmetryIts2016, greenbergCPTViolationImplies2002}.} \cite{sozziTestsDiscreteSymmetries2019}. Another implication involves the relation between the properties of matter and antimatter: since the charge conjugation links particles to antiparticles, the CPT symmetry imposes that they share the same invariant mass, mass spectra, lifetime, coupling constants, etc \cite{lehnertCPTSymmetryIts2016}. Most of the experimental checks of CPT invariance stem from this last point, which imposes several constraints on the anti-particle properties. \\
The Particle Data Group (PDG) \cite{particledatagroupReviewParticlePhysics2022} compiles a large variety of CPT tests from many experiments and with different degrees of precision; so far, no CPT violation has been observed. The most stringent test involves the \rmKzero-\rmAKzero mixing process, which depends on the mass and lifetime differences of these two states. In this way, assuming no other source of CPT violation in the decay of neutral kaons, these two quantities have been bounded \cite{particledatagroupReviewParticlePhysics2022, angelopoulosK0OverlineK1999} to
\begin{equation}
2 \frac{\mid m_{\rmKzero} - m_{\rmAKzero} \mid}{m_{\rmKzero} + m_{\rmAKzero}} < 6 \times 10^{-19} \quad , \quad 2 \frac{\mid \Gamma_{\rmKzero} - \Gamma_{\rmAKzero} \mid}{\Gamma_{\rmKzero} + \Gamma_{\rmAKzero}} = (8 \pm 8) \times 10^{-18}.
\end{equation}
These indirect limits are much stronger than the ones extracted from direct tests. For example, in the hyperon sector, the precision on the relative mass difference is typically of a few $10^{-5}$. In the latter case, it should be mentioned that there is still some room for improvement, most particularly concerning the mass difference measurements between particle and anti-particle in the multi-strange baryon sector. The only tests of this kind date back to 2006 \cite{abdallahMassesLifetimesProduction2006} for the \rmXiM and \rmAxiP, and to 1998 \cite{chanMeasurementPropertiesOverline1998} for the \rmOmegaM and \rmAomegaP. The former was achieved by exploiting 3.25 million hadronic decays of the \rmZzero recorded by the DELPHI detector at LEP-1; the latter was obtained with the E756 spectrometer at Fermilab, using an 800-\gmom proton beam on a beryllium target. However, both studies suffer from low statistics: approximately 2500 (2300) reconstructed \rmXiM (\rmAxiP) and about 6323 (2607) reconstructed \rmOmegaM (\rmAomegaP) were used.\\
\begin{table}[t]
\centering
\begin{tabular}{>{\centering\arraybackslash}b{1.5cm}@{\hspace{0.3cm}} >{\centering\arraybackslash}b{1.75cm}@{\hspace{0.3cm}} >{\centering\arraybackslash}b{2.85cm}@{\hspace{0.3cm}} >{\centering\arraybackslash}b{3.6cm}@{\hspace{0.3cm}} >{\centering\arraybackslash}b{2.5cm}@{\hspace{0.3cm}} >{\centering\arraybackslash}b{1cm}@{\hspace{0.3cm}}}
\noalign{\smallskip}\hline\noalign{\smallskip}
Particle & Quark content & Mass (\mmass) & Relative mass difference & Dominant decay channel & B.R.\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmKzeroS & $d \bar{s}$ & $497.611 \pm 0.013$ & $< 6 \times 10^{-19}$ & \piPlus \piMinus & 69.20\%\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmLambda (\rmAlambda) & $u d s$ ($\bar{u}\bar{d}\bar{s}$) & $1115.683 \pm 0.006$ & $\left(-0.1 \pm 1.1\right) \times 10^{-5}$ & \proton \piMinus (\pbar \piPlus) & 63.9\% \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmXiM (\rmAxiP) & $dss$ ($\bar{d}\bar{s}\bar{s}$) & $1321.71 \pm 0.07$ & $\left(-2.5 \pm 8.7\right) \times 10^{-5}$ & \rmLambda \piMinus (\rmAlambda \piPlus) & 99.9\% \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmOmegaM (\rmAomegaP) & $sss$ ($\bar{s}\bar{s}\bar{s}$) & $1672.45 \pm 0.23$ & $\left(-1.44 \pm 7.98\right) \times 10^{-5}$ & \rmLambda \rmKminus (\rmAlambda \rmKplus) & 67.8\%\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{A few characteristics, as of 2023, of the \rmLambda, \rmXi, \rmOmega hyperons and the \rmKzeroS meson: quark content, mass, relative mass difference values with their associated uncertainties, dominant decay channel as well as the corresponding branching ratio \cite{particledatagroupReviewParticlePhysics2022}.}\label{tab:V0CascPDGMass}
\end{table}
In comparison, all the pp collisions at a centre-of-mass energy of 13 \tev collected by ALICE throughout the LHC Run-2 contain about 2 500 000 \rmXi and 133 000 \rmOmega, with little background. Therefore, in this thesis, the measurement of the mass differences between \rmXiM and \rmAxiP, and between \rmOmegaM and \rmAomegaP, is performed. It relies on data samples much larger than those exploited previously. These direct measurements of the mass difference should offer a test of the CPT invariance at an unprecedented level of precision in the multi-strange baryon sector. The absolute masses are updated as well, with a precision substantially better than the past measurements currently listed in the PDG and used in the calculation of the world-average values. The latter are presented in \tab\ref{tab:V0CascPDGMass}.
Furthermore, concerning the \rmLambda hyperon and the \rmKzeroS meson, the PDG quotes a precision of a few \kmass on the mass value, and of about $1 \times 10^{-5}$ on the relative mass difference\footnote{This only concerns the relative mass difference between \rmLambda and \rmAlambda. As mentioned above, this quantity is smaller by fourteen orders of magnitude in the case of \rmKzero.}. Abundantly produced, these two hadrons also exhibit a most valuable feature in the context of this thesis: both decay into a V0 in their dominant decay channel, and so can be identified in a similar manner to cascades using topological reconstruction. For those two reasons -- high precision on the PDG mass values, and a decay topology similar to that of cascades -- the analysis is reproduced on \rmLambda and \rmKzeroS, both being used as a benchmark for the measurement.\\
In the following, the term \textit{mass difference} always refers to the \emph{relative} one -- unless indicated otherwise --, namely the mass \emph{difference} over the mass \emph{average}, $2 \left(\mMassPart{part.} - \mMassApart{part.} \right)/\left(\mMassPart{part.} + \mMassApart{part.}\right)$.
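As a minimal numerical illustration of this observable (a sketch, not part of the analysis code; the function name is hypothetical):

```python
def relative_mass_difference(m_particle, m_antiparticle):
    """Relative mass difference: 2*(m - m_bar) / (m + m_bar).

    Both masses must be given in the same units; the result is
    dimensionless and vanishes if CPT-imposed mass equality holds.
    """
    return 2.0 * (m_particle - m_antiparticle) / (m_particle + m_antiparticle)

# Identical particle and anti-particle masses yield exactly zero:
delta = relative_mass_difference(1321.71, 1321.71)  # in MeV/c^2
```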
\section{Data samples and event selection}
\label{sec:DataSampleCPT}
\subsection{The data samples}
\label{subsec:DataSamples}
All the data samples employed for this measurement originate from the second campaign of data taking, the LHC Run-2. These samples comprise different collision systems at various energies, mainly pp collisions at \sqrtS = 13 \tev and Pb-Pb collisions at \sqrtSnn = 5.02 \tev. Based on the elements in \Sec\ref{subsec:HyperonAndALICE}, the analysis exploits the pp collisions as they provide a less dense collision environment, expected to be easier to reconstruct and thus more controllable. All these pp events have been collected during three data taking periods: between April and October 2016, May and November 2017, and April and October 2018 (\Sec\ref{subsec:acceleratorprogramme}, \tab\ref{tab:LHCRunProgramm}).
Considering the target precision on the mass and mass difference values, it is crucial to have a fine comprehension of the data reconstruction, to keep it well under control. For that reason, the analysis uses data in ESD format, as they contain all the information related to event building, thus offering the possibility to replay \textit{offline} the V0 and cascade vertexings/formations. As mentioned in \Sec\ref{subsubsec:DataFormats}, the first full reconstruction cycle (\Sec\ref{subsubsec:computingmodel}), performed right after the recording of the data, produces ESD files labelled as \textit{pass-1}. Since then, other reconstruction cycles have been carried out, each iteration bringing its share of improvements or fixes. The events analysed for this measurement originate from the second reconstruction cycle, the \emph{pass-2}, which offers better tracking performance: the same, consistent version of the reconstruction software over all the data taking periods -- leading to more uniform performance --, better SPD and TPC alignments, improved TPC reconstruction, and a finer description of the distortions within the TPC gas.
Each period consists in fact of dozens or hundreds of \textit{runs}, corresponding to sequences of events recorded in an uninterrupted manner\footnote{Throughout the data taking, the data collection is occasionally interrupted, \ie the run is stopped. This usually occurs when a detector encounters an error that cannot be fixed while collecting data. Broadly speaking, a period regroups a set of runs that have been recorded within the same data taking conditions.}. The lists of runs appropriate for physics analysis are defined by the ALICE Data Preparation Group (DPG). As its name suggests, the latter oversees the preparation, reconstruction and quality assurance of both collected and simulated data, as well as the upkeep of the analysis tools, including the event and track selections \cite{alicecollaborationALICEDataPreparation2023}. The list of runs employed in this study follows the DPG's recommendations for an analysis using central barrel detectors and requiring hadron PID. For a run to be in that list, all the detectors related to tracking and PID must be operational -- \ie the SPD, SDD, SSD (ITS), TPC and TOF --, as well as those in charge of triggering, namely the V0 and T0. Note that this does not mean that the PID performances are optimal, nor that the full acceptance of each detector is covered.\\
Besides the real data sample, the measurement also relies on simulated data in order to estimate and optimize the performances of the analysis. To each run corresponds its simulated counterpart, anchored on pass-2 data, as described in \Sec\ref{subsubsec:MCData}. All the exploited MC productions employ \Pythiaeight (version 8.2, tune: Monash 2013) as event generator. For the transport and interaction with the material of the ALICE detector, most of them use \GeantThree; although \GeantFour describes hadronic interactions at very low momentum more accurately and is better maintained, only a few simulations rely on it, because of its higher consumption of computing resources \cite{barendsGeant4ValidationStudy2017}.
Since both abundant (\rmKzeroS, \rmLambda and to a certain extent, \rmXi) and rare\break species (\rmXi and \rmOmega) are being studied, one may resort to two kinds of simulations: general-purpose MC productions for the former, and enriched MC productions for the latter. Here, the enriched simulations have been obtained by selecting the events that include, at least, a \rmKzeroS, \rmLambdaPM, \rmXiPM or \rmOmegaPM in $\abspseudorap < 1.2$. It turns out that most of the studies carried out in the present analysis use the latter simulations because i)~they are enriched in strangeness, ii)~they cover all the periods of the considered LHC Run-2 data, and iii)~they use \GeantFour.
Furthermore, this analysis also makes use of the track references in the simulation. As mentioned in \Sec\ref{subsubsec:MCData}, these correspond to the MC information of the considered track at the location where it crosses a given detection plane. Thereby, they allow for comparing the reconstructed track properties with the actual/generated ones at any point along the particle trajectory\footnote{Strictly speaking, this comparison cannot be done at any point, since the track reference is only available where the particle traverses a sensitive volume.}. Although the track references are effectively stored for only 10\% of the production\footnote{This is done in order to spare some disk space.}, this comparison proves invaluable to control the tracking in ALICE.\\
In total, the exploited data sample counts about 2.6 billion minimum bias events at \sqrtS = 13 \tev, and approximately 600 million events in the associated MC productions.
\subsection{The event selection}
\label{subsec:EventSelection}
As mentioned in \Sec\ref{subsec:TriggerSystem}, the analysis focuses on minimum-bias and/or high-multiplicity events. More precisely, the respective trigger configurations correspond to MB$_{\rm AND}$ and/or HM$_{\rm VZERO}$. Not all the events passing these trigger selections are considered; additional cuts are applied in order to retain only those of \say{good} quality, suitable for a physics analysis. \\
During the data acquisition (DAQ), the event-builder performs the event building based on the sub-events from all contributing detectors. It may happen, however, that the output of a detector cannot be transmitted due to the associated data channel being closed\footnote{There are different reasons for the data channel to be closed. At the beginning or the end of each run, a specific procedure is performed on all detectors in order to effectively initiate the start or stop of the run. In particular, the \say{End Of Run} procedure has to close all the data channels connecting the event-builder and the sub-detectors -- \ie the GDCs and LDCs respectively (\Sec\ref{subsec:TriggerSystem}) --, but such a termination can occur sooner, in the case of a connection time-out for example.} \cite{alicecollaborationTriggerDataAcquisition2004}. The event-builder still reconstructs the event, although it is tagged as \say{incomplete DAQ} due to the missing information. Such events are rejected in the present work.\\
There exist three types of reconstructed primary vertex in ALICE, from the highest to the lowest quality: one estimated using the global ITS-TPC tracks (\Sec\ref{subsubsec:FinalVertexDet}), another based on the SPD tracklets (\Sec\ref{subsubsec:PreliminaryVertex}), and the last one built from the TPC standalone tracks in a similar way to the first. By default, only the \say{best} available reconstructed primary vertex is considered.
Nevertheless, to ensure that the event has a vertex of sufficiently good quality, the analysis relies exclusively on the first two aforementioned primary vertices. This means requiring the presence of, at least, the one reconstructed using tracklets\footnote{As mentioned in \Sec\ref{subsubsec:PreliminaryVertex}, the event cannot be built without the primary vertex based on SPD tracklets. Hence, by construction, the presence of such a vertex is guaranteed in the event.}. Moreover, the resolution of the latter in the longitudinal direction must not exceed 0.25 \cm. In cases where both the SPD tracklet and global ITS-TPC track vertices are available, their positions along the beam axis must coincide within a 0.5-\cm window.
As a prerequisite for guaranteeing a uniform reconstruction efficiency, particles must remain within the acceptance of all the central detectors involved in their reconstruction, that is $\abspseudorap < 0.9$. For particles originating from the interaction point, this condition implies a constraint on the longitudinal position of the primary vertex: the absolute distance between the interaction point and the centre of ALICE should be below 10 \cm along the beam axis\footnote{Note that there is no selection of such nature concerning the \emph{transverse} position of the primary vertex, except that it must be located inside the beam pipe.}. \\
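To make these vertex criteria concrete, they can be sketched as a single acceptance function (a minimal illustration with hypothetical argument names, not the actual analysis-framework code):

```python
def accept_primary_vertex(has_spd_vertex, spd_z_resolution_cm,
                          vertex_z_cm, track_vertex_z_cm=None):
    """Sketch of the primary-vertex quality selection; lengths in cm."""
    # An SPD-tracklet vertex must be present...
    if not has_spd_vertex:
        return False
    # ...with a longitudinal resolution not exceeding 0.25 cm.
    if spd_z_resolution_cm > 0.25:
        return False
    # The interaction point must lie within 10 cm of the centre of ALICE
    # along the beam axis.
    if abs(vertex_z_cm) > 10.0:
        return False
    # If an ITS-TPC track vertex also exists, both z positions must
    # coincide within a 0.5 cm window.
    if track_vertex_z_cm is not None and abs(vertex_z_cm - track_vertex_z_cm) > 0.5:
        return False
    return True
```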
A key element of the event quality concerns the pile-up level. Pile-up occurs when there are two or more collisions coming from the same bunch crossing -- the \textit{in-bunch} pile-up -- and/or from different bunch crossings occurring within the readout time of the detectors -- the so-called \textit{out-of-bunch} pile-up. One approach to remove both types of pile-up consists in rejecting events with multiple reconstructed primary vertices. This selection depends on the nature of the best primary vertex available.
\begin{itemize}
\item[$\bullet$] If it is the one reconstructed using ITS-TPC tracks, the event selection algorithm checks the presence of another primary-like vertex of reasonably good quality ($\rmChiSquareNDF < 5$, with $NDF$ the number of degrees of freedom), formed out of at least five tracks, and separated from the first one by more than $15 \sigma$\footnote{Here, $\sigma$ denotes the uncertainty on the distance between the two vertices.}. If such vertex exists, the event is discarded.
\item[$\bullet$] Otherwise, it corresponds to the one built from SPD tracklets. To maximise the selection efficiency, the cuts adapt to the tracklet multiplicity. Hence, if a second vertex is found to be away from the first one by more than 0.8 \cm along the beam axis, with at least three, four or five associated tracklets for a total number of reconstructed tracklets (\rmNTracklet) below 20, $20 < \rmNTracklet \leq 50$ and $\rmNTracklet > 50$ respectively, then the event is rejected.
\end{itemize}
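The multiplicity-dependent SPD pile-up tagging above can be summarised by a short helper (a sketch using the quoted cut values; the boundary convention at exactly 20 tracklets is an assumption, and the function names are illustrative):

```python
def min_contributors_for_pileup_vertex(n_tracklets):
    """Minimum number of tracklets required for a second SPD vertex
    to flag the event as pile-up, as a function of the total tracklet
    multiplicity."""
    if n_tracklets < 20:
        return 3
    elif n_tracklets <= 50:
        return 4
    return 5

def is_spd_pileup(n_tracklets, second_vtx_dz_cm, second_vtx_n_tracklets):
    """True if the event should be rejected: a second SPD vertex lies
    more than 0.8 cm away along the beam axis, with enough tracklets."""
    return (abs(second_vtx_dz_cm) > 0.8 and
            second_vtx_n_tracklets >= min_contributors_for_pileup_vertex(n_tracklets))
```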
Along the same line, the two innermost layers of the ITS can help to identify the remaining beam-induced background events -- those that have not been removed by the MB$_{\rm AND}$ trigger selection -- and pile-up events. As mentioned in \Sec\ref{subsubsec:PreliminaryVertex}, a tracklet is formed out of a pair of clusters found in the two SPD layers, separated by an angle of 0.01 radian at most. Therefore, if the number of clusters increases, so does the number of reconstructed tracklets. However, in the case of a beam-gas event, there are many clusters but only a small number of tracklets can be formed using the previous definition. In pile-up events, only the tracklets associated with the primary vertex are considered; for that reason, the number of clusters should be larger than expected at such a tracklet multiplicity \cite{alicecollaborationALICEPhysicsForum2016}. In this way, based on this correlation between the numbers of SPD clusters and tracklets, the remaining events flagged as background or pile-up are rejected. \\
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{Figs/Chapter5/EventSelection.eps}
\caption{Fraction of rejected events in the present data sample for each event selection independently of the others: trigger selections (MB$_{\rm AND}$ and/or HM$_{\rm VZERO}$), incomplete DAQ, consistency between the global track and SPD tracklet vertices, longitudinal position of the primary vertex ($\mid \Delta z \mid < 10 $ \cm), pile-up removal for SPD tracklet and ITS-TPC track vertices, correlation between SPD tracklets and clusters.}
\label{fig:EvtSelection}
\end{figure}
\Fig\ref{fig:EvtSelection} provides the fraction of rejected events as a function of the above selections in pp collisions at \sqrtS = 13 \tev.\\
\section{Analysis of the hyperon masses}
\label{sec:AnalysisOfHyperonMasses}
\subsection{Track selections}
\label{subsec:TrackSelections}
The identification of V0s and cascades strongly depends on the reconstruction quality of the daughter tracks, and more precisely on their momentum resolution and trajectory. For that reason, the strange particle reconstruction relies exclusively on ITS-TPC combined tracks, since they offer the best momentum resolution as discussed in \Sec\ref{subsubsec:TrackReco} and shown in \fig\ref{fig:MomResolution}. In order to ensure an excellent momentum resolution as well as a fine estimation of the particle trajectory, various selection criteria are applied on the daughter tracks.\\
The analysis concentrates exclusively on tracks within the pseudo-rapidity region $\abspseudorap < 0.8$. The latter corresponds to the acceptance volume common to all the central detectors, which provides a uniform reconstruction efficiency. Moreover, any track containing ITS and/or TPC shared clusters is rejected, as these potentially correspond to wrongly assigned clusters that could bias the tracking quality.
Tracks belonging to a \textit{kink} vertex are discarded from the analysis, as they most certainly do not originate from a cascade decay and thus represent an additional source of combinatorial background. A kink usually appears when a charged particle decays into a neutral and a charged particle, such as $\Kplusmin \rightarrow \rmNeutrinoMu \muPlusMinus$. The neutral daughter being undetected, kinks are identified by forming pairs of tracks that intersect in space at a large angle and carry the same electric charge.
Each track must have passed the final refit in the TPC. This means that its parameters have been estimated successfully in the TPC during the third stage of the tracking, when the track is re-propagated inwards to its distance of closest approach to the primary vertex (\Sec\ref{subsubsec:TrackReco}). To guarantee a good momentum resolution and a stable particle identification (PID) based on the energy deposit (\dEdx) in the TPC, the tracks need to be associated with at least 70 of the 159 TPC readout pad rows. These selections eliminate the contribution of short tracks and, incidentally, of pairs of tracks formed out of the clusters of a single actual particle.\\
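Taken together, these track-level requirements amount to a simple filter. The following sketch uses hypothetical field names, not the actual reconstruction-framework API:

```python
def accept_daughter_track(eta, has_tpc_refit, n_crossed_rows,
                          n_shared_clusters, is_kink_daughter):
    """Sketch of the daughter-track quality selection described above."""
    return (abs(eta) < 0.8 and          # within the central acceptance
            has_tpc_refit and           # final TPC refit successful
            n_crossed_rows >= 70 and    # at least 70 of 159 TPC pad rows
            n_shared_clusters == 0 and  # no shared ITS/TPC clusters
            not is_kink_daughter)       # no kink-vertex tracks
```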
The reconstruction of V0s and cascades presented in \chap\ref{chap:V0CascReconstruction} does not resort to any kind of selection on the nature of the daughter particles, apart from their electric charge. This \textit{de facto} yields an outstanding amount of background candidates. One way of suppressing the latter, at a minimal cost in terms of signal candidates, consists in using the PID information provided by the TPC. In practice, the idea is to reject every association that involves tracks inconsistent with the expected identities for either a \rmKzeroS, \rmLambdaPM, \rmXiPM or \rmOmegaPM decay.
As explained in \Sec\ref{subsubsec:TPC}, a track can be labelled as a pion, proton or kaon by making use of the PID estimator in \eq\ref{eq:PIDEstimator}, \Nsigma, which evaluates the difference between the measured \dEdx and the one expected under a given particle mass hypothesis, in units of the relative resolution. The separation power of this estimator evolves with the particle momentum, which, in turn, influences the selection threshold and has some implications in terms of purity and efficiency: the tighter the selection on \Nsigma, the higher the purity, but at the price of a smaller efficiency; conversely, a looser cut on \Nsigma deteriorates the purity in favour of a higher efficiency.
The identification strategy adopted here consists in selecting only the tracks compatible with their expected mass hypothesis within \Nsigma = $\pm 3$ at most. This selection is applied to \emph{each} decay daughter, irrespective of its momentum or that of the mother particle. Considering the \rmXiM or \rmOmegaM case, this imposes that:
\begin{itemize}
\item[$\bullet$] the bachelor track must be consistent with the \rmPiMinus or \rmKMinus mass hypothesis, in the case of \rmXiM or \rmOmegaM respectively,
\item[$\bullet$] the positive track needs to be compatible with a proton hypothesis,
\item[$\bullet$] and the negative track has to agree with the energy-loss band of the pion.
\end{itemize}
In the case of \rmAxiP or \rmAomegaP, one needs to swap the electric charges of the decay daughters, namely the positive track needs to be compatible with a pion hypothesis and the negative track with an anti-proton hypothesis. For the \rmKzeroS, both positive and negative tracks should be compatible with the pion hypothesis.
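This \Nsigma-based strategy can be illustrated by a small helper acting on precomputed TPC \Nsigma values (a sketch with assumed inputs, not the actual PID-framework code):

```python
def accept_cascade_pid(nsigma_bachelor, nsigma_pos, nsigma_neg,
                       max_nsigma=3.0):
    """Sketch of the TPC PID selection: each daughter track must lie
    within |n_sigma| <= 3 of its expected species band.

    The nsigma_* arguments are assumed to be computed under the expected
    mass hypotheses, e.g. pi/K for the bachelor, and p and pi for the
    V0 daughters of a Xi- or Omega- candidate.
    """
    return all(abs(ns) <= max_nsigma
               for ns in (nsigma_bachelor, nsigma_pos, nsigma_neg))
```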
\subsection{V0s and cascades selections}
\label{subsec:V0CascSelections}
\subsubsection{Topological and kinematic selections}
Once the events and tracks have been selected, the topological reconstruction of V0s and cascades comes into play, as explained in \chap\ref{chap:V0CascReconstruction}. However, not all the candidates are considered in the analysis. As suggested in \Sec\ref{subsec:HyperonAndALICE}, ALICE is well suited for studying hyperons but only at mid-rapidity. This means that the V0s and cascades are reconstructed in the rapidity window $\absrap < 0.5$.
\begin{table}[t]
\centering
\begin{tabular}{c|c|c}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Candidate variable & Selections \rmLambdaPM & Selections \rmKzeroS \\
\noalign{\smallskip}\hline \noalign{\smallskip}
V0 \pT interval (\gmom) & \multicolumn{2}{c}{1 < \pT < 5} \\
V0 rapidity interval & \multicolumn{2}{c}{\absrap < 0.5} \\
Competing mass rejection (\gmass) & > 0.010 & > 0.005 \\
MC association (MC only) & \multicolumn{2}{c}{Correct identity assumption} \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\bf Track variable & Selections \rmLambdaPM & Selections \rmKzeroS \\
\noalign{\smallskip} \hline \noalign{\smallskip}
Pseudo-rapidity interval & \multicolumn{2}{c}{\abspseudorap < 0.8} \\
TPC refit & \multicolumn{2}{c}{\CheckGr} \\
Nbr of crossed TPC readout rows & \multicolumn{2}{c}{ > 70} \\
$\Nsigma^{\rm TPC}$ & \multicolumn{2}{c}{< 3} \\
\multirow{ 2}{*}{Out-of-bunch pile-up rejection} & \multicolumn{2}{c}{at least one track with} \\
& \multicolumn{2}{c}{ITS-TOF matching} \\
Anterior ITS cluster rejection & \multicolumn{2}{c}{> 1 $\sigma_{\rm R}$} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Topological variable & Selections \rmLambdaPM & Selections \rmKzeroS \\
\noalign{\smallskip}\hline \noalign{\smallskip}
V0 decay radius (\cm) & \multicolumn{2}{c}{> 0.5}\\
V0 lifetime (\cm) & \multicolumn{2}{c}{< 3 $\times$ \cTau}\\
V0 cosine of pointing angle & \multicolumn{2}{c}{> 0.998}\\
DCA proton to prim. vtx (\cm) & > 0.06 & - \\
DCA pion to prim. vtx (\cm) & \multicolumn{2}{c}{> 0.06} \\
% DCA V0 to prim. vtx (\cm) & < 1 & < 0.06 \\
DCA between V0 daughters (std dev.) & \multicolumn{2}{c}{< 1} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Summary of the topological and track selections, as well as the associated cut values, used in the reconstruction of \rmLambdaPM and \rmKzeroS in pp events at \sqrtS = 13 \tev. The \textit{competing mass rejection} refers to the removal of the background contamination from other mass hypotheses (\Sec\ref{subsubsec:InvariantMassSelection}). In the \rmLambdaPM case, this consists in comparing the invariant mass under the \rmPiPlus\rmPiMinus assumption with the PDG mass of the \rmKzeroS, that is the quantity $|\mInv[\rm hyp.\ \rmKzeroS] - \mPDG[\rmKzeroS]|$. When reconstructing \rmKzeroS candidates, the selection variable becomes $|\mInv[\rm hyp.\ \rmLambda] - \mPDG[\rmLambda]|$.}\label{tab:V0Selections}
\end{table}
The above selections on the track quality in the TPC exclude the possibility of studying the particles of interest at low momentum ($\pT \leq 0.6$ \gmom). At such \mbox{values}, V0s and cascades decay into very low momentum tracks that can only be reconstructed via the ITS standalone tracking. Even when these tracks reach the TPC, they form short tracks and are thus rejected (\Sec\ref{subsec:TrackSelections}). In order to secure a reasonably good momentum resolution on the decay daughters, this analysis only considers candidates from 1 to 5 \gmom. On the one hand, \eq\ref{eq:Gluckstern} indicates that the momentum resolution deteriorates at low momentum ($\pT \leq 1$ \gmom) due to the relatively \say{short} track length, \say{small} number of clusters and the dominant contribution of multiple scattering. On the other hand, at high \pT ($\pT \geq 5$ \gmom), the resolution also degrades as a consequence of the less pronounced track curvature.\\
\begin{table}[p]
\centering
\begin{tabular}{c|c|c}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Candidate variable & Selections \rmXiPM & Selections \rmOmegaPM \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Cascade \pT interval (\gmom) & \multicolumn{2}{c}{1 < \pT < 5} \\
Cascade rapidity interval & \multicolumn{2}{c}{\absrap < 0.5} \\
Competing mass rejection (\gmass) & - & > 0.008 \\
MC association (MC only) & \multicolumn{2}{c}{Correct identity assumption} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Track variable & Selections \rmXiPM & Selections \rmOmegaPM \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Pseudo-rapidity interval & \multicolumn{2}{c}{\abspseudorap < 0.8} \\
TPC refit & \multicolumn{2}{c}{\CheckGr} \\
Nbr of crossed TPC readout rows & \multicolumn{2}{c}{ > 70} \\
$\Nsigma^{\rm TPC}$ & \multicolumn{2}{c}{< 3} \\
\multirow{ 2}{*}{Out-of-bunch pile-up rejection} & \multicolumn{2}{c}{at least one track with} \\
& \multicolumn{2}{c}{ITS-TOF matching} \\
Anterior ITS cluster rejection & \multicolumn{2}{c}{> 1 $\sigma_{\rm R}$} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Topological variable & Selections \rmXiPM & Selections \rmOmegaPM \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{V0}} \\
V0 decay radius (\cm) & > 1.2 & > 1.1\\
V0 cosine of pointing angle & \multicolumn{2}{c}{> 0.97}\\
|$m$(V0) - \mPDG[\rmLambda]| (\gmass) & \multicolumn{2}{c}{< 0.008} \\
DCA proton to prim. vtx (\cm) & \multicolumn{2}{c}{> 0.03} \\
DCA pion to prim. vtx (\cm) & \multicolumn{2}{c}{> 0.04} \\
DCA V0 to prim. vtx (\cm) & \multicolumn{2}{c}{> 0.06} \\
DCA between V0 daughters (std dev) & \multicolumn{2}{c}{< 1.5} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{Cascade}} \\
Cascade decay radius (\cm) & > 0.6 & > 0.5 \\
Cascade lifetime (\cm) & \multicolumn{2}{c}{< 3 $\times$ \cTau}\\
DCA bachelor to prim. vtx (\cm) & \multicolumn{2}{c}{> 0.04} \\
DCA between cascade daughters (std dev.) & \multicolumn{2}{c}{< 1.3} \\
Cascade cosine of pointing angle & \multicolumn{2}{c}{> 0.998} \\
Bachelor-proton pointing angle (rad) & \multicolumn{2}{c}{> 0.04} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Summary of the topological and track selections, as well as the associated cut values, used in the reconstruction of \rmXiPM and \rmOmegaPM in pp events at \sqrtS = 13 \tev. The \textit{competing mass rejection} refers to the removal of the background contamination from other cascade hypotheses (\Sec\ref{subsubsec:InvariantMassSelection}).}\label{tab:CascadeSelections}
\end{table}
To further remove the contribution from out-of-bunch pile-up events, at least one of the daughter tracks is required to either have a cluster in the innermost ITS layers\footnote{Technically, the track must have passed the final refit in the ITS and have a hit in one of the two SPD layers.} or match a hit in the TOF. The former exploits the fast readout time of the SPD to limit the pile-up to tracks produced in collisions within~$\pm$~300~\nsec, that is $\pm$ 12 bunch crossings\footnote{Keep in mind that, in ALICE during the LHC Run-2, the average number of collisions per bunch crossing is not about 30-50 as in ATLAS and CMS, nor 1-2 as in LHCb; it remains at the percent level, \ie a low pile-up regime.}; the latter exploits the highly precise timing information of the TOF to identify the bunch crossing from which the particle originates, with an efficiency of approximately 70 to 80\% for intermediate- and high-\pT particles, which drops rapidly at lower momenta due to mismatches \cite{alicecollaborationALICEDPGPileup2021}. This selection has been thoroughly studied in the context of a strange particle production analysis \cite{alicecollaborationMultiplicityDependenceMulti2020}; it was shown that applying this ITS-TOF matching condition to at least one of the decay daughters is sufficient to eliminate most of the remaining pile-up contamination.\\
Moreover, the reconstruction procedure presented in \chap\ref{chap:V0CascReconstruction} corresponds to a so-called \emph{offline} reconstruction: V0s and cascades are formed by combining tracks that have already been reconstructed during the event reconstruction (\Sec\ref{subsec:EventReco}). During the tracking stage, however, there is no way to know \textit{a priori} that these tracks actually originate from a hyperon; they are thus reconstructed like any other track in the event. As a consequence, there is no causality check\footnote{There is, however, a causality check performed in the cascade reconstruction in order to ensure that the V0 decay point does not sit downstream from the cascade decay position.} against assigned ITS clusters anterior to the V0 and/or cascade decays. Due to the possible bias that this might introduce in the invariant mass of the mother particle, all the daughter tracks updated with an ITS cluster \emph{below} the associated decay point by more than~1~$\sigma_{\rm R}$\footnote{$\sigma_{\rm R}$ refers to the resolution on the radial decay position of the V0 or cascade.} are discarded. This requirement applies to both V0 and cascade candidates.
In summary, \tabs\ref{tab:V0Selections} and \ref{tab:CascadeSelections} provide a list of the track and topological selections employed in the reconstruction of V0s and cascades respectively, as well as the numerical cut values. Note the tight cut on the cosine of pointing angle of the cascade candidate; this is discussed later in \Sec\ref{subsec:MassExtraction}.
\subsubsection{Structure in the invariant mass spectrum of cascades}
\label{subsubsec:InvMassStructure}
Among the topological selections listed in \tab\ref{tab:CascadeSelections}, one has not been introduced and discussed in \chap\ref{chap:V0CascReconstruction}, namely the cut on the pointing angle formed by the bachelor and the positive tracks. Contrary to the other selections, this one is not standard in ALICE; it was introduced in 2020 in \cite{silvadealbuquerqueMultistrangeHadronsPb2019}. At that time, a structure in the invariant mass distribution of \rmXi and \rmOmega, similar to the one in \fig\ref{fig:WrongPA}, was observed in Pb-Pb collisions. It turned out that the background bump, between 1.28 and 1.31 \gmass in \figs\ref{fig:XiMinusWrongPA} and \ref{fig:XiPlusWrongPA}, originates from an erroneous track association in the cascade reconstruction.
A V0 decays into a baryon \proton/\pbar and a \rmPiMinus/\rmPiPlus, depending on whether it is a \rmLambda or a \rmAlambda. In the situation where another negative/positive track in the event passes close to the proton/anti-proton, the reconstruction algorithm may interpret this as a V0~decay: this track plays the role of the negative/positive daughter particle of a \rmLambdaPM, and the proton/anti-proton corresponds to its positive/negative daughter particle. Meanwhile, the remaining \rmPiMinus/\rmPiPlus daughter of the actual \rmLambdaPM is combined with other particles, most likely with the previously ill-formed V0. In that case, it acts as the bachelor particle of a cascade decay. In other words, while the actual topology is the one depicted in \fig\ref{fig:WrongV0}, it is reconstructed as a cascade, as illustrated in \fig\ref{fig:TrueV0}.
The analysis in \cite{silvadealbuquerqueMultistrangeHadronsPb2019} investigated different strategies in order to remove this background contamination. In the end, the best option consists in rejecting candidates with a \emph{small} pointing angle for the dummy V0, \ie the pointing angle formed by the V0 made of the bachelor and the proton, as shown in \fig\ref{fig:WrongPACut}.
\begin{figure}[t]
\hspace*{-1.5cm}
\subfigure[]
{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassXiMinus_WrongPA.eps}
\label{fig:XiMinusWrongPA}
}
\subfigure[]
{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassXiPlus_WrongPA.eps}
\label{fig:XiPlusWrongPA}
}
\hspace*{-1.5cm}
\subfigure[]
{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassOmegaMinus_WrongPA.eps}
\label{fig:OmegaMinusWrongPA}
}
\subfigure[]
{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassOmegaPlus_WrongPA.eps}
\label{fig:OmegaPlusWrongPA}
}
\caption{Invariant mass distributions of \rmXiM (a), \rmAxiP (b), \rmOmegaM (c) and \rmAomegaP (d) in pp collisions at \sqrtS~=~13~\tev. These have been obtained using the cuts in \tab\ref{tab:CascadeSelections} (red markers), and also without the bachelor-proton pointing angle selection (black markers). This comparison shows that the latter selection removes a structure in the invariant mass distribution while preserving the population under the peak. Notice the log-scale on the y-axis, which puts into perspective the signal and background levels.}
\label{fig:WrongPA}
\end{figure}
\begin{figure}[t]
\centering
%\hspace*{-2.0cm}
\subfigure[]
{
\includegraphics[width=0.4\textwidth]{Figs/Chapter5/WrongV0.eps}
\label{fig:WrongV0}
}
\subfigure[]
{
\includegraphics[width=0.45\textwidth]{Figs/Chapter5/TrueV0.eps}
\label{fig:TrueV0}
}\\
\subfigure[]
{
\includegraphics[width=0.5\textwidth]{Figs/Chapter5/WrongPACut.png}
\label{fig:WrongPACut}
}
\caption{Illustrations of a \rmLambda decaying into a proton and a pion, with another pion passing close to the proton (a), identified as a cascade decay topology and reconstructed as such (b). (c) Distribution of the pointing angle formed by the bachelor and proton tracks, for true associated \rmXi and for candidates in the background structure of the invariant mass distributions (\say{bump}).}
\label{fig:WrongTopology}
\end{figure}
\subsection{Mass measurement}
\label{subsec:MassExtraction}
\subsubsection{Principles of the mass extraction}
\label{subsubsec:PrinciplesOfMassExtraction}
The sample of candidates passing the above selection criteria contains both true V0s/cascades -- depending on the particle of interest -- and background candidates. Taken individually, they are indistinguishable. The separation of the two can only be achieved statistically, based on the analysis of the invariant mass spectrum.
The invariant mass of each candidate is calculated, as explained in \Sec\ref{subsubsec:CascadeFormation} and \Sec\ref{subsubsec:InvariantMassSelection}, and the candidates are sorted according to their electric charge in order to separate particles from anti-particles. The V0s being electrically neutral, they require a different approach: since the \rmKzeroS decays into two particles of the same nature -- a \rmPiPlus and a \rmPiMinus --, it is hopeless to try separating particles and anti-particles. This is not the case for \rmLambda and \rmAlambda, though. However, it may happen that the same V0 candidate passes both the particle and anti-particle selections in \tab\ref{tab:V0Selections}. To avoid such double-counting, each candidate first goes through the \rmLambda selections. If it satisfies all conditions, it is labelled as a \rmLambda and the next candidate is processed. Otherwise, it is checked against the requirements for a \rmAlambda baryon.
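The double-counting protection described above amounts to an ordered check, which can be sketched as follows. This is a minimal illustration, not the analysis code; the function and the cut predicates are placeholders for the selections in the table.

```python
# Minimal sketch of the double-counting protection for V0s: each candidate is
# tested against the Lambda selections first, and only checked as an
# anti-Lambda if that first test fails.

def classify_v0(candidate, passes_lambda_cuts, passes_antilambda_cuts):
    if passes_lambda_cuts(candidate):
        return "Lambda"       # labelled, move on to the next candidate
    if passes_antilambda_cuts(candidate):
        return "AntiLambda"
    return None               # kept in neither sample

# A candidate satisfying both sets of cuts is counted once, as a Lambda:
print(classify_v0({}, lambda c: True, lambda c: True))   # Lambda
```

The ordering guarantees that a candidate compatible with both hypotheses contributes to exactly one sample.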
On the one hand, most of the background candidates originate from a random association of two or three tracks. Those tracks being uncorrelated, the corresponding invariant mass spectrum should be flat or decreasing with the invariant mass value. On the other hand, the invariant mass of true V0s/cascades should be close to the tabulated mass \mPDG, such that an overpopulated region emerges in the shape of a peak. \Figs\ref{fig:InvMassCascades} show the invariant mass spectra of \rmXi and \rmOmega. One can see that the signal for each species sits on top of a small background.\\
To isolate the signal from the background, a fit of the invariant mass spectra is performed using a sum of two functions: one modelling the signal peak, the other describing the background. Several functions can be considered, as discussed later in \Sec\ref{subsubsec:SignalShape}. In \figs\ref{fig:InvMassCascades}, the peak is represented by a triple Gaussian and the background by an exponential function. Whatever the chosen functions, the fitting procedure relies on the maximum (log-)likelihood method.
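The composition of the fit model (triple Gaussian with a common mean for the peak, plus an exponential background) can be sketched as below. The parameter values in the example are purely illustrative, and the fit itself (maximum-likelihood minimisation) is omitted.

```python
import math

# Sketch of the fit model used for the invariant mass spectra: a triple
# Gaussian with a common mean mu for the peak, plus an exponential background.

def triple_gaussian(m, mu, amplitudes, widths):
    """Sum of three Gaussians sharing the same mean mu."""
    return sum(a * math.exp(-0.5 * ((m - mu) / s) ** 2)
               for a, s in zip(amplitudes, widths))

def fit_model(m, mu, amplitudes, widths, b0, slope):
    """Signal peak plus exponential background."""
    return triple_gaussian(m, mu, amplitudes, widths) + b0 * math.exp(-slope * m)

# At m = mu, the peak term reduces to the sum of the three amplitudes:
print(triple_gaussian(1.32171, 1.32171, (10, 5, 2), (0.001, 0.002, 0.004)))  # 17.0
```

In practice these parameters would be floated in the maximum-likelihood fit; the sketch only encodes the functional form.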
If the procedure converges, the fit provides a measurement of the mass of the considered particle: it corresponds to the centre of the invariant mass peak, given by the position of the maximum of the signal function, denoted $\mu$. The width of the peak -- the parameter $\sigma$ -- provides an estimate of the experimental resolution on the mass. The uncertainties on both quantities come from the errors returned by the fitting procedure.
From these parameters, two regions of interest can be delimited:
\begin{itemize}
\item[$\bullet$] the peak region, containing all the signal\footnote{More precisely, considering the definition of the peak region in this analysis, it should contain approximately 99.99995\% (\ie a $5 \sigma$ significance level) of the true V0s/cascades measured.} and some background, is defined within $\left[ \mu - 5 \sigma ; \mu + 5 \sigma \right]$;
\item[$\bullet$] the side-bands region, solely constituted of background, consists in two bands of the same width\footnote{As a side note: the two side-bands do not need to be of the same size, but it avoids dealing with a scaling factor when comparing their total area to the one in the peak region. Most often, they have different widths because of an asymmetry in the invariant mass distribution, such as the structure reported in \Sec\ref{subsubsec:InvMassStructure} \cite{alicecollaborationProductionLightflavorHadrons2021}.}, surrounding the peak region and covering the range $\left[ \mu - 12 \sigma ; \mu - 7 \sigma \right] \bigcup \left[ \mu + 7 \sigma ; \mu + 12 \sigma \right]$.
\end{itemize}
Hence, the amount of raw signal and background can be evaluated. The peak ($S+B$) and background ($B$) populations are estimated by counting the
number of candidates in their respective regions. The raw signal ($S$) in the peak region is obtained by subtracting the background from the peak population, that is\break $S=(S+B)-B$.
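The counting procedure above can be expressed compactly. The sketch below is illustrative only; it assumes an unbinned list of invariant mass values and uses the region definitions from the text (note that both regions are 10$\sigma$ wide in total, so no rescaling of $B$ is needed).

```python
# Sketch of the raw-signal extraction by counting: the peak region spans
# [mu - 5 sigma, mu + 5 sigma] and the side bands
# [mu - 12 sigma, mu - 7 sigma] U [mu + 7 sigma, mu + 12 sigma].

def raw_signal(masses, mu, sigma):
    peak = sum(1 for m in masses if abs(m - mu) <= 5 * sigma)               # S + B
    bands = sum(1 for m in masses if 7 * sigma <= abs(m - mu) <= 12 * sigma)  # B
    # the two regions have the same total width, so B is used without rescaling
    return peak - bands                                                     # S = (S+B) - B

# Toy sample: 10 entries in the peak region, 5 in the side bands -> S = 5
toy = [1.32] * 10 + [1.32 + 0.008] * 3 + [1.32 - 0.008] * 2
print(raw_signal(toy, mu=1.32, sigma=0.001))   # 5
```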
\begin{figure}[p]
%\centering
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassXiMinus.eps}
\label{fig:XiMinus_TripleGaussian}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassXiPlus.eps}
\label{fig:XiPlus_TripleGaussian}
}
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassOmegaMinus.eps}
\label{fig:OmegaMinus_TripleGaussian}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassOmegaPlus.eps}
\label{fig:OmegaPlus_TripleGaussian}
}
\caption{Invariant mass distributions of \rmXiM (a), \rmAxiP (b), \rmOmegaM (c) and \rmAomegaP (d) hyperons in pp collisions at \sqrtS = 13 \tev. Here, the peak is modelled by a triple Gaussian, and the background by an exponential function. Each distribution comes with an additional panel representing the consistency between the data and the fit model, in the form of a ratio per invariant mass bin. The error bars encompass the uncertainties on both quantities.}
\label{fig:InvMassCascades}
\end{figure}
In \figs\ref{fig:InvMassCascades}, all the fits are of reasonably good quality\footnote{One may argue that, in the case of the \rmXiM, the reduced $\chi^{2}$ is relatively high. However, the comparison of the bottom panels of the \rmXiM and \rmAxiP allows one to conclude that it most likely comes from a slightly worse description of the background.}. The bottom panels show that the data-model discrepancy does not exceed 5\% for the most precise points, \ie those in the peak region. The mass peak sits on a small background: 1~298~838~$\pm$~1202~\rmXiM (1~229~531~$\pm$~1168~\rmAxiP) and 67~210~$\pm$~285~\rmOmegaM (66~199~$\pm$~281~\rmAomegaP) were reconstructed with purities above 90\%, as shown in \tab\ref{tab:FitQuantities}.
\begin{table}[h]
\centering
\begin{tabular}{b{5.35cm}@{\hspace{1cm}} b{2cm}@{\hspace{0.5cm}} b{2cm}@{\hspace{0.5cm}} b{1.5cm}@{\hspace{0.5cm}} b{1.5cm}@{\hspace{0.1cm}}}
\noalign{\smallskip}\hline\noalign{\smallskip}
\bf Particle & \rmXiM & \rmAxiP & \rmOmegaM & \rmAomegaP \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Reduced $\chi^2$ & 2.474 & 1.692 & 1.500 & 1.826\\
Raw signal, $S$ & 1 298 838 & 1 229 531 & 67 210 & 66 199\\
Background, $B$ & 75 209 & 67 328 & 6 784 & 6 231 \\
$S/B$ & 17.3 & 16.4 & 9.91 & 10.63 \\
Purity, $S/(S+B)$ & 94.5\% & 94.2\% & 90.8\% & 91.4\% \\
Signal significance, $S/\sqrt{S+B}$ & 1108 & 1076 & 247 & 246 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{Results from the fit of the invariant mass distributions in \fig\ref{fig:InvMassCascades}, for the overall samples of \rmXiM, \rmAxiP, \rmOmegaM and \rmAomegaP. The table reports the reduced $\chi^{2}$, raw signal, background, $S/B$ ratio, purity and signal significance.}\label{tab:FitQuantities}
\end{table}
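The derived quantities in the table follow directly from $S$ and $B$; as a quick arithmetic cross-check, the \rmXiM column can be reproduced from the quoted raw signal and background:

```python
# Cross-check of the derived fit quantities for the Xi- sample, using the
# raw signal S and background B reported in the table.
S, B = 1_298_838, 75_209

print(round(S / B, 1))                  # 17.3
print(round(100 * S / (S + B), 1))      # 94.5  (purity, in %)
print(round(S / (S + B) ** 0.5))        # 1108  (signal significance)
```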
\subsubsection{Shape of the peak functions}
\label{subsubsec:SignalShape}
Since the mass extraction depends on the peak description, it is crucial to identify functional forms that accurately reproduce its shape. Different functions have been studied in MC simulations, based solely on true V0/cascade candidates. Thus, the invariant mass spectrum contains no background candidates and follows approximately a quasi-Gaussian distribution centred on the injected mass, which usually corresponds to the PDG mass value. The objective here is to define a list of functions that correctly describe the shape of the invariant mass peak and are characterised by a reasonably good reduced $\chi^{2}$. Two types of functional forms are considered: symmetric and asymmetric functions.
\paragraph{Symmetric function:} Due to the detector smearing, the core of the invariant mass distribution exhibits a quasi-Gaussian shape; in that respect, one may favour symmetric functions. The tails of the distribution, however, are usually not Gaussian-like, and thus not well described by this class of functions. This is due to the contribution of particles with different transverse momenta: as the \pT resolution varies with the transverse momentum and relates to the width of the invariant mass peak, the measured distribution consists, in fact, in an infinite sum of invariant mass distributions, each with a different width. Still with the aim of employing a symmetric function, the solution thus consists in taking an infinite sum of Gaussians with a common mean\footnote{A more unusual approach would be to consider an infinite sum of Gaussians, each with a different mean. This would be relevant if the mass measurement were biased, in such a way that the mass changes with momentum for example. In that case, a non-trivial question arises as to what value to take as the final mass measurement. As of today, there is still no clear answer.}. In the present analysis, it has been observed that three Gaussians (\eq\ref{eq:Gaus}) already offer a reasonably good fit quality. Another option is to resort to slightly modified versions of a Gaussian, such that it provides a better description of the tails of the distribution (\eq\ref{eq:ModifiedGaus}).
\begin{itemize}
\item[$\bullet$] \textbf{Triple-Gaussian}:
\begin{equation}
\dNdX{\mInv[]} = A_{1} \cdot \exp \left[ - \dfrac{ (\mInv[] - \mu )^2}{ 2 \sigma_{1}^2} \right] + A_{2} \cdot \exp \left[ - \dfrac{ (\mInv[] - \mu )^2}{ 2 \sigma_{2}^2} \right] + A_{3} \cdot \exp \left[ - \dfrac{ (\mInv[] - \mu )^2}{ 2 \sigma_{3}^2} \right]
\label{eq:Gaus}
\end{equation}
with $A_{1}$, $A_{2}$, $A_{3}$ the amplitudes of the first, second and third Gaussian, $\mu$ the common mean value, and $\sigma_{1}$, $\sigma_{2}$, $\sigma_{3}$ the width of the first, second and third Gaussian\footnote{In case of a fit with a triple-Gaussian function, it is the weighted width that is considered for the definition of the peak and side-bands regions. The weighting factors for $\sigma_{1}$, $\sigma_{2}$, $\sigma_{3}$ are determined based on the relative contribution of each Gaussian in the fit,\break \ie$\sigma^{2} = \frac{A_{1}}{A_{1}+A_{2}+A_{3}} \sigma_{1}^{2} + \frac{A_{2}}{A_{1}+A_{2}+A_{3}} \sigma_{2}^{2} + \frac{A_{3}}{A_{1}+A_{2}+A_{3}} \sigma_{3}^{2}$}.
%
%\item \textbf{Double Gaussian} : it consists in a sum of two Gaussian functions with different parameters but the mean value which is common.
% \begin{equation}
% \dNXdX{\rmXiPM(\rmOmegaPM)}{\mInvIdx{\rmLambdaPM \piPlusMinus (\rmLambdaPM \Kplusmin)}} = A_{1} \cdot \exp \left[ - \dfrac{ (\mInvIdx{\rmLambdaPM \piPlusMinus (\rmLambdaPM \Kplusmin)} - \mu )^2}{ 2 \sigma_{1}^2} \right] + A_{2} \cdot \exp \left[ - \dfrac{ (\mInvIdx{\rmLambdaPM \piPlusMinus (\rmLambdaPM \Kplusmin)} - \mu )^2}{ 2 \sigma_{2}^2} \right]
% \end{equation}\label{eq:DoubleGaus}
% where $A_1$ and $A_2$ are the respective amplitudes of the two Gaussian, $\mu$ corresponds to the center of the peak (common for the two Gaussian), and their widths are denoted as $\sigma_1$ and $\sigma_2$.
%
\item[$\bullet$] \textbf{Modified Gaussian} \cite{atlascollaborationKs02012}:
\begin{equation}
\dNdX{\mInv} = A \cdot \exp \left[ - \frac{1}{2} u^{1 + \frac{1}{1+ 0.5 u}} \right] \quad ; \quad u = \left\lvert \frac{\mInv - \mu }{\sigma} \right\rvert
\label{eq:ModifiedGaus}
\end{equation}
with $A$ the normalisation, $\mu$ the mean, and $\sigma$ the width.\\
\end{itemize}
\paragraph{Asymmetric function:} The previous functions are all different flavours of a Gaussian, and are therefore all symmetric. However, this is not necessarily the case for the tails of the invariant mass distribution. In that case, an asymmetric function seems better suited for describing the peak. Among these is the Bukin function \cite{bukinFittingFunctionAsymmetric2007, nielPreciseMeasurementsCharmed2021}, a modified Novosibirsk distribution, constructed from the convolution of a Gaussian distribution and an exponential one. It is typically used to fit the invariant mass of the \rmJpsi.
\begin{itemize}
\item[$\bullet$] \textbf{Bukin}:
\begin{equation}
\dNdX{\mInv} =
\begin{cases}
A \cdot \exp \left[ \rho_{\rm L} \frac{(u-x_{\rm L})^2}{(\mu-x_{\rm L})^2} - \ln(2) + 4 \cdot \ln(2) \frac{(u-x_{\rm L})}{2 \sigma \sqrt{ 2 \ln 2 }} \cdot \frac{\xi}{\sqrt{\xi^2+1} + \xi} \frac{\sqrt{\xi^2+1}}{(\sqrt{\xi^2+1}-\xi)^2} \right], \ u\leq x_{\rm L} \\
\\
A \cdot \exp \left[ -\ln(2) \cdot \left( \frac{ \ln(1 + 4 \xi \sqrt{\xi^2+1} \frac{u - \mu}{2 \sigma \sqrt{2 \ln 2}}) }{ \ln( 1 + 2 \xi (\xi - \sqrt{\xi^2+1})) } \right)^2 \right], \ x_{\rm L} < u < x_{\rm R} \\
\\
A \cdot \exp \left[ \rho_{\rm R} \frac{(u-x_{\rm R})^2}{(\mu-x_{\rm R})^2} - \ln(2) + 4 \cdot \ln(2) \frac{(u-x_{\rm R})}{2 \sigma \sqrt{ 2 \ln 2 }} \cdot \frac{\xi}{\sqrt{\xi^2+1} + \xi} \frac{\sqrt{\xi^2+1}}{(\sqrt{\xi^2+1}-\xi)^2} \right], \ u \geq x_{\rm R}
\end{cases}
\label{eq:Bukin}
\end{equation}
with
\begin{equation}
x_{\rm L, R} = \mu + \sigma \sqrt{ 2 \ln 2 } \left( \frac{ \xi}{ \sqrt{\xi^2 + 1 } } \mp 1 \right)
\end{equation}
where $u$ coincides with \mInv, $A$ is the normalisation parameter, $\mu$ and $\sigma$ are the mean and the width of the peak, $\xi$ is an asymmetry parameter, and $\rho_{\rm L}$ and $\rho_{\rm R}$ are the left and right exponential tail coefficients \cite{verkerkeRooFitUsersManual2008}.
\item[$\bullet$] \textbf{Double-sided crystal ball} \cite{atlascollaborationSearchResonancesDiphoton2016}:
\begin{equation}
\dNdX{\mInv} =
\begin{cases}
A \cdot \left(\frac{n_{\rm L}}{\alpha_{\rm L} (n_{\rm L} - \alpha_{\rm L}^{2} - u \alpha_{\rm L})}\right)^{n_{\rm L}} \exp \left[ -0.5 \alpha_{\rm L}^{2} \right] , \ u < -\alpha_{\rm L} \\
\\
A \cdot \exp \left[ -0.5 u^{2} \right], \ -\alpha_{\rm L} \leq u \leq \alpha_{\rm R} \\
\\
A \cdot \left(\frac{n_{\rm R}}{\alpha_{\rm R} (n_{\rm R} - \alpha_{\rm R}^{2} + u \alpha_{\rm R})}\right)^{n_{\rm R}} \exp \left[ -0.5 \alpha_{\rm R}^{2} \right] , \ u > \alpha_{\rm R}
\end{cases}
\label{eq:DoubleSidedCrystalBallFunction}
\end{equation}
where $u$ equals $\left(\mInv - \mu\right)/\sigma_{\rm L}$ for $\mInv - \mu < 0$ and $\left(\mInv - \mu\right)/\sigma_{\rm R}$ for $\mInv~-~\mu~>~0$; $A$ is the normalisation parameter, $\mu$ is the peak position, and $\sigma_{\rm L}$ and $\sigma_{\rm R}$ parametrise the positions where the peak starts to follow a power law towards the low and high mass values respectively, with exponents $n_{\rm L}$ and $n_{\rm R}$.
\end{itemize}
At least two functional forms should be associated with each particle for the modelling of the peak: a symmetric one and an asymmetric one. After several tests, it turns out that the functions offering the best description of the invariant mass peak are the triple-Gaussian and the Bukin. In addition, the fit tends to converge more easily with the latter than with the double-sided crystal ball function. Consequently, only these two functions will be considered in the following.
\subsubsection{Shape of the background functions}
\label{subsubsec:BackgroundShape}
The high purity of the data sample stems from the (very) tight cut on the cosine of pointing angle of the cascade candidate in \tab\ref{tab:CascadeSelections}. As a matter of fact, this selection has been tuned to reach such a level of purity. Contrary to the peak shape, the shape of the background is \textit{a priori} less well-known. For that reason, it is essential to control the level of background, and most particularly its profile, such that it can be modelled by one of the expected functional forms.
For the background, different functional forms are considered:
\begin{itemize}
\item \textbf{Constant}: one may suspect the combinatorial background to be \textit{a priori} unstructured. In such case, it should follow a uniform distribution, and thus can be approximated by a constant function.
\item \textbf{Linear}: the previous description can be refined by considering that the number of tracks decreases with momentum. Consequently, the mis-association of low-momentum tracks should dominate the combinatorial background at low invariant mass values, whereas the high values originate from tracks with higher momenta. Hence, the background decreases with the invariant mass value. This decrease may be parametrised, at first order, by a linear function.
\item \textbf{Exponential}: alternatively, the background can also be described by an exponential function.
\item \textbf{Second-order polynomial}: in case the background turns out not to be purely combinatorial but has a physical origin -- for instance, particles produced from interactions with the detector material --, it may have a specific structure that needs to be described by more parameters than in the above functions. To that end, a second-order polynomial is also considered for modelling the background.
\end{itemize}
Since the exploited simulations contain only pure samples of strange hadrons, the study of the most appropriate background shapes for each of the considered particles has to be performed on real data\footnote{As a matter of fact, even if the exploited MC simulations contained some background, there would be no guarantee that it matches the background in real data.}. To obtain an invariant mass distribution consisting only of background candidates, the peak is removed by cutting out all the entries falling in an invariant mass region of $\mPDG \pm 10$ \mmass. The obtained invariant mass spectrum is then fitted with each of the above functional forms, in order to identify those describing the background accurately.
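The peak-excision step can be illustrated with a toy example. The sketch below is an assumption-laden stand-in for the actual procedure: it uses a least-squares polynomial fit (rather than a likelihood fit) for the linear background hypothesis, and the toy spectrum is invented.

```python
import numpy as np

# Toy version of the background-shape study: entries within +-10 MeV/c^2 of
# the PDG mass are cut out, and the remaining side-band spectrum is fitted,
# here with the linear parametrisation via a least-squares polynomial fit.

def fit_sidebands_linear(bin_centres, counts, m_pdg, window=0.010):
    keep = np.abs(bin_centres - m_pdg) > window     # excise the peak region
    slope, intercept = np.polyfit(bin_centres[keep], counts[keep], 1)
    return slope, intercept

# Toy spectrum: linear background with a signal peak sitting near m_pdg
centres = np.linspace(1.26, 1.38, 61)
m_pdg = 1.32171
counts = 1000.0 - 2000.0 * (centres - 1.32)         # pure linear background
counts[np.abs(centres - m_pdg) < 0.005] += 5000.0   # add the signal peak
slope, intercept = fit_sidebands_linear(centres, counts, m_pdg)
print(round(slope))   # -2000 : the background slope is recovered
```

Because the excision window fully contains the injected peak, the fit recovers the underlying background slope exactly in this toy case.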
For \rmKzeroS, \rmLambda, \rmXi and \rmOmega, the best parametrisations of the background turn out to be a linear function and an exponential one. Thereby, only these forms will be considered in the following.\\
In total, there are two functions for modeling the peak, and two functions for the description of the background. All the combinations between these two pairs of functional forms have been tested: the sum of a triple-Gaussian function and an exponential one offers the best description. Therefore, the latter will provide our mass measurement; the other associations of peak and background functions will be used for the study of the systematic uncertainties.
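The retained combination -- a triple-Gaussian peak with a common mean on top of an exponential background -- can be illustrated on a toy spectrum. The sketch below uses SciPy and is independent of the actual analysis framework; the peak position, widths, yields and background parameters are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_gauss(x, mu, s1, s2, s3, a1, a2, a3):
    """Sum of three Gaussians sharing a common mean mu."""
    g = lambda a, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g(a1, s1) + g(a2, s2) + g(a3, s3)

def peak_plus_bkg(x, mu, s1, s2, s3, a1, a2, a3, b0, b1):
    """Triple-Gaussian peak on top of an exponential combinatorial background."""
    return triple_gauss(x, mu, s1, s2, s3, a1, a2, a3) + b0 * np.exp(b1 * (x - 1.32))

# toy invariant-mass spectrum: a peak at 1.32157 GeV/c^2 with three
# resolution components, plus a falling exponential background
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(1.32157, s, n)
                      for s, n in [(0.0015, 30_000), (0.003, 15_000), (0.006, 5_000)]])
bkg = 1.28 + rng.exponential(0.05, 20_000)
counts, edges = np.histogram(np.concatenate([sig, bkg]), bins=160, range=(1.28, 1.36))
centres = 0.5 * (edges[:-1] + edges[1:])

# initial guesses close to the generated values
p0 = [1.3216, 0.0015, 0.003, 0.006, 4000, 1000, 170, 90, -20]
popt, pcov = curve_fit(peak_plus_bkg, centres, counts, p0=p0, maxfev=20_000)
mass, mass_err = popt[0], np.sqrt(pcov[0, 0])
```

Sharing a single mean between the three Gaussian components is what makes the extracted $\mu$ unambiguous, while the widths absorb the momentum-dependent resolution.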
\subsubsection{Correction on the extracted mass}
\label{subsubsec:CorrectionOnTheExtractedMass}
Although the functions in \Sec\ref{subsubsec:SignalShape} describe the invariant mass peak well, the extracted mass does not agree with the PDG mass (see \tab\ref{tab:V0CascPDGMass}), as shown in \figs\ref{fig:InvMassCascades}. This apparent bias may have several origins. It can be due to the way data are processed, which might systematically overestimate the reconstructed mass. The analysis, and particularly the employed selections, may distort the invariant mass distribution, resulting in a mass different from the expected one. The fit procedure could also be the origin of such an inconsistency; for instance, one of the tails may dominate the fit and drive the parameters in a certain direction.
In any case, in order to correct for any bias due to the data processing, the analysis or the fit procedure, an offset is applied to the mass extracted in simulated events such that it coincides with the injected value, which is always set to the corresponding PDG mass in our simulations. This correction is then applied to the measured mass in real data. However, such a correction assumes a good agreement between data and MC. To ensure this, the simulations are re-weighted to match the raw \pT spectrum from the data.
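Schematically, the correction amounts to subtracting the offset observed in MC from the data measurement. A minimal sketch follows, with illustrative numbers close to the \rmXiM case (a data measurement of 1321.925 \mmass, an MC offset of $-0.075$ \mmass with respect to the injected PDG mass of 1321.710 \mmass):

```python
def corrected_mass(measured_data, measured_mc, injected_mc):
    """Remove from the data measurement the mass offset observed in MC,
    where the injected mass is set to the PDG value."""
    mc_offset = measured_mc - injected_mc
    return measured_data - mc_offset

# illustrative Xi- numbers (MeV/c^2)
m_corr = corrected_mass(1321.925, 1321.635, 1321.710)  # -> 1322.000
```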
This re-weighting procedure starts by extracting the raw \pT spectrum in the data. Similarly to the estimation of the amount of raw signal in \Sec\ref{subsubsec:PrinciplesOfMassExtraction}, the latter is obtained by subtracting the \pT spectrum in the side-bands region from the one in the peak region. It is then compared to the injected transverse-momentum distribution of true V0/cascade candidates; the ratio of the \pT spectra in data and MC provides the weighting factors.
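The computation of the weighting factors can be sketched with NumPy histograms as follows (function and variable names are illustrative, not those of the analysis code):

```python
import numpy as np

def reweight_factors(pt_peak, pt_sidebands, pt_mc_truth, bins):
    """Per-pT-bin weights matching the MC spectrum to the raw data spectrum."""
    n_peak, _ = np.histogram(pt_peak, bins=bins)
    n_side, _ = np.histogram(pt_sidebands, bins=bins)
    raw = n_peak - n_side                       # side-band subtraction
    n_mc, _ = np.histogram(pt_mc_truth, bins=bins)
    # bins without MC entries keep a neutral weight of 1
    return np.where(n_mc > 0, raw / np.maximum(n_mc, 1), 1.0)
```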
\begin{figure}[!t]
%\centering
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/RawPtSpectra_Xi.eps}
\label{fig:PtSpectraXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/WeightingFactors_Xi.eps}
\label{fig:WeightFactorsXi}
}
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/RawPtSpectra_Omega.eps}
\label{fig:PtSpectraOmega}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/WeightingFactors_Omega.eps}
\label{fig:WeightFactorsOmega}
}
\caption{On the left: raw \pT spectra of \rmXiM and \rmAxiP (a), and \rmOmegaM and \rmAomegaP (c) hyperons in the data in full markers, and in simulations in open markers. On the right: weighting factors for \rmXiPM (b) and \rmOmegaPM (d), employed to match the \pT spectra in data and MC. The error bars encompass only the statistical uncertainties.}
\label{fig:PtSpectra}
\end{figure}
Once the simulated data have been re-weighted, the mass offset observed in MC with respect to the injected mass is assessed, corrected for, and taken into account in the mass measurement in real data. \Tab\ref{tab:MCMassOffset} presents these corrections as well as the corrected mass values, \ie those measured in real data after correction of the initial offset in MC. From these values derives the (relative) mass difference between particle and anti-particle, given by
\begin{equation}
\frac{\Delta \mu}{\mu}= 2 \cdot \frac{\mu_{\textsc{part.}}-\mu_{\overline{\textsc{part.}}}}{\mu_{\textsc{part.}}+\mu_{\overline{\textsc{part.}}}}.
\label{eq:MassDifference}
\end{equation}
Its (statistical) uncertainty is obtained via propagation of the ones on the mass values, assuming there is no correlation between the particle and anti-particle measurements -- \textit{a priori} correct, since $\mu_{\textsc{part.}}$ and $\mu_{\overline{\textsc{part.}}}$ have been extracted independently\footnote{The facts that i) the particle and anti-particle do not share the same data sample\break (see \Sec\ref{subsubsec:PrinciplesOfMassExtraction}), and ii) the fitting procedure is run separately guarantee the independence of the mass measurements.}--,
\begin{equation}
\sigma_{\Delta \mu /\mu }= 4 \cdot \sqrt{ \left(\frac{-\mu_{\overline{\textsc{part.}}}}{\left(\mu_{\textsc{part.}} + \mu_{\overline{\textsc{part.}}} \right)^{2}}\right)^{2} \sigma_{\mu_{\textsc{part.}}}^{2} + \left(\frac{\mu_{\textsc{part.}}}{\left(\mu_{\textsc{part.}} + \mu_{\overline{\textsc{part.}}} \right)^{2}}\right)^{2} \sigma_{\mu_{\overline{\textsc{part.}}}}^{2} }.
\label{eq:MassDifferenceUncertainty}
\end{equation}
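Equations \ref{eq:MassDifference} and \ref{eq:MassDifferenceUncertainty} translate directly into code; a small sketch (with illustrative function names) reads:

```python
def mass_difference(mu_part, mu_anti):
    """Relative particle/anti-particle mass difference, Delta mu / mu."""
    return 2.0 * (mu_part - mu_anti) / (mu_part + mu_anti)

def mass_difference_error(mu_part, sig_part, mu_anti, sig_anti):
    """Propagated statistical uncertainty on Delta mu / mu, assuming
    uncorrelated particle and anti-particle measurements."""
    s2 = (mu_part + mu_anti) ** 2
    return 4.0 * ((mu_anti / s2) ** 2 * sig_part ** 2
                  + (mu_part / s2) ** 2 * sig_anti ** 2) ** 0.5
```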
\Tab\ref{tab:MCMassDiffOffset} shows the mass difference for \rmXi and \rmOmega, in the data and MC, as well as the corrected value.
\begin{table}[!t]
\centering
\footnotesize
\begin{tabular}{>{\raggedleft\arraybackslash}b{2.5cm}@{\hspace{0.5cm}} >{\raggedleft\arraybackslash}b{2.5cm}@{\hspace{0.5cm}} >{\raggedleft\arraybackslash}b{2.5cm}@{\hspace{0.5cm}} >{\raggedleft\arraybackslash}b{2.5cm}@{\hspace{0.5cm}} >{\raggedleft\arraybackslash}b{2.5cm}@{\hspace{0.5cm}}}
\noalign{\smallskip}\hline\noalign{\smallskip}
\bf Particle & \bf \rmXiM & \bf \rmAxiP & \bf \rmOmegaM & \bf \rmAomegaP \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{5}{l}{(In \mmass)} \\
Offset in data & $0.215 \pm 0.002$ & $0.267\pm 0.002$ & $0.139\pm 0.008$ & $0.123 \pm 0.008$ \\
Offset in MC & $-0.075 \pm 0.003$ & $-0.072\pm 0.003$ & $0.040\pm 0.005$ & $0.027 \pm 0.005$ \\
Corrected mass & $1322.000 \pm 0.003$ & $1322.049 \pm 0.005$ & $1672.549 \pm 0.008$ & $1672.546 \pm 0.008$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{Measurements of the mass offset (the difference between the reconstructed and injected masses) with respect to the PDG value (coinciding with the injected mass in MC) in the data and MC, as well as the final masses of $\Xi^{-}$, $\overline{\Xi}^{+}$, $\Omega^{-}$, $\overline{\Omega}^{+}$ after correction of that offset in MC. The uncertainties on the mass values correspond only to the statistical ones. These measurements have been obtained using the selections in \tab\ref{tab:CascadeSelections}, a triple-Gaussian for modeling the peak and a linear function for the background (in the data only).}\label{tab:MCMassOffset}
\end{table}
\begin{table}[!t]
\centering
% \footnotesize
\begin{tabular}{b{7.5cm}@{\hspace{0.5cm}} b{3cm}@{\hspace{0.5cm}} b{3cm}@{\hspace{0.5cm}}}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Particle & \bf \rmXi & \bf \rmOmega\\
\noalign{\smallskip}\hline \noalign{\smallskip}
Mass difference offset in data ($\times 10^{-5}$) & $3.94 \pm 0.22$ & $-0.97 \pm 0.68$ \\
Mass difference offset in MC ($\times 10^{-5}$)& $-0.23 \pm 0.33$ & $-0.78 \pm 0.43$ \\
Corrected mass difference ($\times 10^{-5}$) & $3.71 \pm 0.22$ & $-0.18 \pm 0.68$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\caption{Measurements of the mass difference in the data and MC, as well as the final mass difference for $\Xi^{\pm}$ and $\Omega^{\pm}$ using the corrected mass values in \tab\ref{tab:MCMassOffset}. The uncertainties on the mass differences correspond only to the statistical ones. These measurements have been obtained using the selections in \tab\ref{tab:CascadeSelections}, a triple-Gaussian for modeling the peak and a linear function for the background (in the data only).}
\label{tab:MCMassDiffOffset}
\end{table}
\section{Study of the systematic effects}
\label{sec:SystStudy}
A study of the systematic effects -- also called \textit{systematic study} in the particle physicist's jargon -- consists in reviewing an analysis by testing its different elements. As its name suggests, it involves identifying the sources of systematic uncertainties that might affect the extracted mass values and their corresponding uncertainties. Usually, this is achieved by repeating the analysis with a few \say{minor} changes, hoping that no effect will be observed in the results. If that is the case, \ie the obtained values are consistent, one could argue that the analysis is free of systematic effects and under control: no additional measures are required. On the contrary, a significant deviation in the results indicates the presence of a systematic effect, which should be treated seriously.
In practice, one needs to define what \say{small} and \say{large} deviations mean. Suppose an analysis is performed in two different ways: the first approach gives the result $a_1$ with an uncertainty $\sigma_1$; the second, $a_2$ with an uncertainty $\sigma_2$. The difference between the results is given by $\Delta = a_1 - a_2$ and the error on that difference by\footnote{The formula given here corresponds, in fact, to the case where two measurements are performed on a set and a subset of the same dataset, which is typically the case here, unless specified otherwise.} $\sigma_{\Delta} = \sqrt{ |\sigma_{1}^{2} - \sigma_{2}^{2} | }$. If the ratio $\Delta/\sigma_{\Delta}$ is greater than a certain threshold value -- denoted \sigmaBarlow and to be defined by the analyser --, this points to a systematic effect that requires further investigation. This approach is known as the \textit{Barlow criterion}.
As in cooking, what separates a good systematic study from a lesser one is the choice of the seasoning, namely the choice of the threshold value. The larger the \sigmaBarlow, the more systematic effects slip under the radar; conversely, the smaller the threshold, the higher the sensitivity to systematic effects. Since the targeted precision on the mass and mass difference values is very high, the systematic effects must be well under control. Therefore, in the context of this analysis, the contribution of a potential source of systematics is considered significant for $\sigmaBarlow\simeq 1$. \\
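The Barlow criterion can be sketched in a few lines (helper names are illustrative; the default threshold reflects the $\sigmaBarlow \simeq 1$ adopted in this analysis):

```python
def barlow_sigma_delta(sigma1, sigma2):
    """Error on the difference for two measurements performed on a set
    and a subset of the same data: sqrt(|sigma1^2 - sigma2^2|)."""
    return abs(sigma1 ** 2 - sigma2 ** 2) ** 0.5

def is_systematic(a1, sigma1, a2, sigma2, threshold=1.0):
    """Flag a systematic effect when |Delta| / sigma_Delta exceeds the
    chosen threshold."""
    sigma_delta = barlow_sigma_delta(sigma1, sigma2)
    if sigma_delta == 0.0:
        return a1 != a2
    return abs(a1 - a2) / sigma_delta > threshold
```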
However, the presence of a systematic effect does not necessarily imply a systematic uncertainty. In fact, there are two possibilities: either a systematic correction can be applied and the error on that correction is quoted as the systematic uncertainty, or the correction may be difficult (or impossible) to derive, in which case the systematic uncertainty has to fully encompass the imprecision induced by the systematic effect.
This treatment of the systematic biases corresponds to the one proposed by Roger Barlow \cite{barlowSLUOLecturesStatistics2000, barlowSystematicErrorsFacts2002}. The following sections present the list of systematic sources studied for this analysis, with their estimated uncertainties or corrections.
\subsection{Topological and track selections}
\label{subsec:SystTopoAndTrackSelections}
\subsubsection{Influence on the mass extraction}
\label{subsubsec:SystTopoMass}
As explained in \Sec\ref{subsec:TopoReco}, the identification of the charged \rmXi and \rmOmega baryons relies on their characteristic cascade decay. The reconstruction of this decay topology revolves around, first, the association of two tracks to form \rmLambda candidates, which are then matched with the remaining secondary tracks. In order to reduce the induced combinatorial background, various topological and kinematic cuts are used. The choice of the employed cut values may obviously be the source of a bias. Such a systematic effect can be revealed by observing how a different set of selections affects the mass and its uncertainty.\\
The standard approach consists in varying each selection individually, while keeping the others at their reference values. Although this allows one to address the bias induced by a given cut, it does not take into account the possible correlations between topological variables. For instance, a higher cut on the cascade decay radius also implies that the \rmLambda daughter decays further away in the detector. To tackle that, one would need to build a matrix containing the correlation factors for each pair of selection variables. Since the cascade identification relies here on a set of fifteen selections, this boils down to determining a symmetric matrix of dimension $15 \times 15$.
\begin{table}[t]
\hspace*{-1.cm}
\begin{tabular}{c|c|c}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Track variable & Variation range & Signal variation \rmXiM (\rmAxiP) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Nbr of crossed TPC readout rows & $> \left[ 70 ; 90 \right]$ & 1\% (1\%)\\
$\Nsigma^{\rm TPC}$ & $<\left[ 1 ; 3 \right] $ & 60\% (60\%)\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Topological variable & Variation range & Signal variation \rmXiM (\rmAxiP) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{V0}} \\
V0 decay radius (\cm) & $> \left[ 1.2 ; 8 \right]$ & 11\% (11\%)\\
V0 cosine of pointing angle & $> \left[ 0.97 ; 0.998 \right]$ & 10\% (10\%)\\
|$m$(V0) - \mPDG[\rmLambda]| (\gmass) & $< \left[ 0.002 ; 0.007 \right]$ & 18\% (18\%)\\
DCA proton to prim. vtx (\cm) & $> \left[ 0.04 ; 0.5 \right]$ & 28\% (28\%)\\
DCA pion to prim. vtx (\cm) & $> \left[ 0.04 ; 0.95 \right]$ & 10\% (10\%)\\
DCA V0 to prim. vtx (\cm) & $> \left[ 0.06 ; 0.2 \right]$ & 12\% (12\%)\\
DCA between V0 daughters (std dev) & $< \left[ 0.4 ; 1.2 \right]$ & 12\% (12\%) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{Cascade}} \\
Cascade decay radius (\cm) & $> \left[ 0.5 ; 2.5 \right]$ & 11\% (11\%)\\
Cascade Lifetime (\cm) & $< \left[ 1.6 ; 3.40 \right]$ \cTau & 40\% (40\%)\\
DCA bachelor to prim. vtx (\cm) & $> \left[ 0.04 ; 0.5 \right]$ & 15\% (15\%) \\
DCA between the cascade daughters (std dev) & $< \left[ 0.25 ; 1.2 \right]$ & 12\% (12\%)\\
Cascade cosine of pointing angle & $> \left[ 0.995 ; 0.9995 \right]$ & 14\% (14\%)\\
Bachelor-proton pointing angle (rad) & $> \left[ 0.02 ; 0.05 \right]$ & 11\% (11\%) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Summary of the variation ranges on the topological and track selections employed in the \rmXiM and \rmAxiP reconstructions. The last column indicates the \textit{maximum} induced signal variation; for more details, look at \fig\ref{fig:SignalVariation_TopoSel_XiMinus} and \fig\ref{fig:SignalVariation_TopoSel_XiPlus}.}\label{tab:SystematicSelectionsXi}
\end{table}
However, a different approach is followed here. To take into account the correlations between the variables, the sets of selections are randomly generated according to uniform laws\footnote{An alternative approach has also been tried, following the \say{natural} distribution of each selection variable rather than the uniform distribution. In the end, both approaches yield consistent systematic uncertainties (within a few \kmass). The extra complexity and CPU cost of the alternative approach have weighed in, given that the randomisations are part and parcel of the default analysis flow (see later) and will be resorted to many times. Therefore, the uniform randomisation has been retained as the default option for all that follows.}, each spanning a certain variation range. The critical point of this study resides in the choice of the variation ranges, where a careful balance must be found: a range should not be too \say{severe}, at the risk of losing all the signal, nor too \say{gentle} to cause any significant shift. It is considered satisfactory when the induced signal variation reaches at least approximately 10\%\footnote{Note that this condition is applied to each topological cut. For other selections, it may be difficult to satisfy such a criterion as they act on the background rather than the signal. This is the case, for example, with the competing mass rejection, which could never reach the 10\% signal variation threshold, even with an excessively vast variation range.}. \Tabs\ref{tab:SystematicSelectionsXi} and \ref{tab:SystematicSelectionsOmega} list the considered selection variables, with their variation ranges as well as the induced signal variations\footnote{The signal variations have been estimated by varying each selection individually, while keeping all other selections at their values in \tab\ref{tab:CascadeSelections}.} for \rmXi and \rmOmega respectively.
As for the \rmKzeroS and \rmLambda, this is summarised in \tabs\ref{tab:SystematicSelectionsK0s} and \ref{tab:SystematicSelectionsLambda}. \\
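The uniform randomisation of the cut sets can be sketched as follows; the dictionary keys are hypothetical names covering only a few of the \rmXi ranges from the tables, purely for illustration:

```python
import numpy as np

# a few of the Xi variation ranges (lower cuts), for illustration
ranges = {
    "v0_decay_radius_cm":   (1.2,   8.0),
    "v0_cos_pointing":      (0.97,  0.998),
    "casc_decay_radius_cm": (0.5,   2.5),
    "casc_cos_pointing":    (0.995, 0.9995),
}

def draw_cut_sets(ranges, n_sets, seed=0):
    """Draw full selection sets, each cut uniform within its variation range.

    Varying all cuts simultaneously (instead of one at a time) naturally
    samples the correlations between the topological variables."""
    rng = np.random.default_rng(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
            for _ in range(n_sets)]

cut_sets = draw_cut_sets(ranges, n_sets=20_000)
```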
\begin{table}[t]
\hspace*{-1.cm}
\begin{tabular}{c|c|c}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Candidate variable & Range & Signal variation \rmOmegaM (\rmAomegaP) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Competing mass rejection (\gmass) & $> \left[ 0.006 ; 0.010 \right]$ & 0.9\% (0.9\%)\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Track variable & Range & Signal variation \rmOmegaM (\rmAomegaP) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
Nbr of crossed TPC readout rows & $> \left[ 70 ; 90 \right]$ & 2.5\% (2.5\%)\\
$\Nsigma^{\rm TPC}$ & $< \left[ 1 ; 3 \right] $ & 60\% (60\%)\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Topological variable & Range & Signal variation \rmOmegaM (\rmAomegaP) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{V0}} \\
V0 decay radius (\cm) & $> \left[ 1 ; 5.5 \right]$ & 11\% (11\%)\\
V0 cosine of pointing angle & $> \left[ 0.97 ; 0.998 \right]$ & 17\% (17\%)\\
|$m$(V0) - \mPDG[\rmLambda]| (\gmass) & $< \left[ 0.002 ; 0.007 \right]$ & 17\% (17\%)\\
DCA proton to prim. vtx (\cm) & $> \left[ 0.04 ; 0.5 \right]$ & 34\% (34\%)\\
DCA pion to prim. vtx (\cm) & $> \left[ 0.04 ; 0.75 \right]$ & 10\% (10\%) \\
DCA V0 to prim. vtx (\cm) & $> \left[ 0.06 ; 0.2 \right]$ & 14\% (14\%)\\
DCA between V0 daughters (std dev) & $< \left[ 0.4 ; 1.2 \right]$ & 11\% (11\%)\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{3}{l}{\textbf{Cascade}} \\
Cascade decay radius (\cm) & $> \left[ 0.5 ; 1.6 \right]$ & 12\% (12\%)\\
Cascade Lifetime (\cm) & $< \left[ 1.6 ; 3.40 \right]$ \cTau & 14\% (14\%)\\
DCA bachelor to prim. vtx (\cm) & $> \left[ 0.05 ; 0.2 \right]$ & 13\% (13\%)\\
DCA between the cascade daughters (std dev) & $< \left[ 0.15 ; 1.2 \right]$ & 12\% (12\%)\\
Cascade cosine of pointing angle & $> \left[ 0.995 ; 0.9995 \right]$ & 17\% (17\%)\\
Bachelor-proton pointing angle & $> \left[ 0.02 ; 0.05 \right]$ & 13\% (13\%)\\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Summary of the variation ranges on the topological and track selections employed in the \rmOmegaM and \rmAomegaP reconstructions. The last column indicates the \textit{maximum} induced signal variation; for more details, look at \fig\ref{fig:SignalVariation_TopoSel_OmegaMinus} and \fig\ref{fig:SignalVariation_TopoSel_OmegaPlus}.}\label{tab:SystematicSelectionsOmega}
\end{table}
The analysis is repeated for each randomly generated set of cuts $i$, as detailed in \Sec\ref{subsec:MassExtraction}: a mass $\mu_{i}$ and its uncertainty $\sigma_{\mu_{i}}$ are extracted from the fit of the corresponding invariant mass distribution in the data and MC. However, only the values passing the following criteria are retained:
\begin{itemize}
\item[$\bullet$] the fitting procedure must have converged;
\item[$\bullet$] to ensure a good fit quality, its reduced $\chi^{2}$ needs to be relatively close to unity, $\rmChiSquareNDF < 3$;
\item[$\bullet$] the uncertainties on the mass value are expected to be below 1 \mmass. Since the \rmXi and \rmOmega masses are of the order of the \gmass, a $\sigma_{\mu_{i}}$ at the level of 0.1\% of $\mu_{i}$ represents an uncertainty greater than 1 \mmass. In order to remove such outliers, it is required that $\sigma_{\mu_{i}}/\mu_{i} < 0.1\%$.
\end{itemize}
Under these conditions and over a sufficiently large number of sets of cuts, the distributions of $\mu_{i}$ and $\sigma_{\mu_{i}}$ can be built. These offer the opportunity to re-define the mass and its uncertainties, which becomes the default strategy for the outcome of this analysis:
\begin{itemize}
\item[$\bullet$] the \textit{measured mass} corresponds to the mean value of the $\mu_{i}$ distribution,
\item[$\bullet$] the \textit{systematic uncertainty} due to the candidate selections is the standard deviation of the $\mu_{i}$ distribution,
\item[$\bullet$] and the \textit{statistical uncertainty} is given by the mean value of the $\sigma_{\mu_{i}}$ distribution.
\end{itemize}
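The retention criteria and this re-qualification of the mass and its uncertainties can be sketched as follows (an illustrative function, assuming arrays of per-cut-set fit results):

```python
import numpy as np

def summarise_cut_set_fits(mu, sigma_mu, chi2_ndf, converged):
    """Apply the retention criteria, then re-qualify the mass measurement."""
    mu, sigma_mu = np.asarray(mu), np.asarray(sigma_mu)
    keep = (np.asarray(converged)                 # fit converged
            & (np.asarray(chi2_ndf) < 3.0)       # acceptable fit quality
            & (sigma_mu / mu < 1e-3))            # remove outliers
    m, s = mu[keep], sigma_mu[keep]
    return {"mass": m.mean(),          # measured mass: mean of the mu_i
            "syst": m.std(ddof=1),     # systematic: std. dev. of the mu_i
            "stat": s.mean()}          # statistical: mean of the sigma_mu_i
```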
As opposed to most analyses, this re-definition circumvents the dependence on a single reference set of cuts, making the analysis \textit{in principle} more robust.\\
\begin{figure}[!p]
%\centering
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsNbrOfCutSets\_Xi.eps}
\label{fig:MassVsNbrOfCutSetsXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsNbrOfCutSets\_Omega.eps}
\label{fig:MassVsNbrOfCutSetsOmega}
}
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/StatErrVsNbrOfCutSets\_Xi.eps}
\label{fig:StatErrVsNbrOfCutSetsXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/StatErrVsNbrOfCutSets\_Omega.eps}
\label{fig:StatErrVsNbrOfCutSetsOmega}
}
\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/SystErrVsNbrOfCutSets\_Xi.eps}
\label{fig:SystErrVsNbrOfCutSetsXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/SystErrVsNbrOfCutSets\_Omega.eps}
\label{fig:SystErrVsNbrOfCutSetsOmega}
}
\caption{Relative measured mass as well as its statistical and systematic uncertainties in pp collisions at \sqrtS = 13 \tev as a function of the number of cut sets, for \rmXi in (a), (c), (e) and \rmOmega in (b), (d), (f) respectively. The quantities on the y-axis are relative to the value taken as the final measurement, which here corresponds to the quantity obtained with 20 000 different sets of cuts. Here, the peak is modelled by a modified Gaussian, and the background by a first order polynomial. The error bars represent the uncertainty on the evaluation of the mean or standard deviation.}
\label{fig:MassVsNentries}
\end{figure}
%\clearpage
The above quantities being extracted from a finite sample, one could expect them to depend on the number of cut sets. The stability of the results with the number of sets employed has been studied and is shown in \fig\ref{fig:MassVsNentries}. At first, the mass value and its statistical and systematic uncertainties fluctuate with the number of cut sets, until they reach a plateau at approximately 5000-6000 different sets of cuts. Such an amount should thus suffice to perform the mass measurement. However, in order to guarantee excellent stability, 20 000 sets are used.
The output results of this procedure are presented in \tab\ref{tab:SystTopoKineSelections}.
\begin{table}[h]
\hspace*{-0.4cm}
\begin{tabular}{cccc|ccc}
% \begin{tabular}{b{2cm}@{\hspace{0.5cm}} b{3cm}@{\hspace{0.5cm}} b{2cm}@{\hspace{0.5cm}} b{2cm}@{\hspace{0.5cm}} b{5cm}@{\hspace{0.5cm}} b{3cm}@{\hspace{0.5cm}} b{3cm}@{\hspace{0.5cm}}}
\noalign{\smallskip}\hline \noalign{\smallskip}
\bf Particle & \bf Measured & \multicolumn{2}{c|}{\bf Uncertainty} & \bf Measured & \multicolumn{2}{c}{\bf Uncertainty}\\
& \bf mass & \bf stat. & \bf syst. & \bf mass difference & \bf stat. & \bf syst.\\
& (\mmass) & (\mmass) & (\mmass) & ($\times 10^{-5}$) & ($\times 10^{-5}$) & ($\times 10^{-5}$) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmKzeroS & 497.737 & 0.003 & 0.010 & / & / & / \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmLambda & 1115.618 & 0.002 & 0.011 & \multirow{2}{*}{4.78} & \multirow{2}{*}{0.17} & \multirow{2}{*}{0.14} \\
\rmAlambda & 1115.671 & 0.002 & 0.012 & & & \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmXiM & 1321.728 & 0.004 & 0.016 & \multirow{2}{*}{3.95} & \multirow{2}{*}{0.37} & \multirow{2}{*}{0.39} \\
\rmAxiP & 1321.780 & 0.004 & 0.019 & & & \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\rmOmegaM & 1672.536 & 0.014 & 0.015 & \multirow{2}{*}{-1.31} & \multirow{2}{*}{1.14} & \multirow{2}{*}{0.76} \\
\rmAomegaP & 1672.514 & 0.014 & 0.015 & & & \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Measured masses and mass differences of \rmKzeroS, \rmLambda, \rmXi and \rmOmega, accompanied by their statistical and systematic (due to the topological and kinematic selections) uncertainties. Here, the measurements have been performed with a triple-Gaussian for the signal and a first order polynomial for the background.}\label{tab:SystTopoKineSelections}
\end{table}
\subsubsection{Influence on the mass difference}
In \tab\ref{tab:SystTopoKineSelections}, the mass difference has been obtained by taking the independently measured mass values of the particle and the anti-particle from the above procedure (\Sec\ref{subsubsec:SystTopoMass}) and using \eq\ref{eq:MassDifference}. The uncertainties are then propagated to obtain the statistical and systematic uncertainties on the mass difference. It does not result directly from the aforementioned procedure; in that sense, the mass difference measurement is \textit{indirect}, and it carries the full systematic uncertainties from the particle and anti-particle mass values. By extracting the mass difference in a more \textit{direct} way -- similarly to what is done for the mass in \Sec\ref{subsubsec:SystTopoMass} --, part of the uncertainties from the particle and anti-particle masses cancels out in the difference, resulting in a smaller systematic uncertainty.\\
To that end, an additional step needs to be introduced in the previous strategy in \Sec\ref{subsubsec:SystTopoMass}. For each set of cuts $i$, both particle and anti-particle masses -- $\mu_{i, \textsc{part.}}$ and $\mu_{i, \overline{\textsc{part.}}}$ -- are extracted as well as their uncertainties, $\sigma_{i, \textsc{part.}}$ and $\sigma_{i, \overline{\textsc{part.}}}$. From these, the computation of the mass difference is performed,
\begin{equation}
\frac{\Delta \mu_{i}}{ \mu_{i} } = 2 \cdot \frac{\mu_{i, \textsc{part.}}-\mu_{i, \overline{\textsc{part.}}}}{\mu_{i, \textsc{part.}}+\mu_{i, \overline{\textsc{part.}}}},
\end{equation}
and the uncertainties are propagated in order to get the one on the mass difference,
\begin{equation}
\sigma_{\Delta \mu_{i} /\mu_{i} }= 4 \cdot \sqrt{ \left(\frac{-\mu_{i, \overline{\textsc{part.}}}}{\left(\mu_{i, \textsc{part.}} + \mu_{i, \overline{\textsc{part.}}} \right)^{2}}\right)^{2} \sigma_{\mu_{i, \textsc{part.}}}^{2} + \left(\frac{\mu_{i, \textsc{part.}}}{\left(\mu_{i, \textsc{part.}} + \mu_{i, \overline{\textsc{part.}}} \right)^{2}}\right)^{2} \sigma_{\mu_{i, \overline{\textsc{part.}}}}^{2} }.
\end{equation}\\
Similarly to the mass extraction, the mass difference and its uncertainties are calculated from the $\Delta \mu_{i}/ \mu_{i}$ and $\sigma_{\Delta \mu_{i} /\mu_{i} }$ distributions over $N$ different sets of cuts:
\begin{itemize}
\item[$\bullet$] the \textit{measured mass difference} corresponds to the mean value of the $\Delta \mu_{i}/ \mu_{i}$ distribution,
\item[$\bullet$] the \textit{systematic uncertainty} due to the candidate selections is the standard deviation of the $\Delta \mu_{i}/ \mu_{i}$ distribution,
\item[$\bullet$] and the \textit{statistical uncertainty} is given by the mean value of the $\sigma_{\Delta \mu_{i} /\mu_{i} }$ distribution.
\end{itemize}
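This direct extraction can be sketched as follows: the per-set difference and its propagated error are computed first, and the mean and standard deviation are taken only afterwards (function and variable names are illustrative):

```python
import numpy as np

def direct_mass_difference(mu_p, sig_p, mu_ap, sig_ap):
    """Per-cut-set relative mass difference and propagated uncertainty,
    summarised as (value, systematic, statistical)."""
    mu_p, mu_ap = np.asarray(mu_p, float), np.asarray(mu_ap, float)
    sig_p, sig_ap = np.asarray(sig_p, float), np.asarray(sig_ap, float)
    d = 2.0 * (mu_p - mu_ap) / (mu_p + mu_ap)             # Delta mu_i / mu_i
    s2 = (mu_p + mu_ap) ** 2
    sd = 4.0 * np.sqrt((mu_ap / s2) ** 2 * sig_p ** 2
                       + (mu_p / s2) ** 2 * sig_ap ** 2)  # propagated error
    return d.mean(), d.std(ddof=1), sd.mean()
```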
\begin{table}[t]
\centering
\begin{tabular}{cccc}
\noalign{\smallskip}\hline \noalign{\smallskip}
Particle & Mass difference & \multicolumn{2}{c}{Uncertainty}\\
& ($\times 10^{-5}$) & statistical ($\times 10^{-5}$) & systematic ($\times 10^{-5}$) \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{4}{l}{\bf \rmLambda} \\
Indirect & \bf 4.54 & 0.75 & 1.50 \\
Direct & \bf 4.68 & 0.77 & 0.79 \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{4}{l}{\bf \rmXi} \\
Indirect & \bf 4.54 & 0.75 & 1.50 \\
Direct & \bf 4.68 & 0.77 & 0.79 \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{4}{l}{\bf \rmOmega} \\
Indirect & \bf 0.48 & 1.74 & 1.57 \\
Direct & \bf 0.53 & 1.75 & 1.19 \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\caption{Comparison between \textit{direct} and \textit{indirect} mass difference values of \rmXi and \rmOmega baryons, with their statistical and systematic uncertainties. Here, both direct and indirect measurements have been performed with a modified Gaussian for the peak and a first order polynomial for the side-bands.}\label{tab:SystMassDifference}
\end{table}
The results of the directly extracted mass difference are presented in \tab\ref{tab:SystMassDifference}. Although the values obtained directly are consistent with the indirect ones, the associated systematic uncertainties are smaller by approximately 48\% for \rmXi and 25\% for \rmOmega. Owing to this gain in precision, the mass difference will, from now on, always be extracted \say{directly}.
\subsection{Stability of the results}
\label{subsec:StabilityResults}
All the elements of the analysis having now been introduced, it is essential to control the stability of the results. In other words, this consists in adapting and calibrating the analysis, in order to ensure that the presented measurements can be trusted and do not fluctuate over time, space, momentum, etc. This requires a fine and thorough inspection of what happens throughout the data acquisition and reconstruction. If needed, these shall be tuned in such a way that, for instance, the momentum calibration is satisfactory; or at least, one should identify a region in time, space, momentum, etc, where the latter requirement is fulfilled.
The measurement of the mass \textit{a priori} relies on a countless number of parameters, some of them possibly correlated. This analysis focuses on seven possible dependencies of the mass. For the sake of brevity, only figures related to one or two particles are presented in this manuscript.
\subsubsection{Dependence on the data taking periods}
\label{subsubsec:DataTakingDependence}
As mentioned above, an important check concerns the stability of the results over time, that is, as a function of the data taking periods. \Sec\ref{subsec:DataSamples} specifies that all the pp collisions recorded in the 2016, 2017 and 2018 data taking periods are considered. This corresponds to 37 periods collected with different magnetic field configurations of the L3 solenoid magnet\footnote{For almost all the periods, the L3 solenoid and the dipole magnets share the same magnetic field polarity, that is $(+,+)$ or $(-,-)$. Each rule has its exception: one data taking period in 2018 has been collected with the dipole magnet off.} ($B = + 0.5, -0.5, -0.2$ T), TPC gas compositions (Ar/CO$_{2}$ for 2016 and 2018; Ne/CO$_{2}$/N$_{2}$ for 2017), and trigger modes (\say{CENT} or \say{FAST}). They are designated by a tag made of two numbers -- corresponding to the last digits of the data taking year -- and a letter labelling the period.
\Figs\ref{fig:MassVsPeriodsXi} and \ref{fig:MassVsPeriodsOmega} show the measured mass of the \rmXi and \rmOmega hyperons respectively, as a function of the data sample. A striking feature of these figures is that all the values seem to be systematically off by about 250 \kmass for the doubly strange baryons and 150 \kmass for the triply strange particles. This originates from a momentum bias occurring in the V0 and cascade reconstruction, which is addressed later in \Sec\ref{subsubsec:DecayRadiusDependence}. Once it is corrected, the mass measurements lie within the PDG uncertainties.
\begin{landscape}
\begin{figure}[p]
\centering
%\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=1.45\textwidth]{Figs/Chapter5/MassVsPeriod2\_Xi.eps}
\label{fig:MassVsPeriodsXi}
} \\
%\hspace*{-1.5cm}
\subfigure[]{
\includegraphics[width=1.45\textwidth]{Figs/Chapter5/MassVsPeriod2\_Omega.eps}
\label{fig:MassVsPeriodsOmega}
}
\caption{Measured mass of the \rmXiM and \rmAxiP (top), and \rmOmegaM and \rmAomegaP baryons (bottom) as a function of the \textbf{data taking period}. These values have been obtained based on 20 000 different sets of selections (\Sec\ref{subsec:SystTopoAndTrackSelections}). Hence, the uncertainties correspond to the quadratic sum of the statistical and systematic uncertainties due to the candidate and track selections. The periods with a magnetic field of $B = +0.5$~T are indicated with blue circles, those with the opposite polarity are shown with red squares, and the data samples collected in a configuration of $B~=~-0.2$~T are represented with black diamonds. Moreover, the \say{/C} and \say{/F} tags indicate the \say{CENT} and \say{FAST} trigger modes, respectively.}
\label{fig:MassVsPeriods}
\end{figure}
\end{landscape}
The mass measurements in periods collected with $B =-0.2$~T stand out from the rest of the values. This behaviour is attributed to the lower magnetic field, which results in a deterioration of the momentum resolution. The \say{FAST} configuration -- \ie events collected without the two middle layers of the ITS, the SDDs -- exhibits a similar pattern. The latter is most certainly due to the missing SDD information; without these constraints, the probability to incorrectly assign a cluster to a track increases. As a consequence, the track quality in the ITS and the tracking efficiency drop, and the track momentum gets biased. This point has been cross-checked by repeating the analysis in pp collisions at \sqrtS = 5.02 \tev with $B = \pm 0.5$~T\footnote{For comparison, the exploited data sample of pp collisions at \sqrtS = 13 \tev counts about 2.6~billion minimum-bias events while, for the one at \sqrtS = 5.02 \tev, it amounts to approximately 520 million minimum-bias events.}, in \say{CENT} and \say{FAST} modes. In the former configuration, the results agreed with those obtained at 13 \tev (for the same magnetic field polarity) whereas, in the latter case, the previous trend was again observed, pointing indeed towards a problem related to the missing SDD information. Therefore, the data samples taken in a magnetic field of $B = -0.2$ T and/or collected with the \say{FAST} trigger mode are discarded for the rest of the analysis.
Finally, concerning the periods with opposite polarities, the results show very good agreement. A fit with a constant function (not shown on the figure) yields a $\chi^2$ probability greater than 90\%.
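This cross-polarity consistency check amounts to a weighted fit with a constant. As a minimal sketch (in Python, with illustrative numbers rather than the measured per-period masses), the fitted constant is the weighted mean of the measurements and the $\chi^2$ follows directly:

```python
import math

def constant_fit(values, errors):
    """Fit measurements with a constant: the best-fit value is the
    weighted mean, and the chi-square quantifies the compatibility."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    uncertainty = math.sqrt(1.0 / sum(weights))
    chi2 = sum(((v - mean) / e)**2 for v, e in zip(values, errors))
    return mean, uncertainty, chi2

# Illustrative per-period masses (MeV/c^2), not the measured ones
masses = [1321.95, 1321.93, 1321.96, 1321.94]
errors = [0.03, 0.03, 0.04, 0.03]
mean, unc, chi2 = constant_fit(masses, errors)
ndf = len(masses) - 1  # one fitted parameter
```

The quoted $\chi^2$ probability is then the upper-tail probability of a $\chi^2$ distribution with \texttt{ndf} degrees of freedom, as provided by any standard fitting framework.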
\subsubsection{Dependence on the decay radius}
\label{subsubsec:DecayRadiusDependence}
A critical aspect of the analysis is to make sure to have a satisfactory calibration of the momentum. A miscalibration of the latter typically originates either from an imprecision on the magnetic field or imperfect energy loss corrections. The former being addressed in \Sec\ref{subsubsec:ImprecisionMagneticField}, this section thus concentrates on the second point.
Miscalculation of the energy losses can arise at two different levels: on the one hand, the actual amount of material may not be properly accounted for in the detector geometry, \ie the material budget of the detector may not be known accurately. \Sec\ref{subsubsec:ImperfectEnergyLossCorrections} is devoted to this aspect. On the other hand, the calculation of the energy loss corrections itself could be erroneous. A hint of the latter can be found by looking at the dependence of the measured mass on the decay radius, \fig\ref{fig:MassVsRadius}.\\
\begin{figure}[h]
%\centering
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_Xi.eps}
\label{fig:MassVsRadiusXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_XiMC.eps}
\label{fig:MassVsRadiusXiMC}
}
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_Omega.eps}
\label{fig:MassVsRadiusOmega}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_OmegaMC.eps}
\label{fig:MassVsRadiusOmegaMC}
}
\caption{Measured mass of the \rmXi (top) and \rmOmega baryons (bottom), in the data (left) and in MC (right), as a function of the \textbf{cascade decay radius}. The average radial position for each ITS layer is indicated with dotted lines. Note that, for the purpose of the comparison, the MC is \textit{not} re-weighted (\Sec\ref{subsubsec:CorrectionOnTheExtractedMass}). In both cases, the results have been obtained through a fit with a triple-Gaussian function for the invariant mass peak and, only in the data, an exponential function for the background.}
\label{fig:MassVsRadius}
\end{figure}
First of all, the measured mass exhibits an unexpected behaviour with the decay radius: it abruptly drops whenever the particle of interest decays in the vicinity of an ITS layer. Furthermore, this trend is well reproduced in simulated data. \Fig\ref{fig:RadiusResolVsRadius} shows the resolution on the cascade decay radius as a function of the radial position. Slightly above the edge of an ITS detector, this resolution degrades abruptly, in such a way that the \rmXi and \rmOmega candidates tend to be reconstructed below the detection layer. This underestimation of the decay radius leads to a bias in the energy loss corrections and the opening angle (detailed later in \Sec\ref{subsubsec:OpAngleDependence}), thus lowering the measured mass. For that reason, the regions in the ITS corresponding to these dips are discarded from now on.\\
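The removal of these dip regions can be sketched as a veto window around each layer; the radii below are approximate average ITS layer positions, and the window half-width is an illustrative choice, not the value used in the analysis:

```python
# Approximate average radii of the six ITS layers, in cm
ITS_LAYER_RADII = [3.9, 7.6, 15.0, 23.9, 38.0, 43.0]
VETO_HALF_WIDTH = 1.0  # cm, illustrative choice

def passes_radius_veto(decay_radius):
    """Reject cascade candidates decaying close to an ITS layer,
    where the decay-radius resolution degrades abruptly."""
    return all(abs(decay_radius - r) > VETO_HALF_WIDTH
               for r in ITS_LAYER_RADII)
```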
\begin{figure}[t]
%\centering
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadiusResol\_XiMC.eps}
\label{fig:RadiusResolVsRadiusXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadiusResol\_OmegaMC.eps}
\label{fig:RadiusResolVsRadiusOmega}
}
\caption{Resolution on the radial position of the \rmXi (left) and \rmOmega (right) decay point in MC, as a function of the \textbf{cascade decay radius}. The average radial position for each ITS layer is indicated with dotted lines. Here, the MC data have \textit{not} been re-weighted. In both cases, the results have been obtained through a fit with a triple-Gaussian function for the invariant mass peak and an exponential function for the background.}
\label{fig:RadiusResolVsRadius}
\end{figure}
Furthermore, regardless of the particle of interest, the measured mass in \fig\ref{fig:MassVsRadius} increases significantly with the decay radius, by about 1 \mmass for the \rmXi, in both data and MC. It turns out that this trend results from several approximations in the implementation of the energy loss corrections in the ALICE framework. There are three of them, classified from the most to the \say{least} significant.
\begin{enumerate}
\item As explained in \Sec\ref{subsubsec:TrackReco}, in the final stage of the tracking, all tracks are propagated inwards to their DCA to the primary vertex, taking into account stochastic processes such as energy losses. While this makes sense for primary tracks, it introduces a bias for secondary ones. Since a secondary track is a decay product, its inward propagation should stop at the decay point, where its parameters are related to the mother particle. Instead, at each propagation step between the secondary and primary vertices, the track receives additional energy from \dEdx-corrections (footnote \ref{footnote:EnergyLoss}). This excess of energy builds up with the decay point position; the further away the secondary vertex is, the more biased the track parameters are. Nevertheless, at this stage of the event reconstruction, there is no way to distinguish a primary from a secondary particle\footnote{Concerning V0 decays, there is indeed no way to identify a secondary particle at this stage of the reconstruction using the so-called \textit{offline} reconstruction, presented in \chap\ref{chap:V0CascReconstruction}. However, there exists another approach, dubbed \textit{on-the-fly}, that performs the track finding, track fitting and V0 vertexing simultaneously. Although it has been checked that on-the-fly V0s do not exhibit the mass dependence on the radial position of the decay point, they cannot be used in the analysis as there exists no on-the-fly cascade reconstruction.}. For that reason, this bias is expected to be removed later, during the V0 and cascade reconstruction. However, as mentioned in \Sec\ref{subsubsec:V0Formation} (footnote \ref{footnote:EnergyLossV0CascVertexing}), the propagation of daughter tracks from the location of the DCA to the primary vertex to the V0/cascade decay point is performed with no energy loss corrections. 
This means that the energy previously added during the final inward propagation of the tracking between the secondary and primary vertices, has not been subtracted, leading to additional energy/momentum in the track parameters at the
secondary decay position and thus to an offset in the invariant mass.
\item The energy loss calculation relies on the same parametrisation of the Bethe-Bloch formula (\eq\ref{eq:BetheBloch}) as \GeantThree and \GeantFour\footnote{Although \GeantThree and \GeantFour are two different versions of the \textsc{Geant} software series, their treatment of the energy losses of a charged particle in a medium remains the same.}. The material-related parameters are taken from the database in \cite{geant4Geant4MaterialDatabase2022}. However, as explained in \Sec\ref{subsubsec:TrackReco}, the particle energy losses are calculated and corrected assuming that all the materials are made of Si in the ITS volume (including the beam pipe) and Ne in the TPC. This approximation inevitably leads to a systematic misevaluation of the actual energy losses, and thus to a bias in the invariant mass.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{Figs/Chapter5/FractionOfPIDForTracking.eps}
\caption{Fraction of V0 and cascade candidates for which all the associated daughter tracks carry the correct mass hypothesis during the initial track propagation of the event building.}
\label{fig:FractionOfPIDForTracking}
\end{figure}
\item Along the same line, the Bethe-Bloch formula in \eq\ref{eq:BetheBloch} also depends on the particle traversing the material and, in particular, on its charge, momentum and mass. While the Kalman filter provides the first two, the last one comes from the measurement of the energy deposit in the TPC volume, which offers a preliminary particle identification. There is no guarantee, though, that the latter coincides with the expected mass hypothesis for a \rmKzeroS, \rmLambdaPM, \rmXiPM or \rmOmegaPM decay. For instance, \Sec\ref{subsubsec:TrackReco} explains that the pion mass is taken as the default value. As a matter of fact, only a fraction of the candidates has the correct mass hypothesis for both decay daughters, as shown in \fig\ref{fig:FractionOfPIDForTracking}. If the mass hypothesis used in the energy loss calculation turns out to be incorrect, the wrong amount of energy loss correction is applied.
\end{enumerate}
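The impact of the mass hypothesis (third point) can be illustrated with a deliberately simplified Bethe-Bloch shape; the functional form and the constant \texttt{k} below are schematic stand-ins, not the parametrisation of \eq\ref{eq:BetheBloch}:

```python
import math

M_PION, M_PROTON = 0.1396, 0.9383  # masses in GeV/c^2 (rounded)

def bethe_bloch_shape(p, mass, k=5.9e3):
    """Schematic energy loss trend: (1/beta^2) * [ln(k * (beta*gamma)^2)
    - beta^2]. Only the dependence on the assumed mass matters here."""
    bg2 = (p / mass) ** 2            # (beta * gamma)^2
    beta2 = bg2 / (1.0 + bg2)
    return (math.log(k * bg2) - beta2) / beta2

# At the same momentum, a proton (slower) loses more energy than a pion:
# applying the default pion hypothesis to a proton daughter therefore
# under-corrects its energy loss.
```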
There are different ways to address these issues. The approach followed in this analysis consists in i) replaying the track propagation in order to remove the previous energy loss corrections, and ii) re-applying them with the correct mass hypothesis, appropriate material parameters and stopping at the secondary decay position. The \fig\ref{fig:SchemeRetroCorrection} gives a description of this procedure, also called \textit{retro-corrections}.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{Figs/Chapter5/Schema-RetroCorrections.eps}
\caption{Pictorial representation of the fix of the energy loss corrections applied to the proton daughter of a \rmLambdaPM. The general idea breaks down into two stages: removing the previous \dEdx-corrections below the TPC inner wall (1. and 2.), and re-applying them appropriately (3.). The first stage starts with the propagation of the track parameters, initially at the decay position, to its DCA to the primary vertex without accounting for energy loss (1.). Then, the track is propagated to the TPC inner wall (2.) as performed during the final stage of the tracking (\Sec\ref{subsubsec:TrackReco}). In the second stage, the energy loss corrections are re-applied with the correct mass hypothesis -- here, the proton mass -- and stopping at the secondary vertex position (3.). Modified version of the figure from \cite{maireTrackReconstructionPrinciple2011}.}
\label{fig:SchemeRetroCorrection}
\end{figure}
The procedure starts off with the track parameters at the V0/cascade decay point. The track is extrapolated to its point of closest approach to the primary vertex, without accounting for energy losses (\fig\ref{fig:SchemeRetroCorrection}, 1.). This basically means undoing the track propagation of \Sec\ref{subsubsec:V0Formation} and recovering the track parameters as they were before the V0/cascade reconstruction. From this point, the track is propagated to its position at the TPC inner wall, in the exact same conditions as in the final stage of the tracking (\Sec\ref{subsubsec:TrackReco}): same mass hypothesis, same assumptions on the detector material. This means that, at each step, the track loses the same amount of energy that was previously added. At the TPC inner wall, the aforementioned energy loss corrections \textit{in the ITS} have thus been fully removed (\fig\ref{fig:SchemeRetroCorrection}, 2.). As most of the material budget comes from the ITS, the wrong energy loss corrections in the TPC can be ignored in first approximation. This last point was later verified with a propagation up to the TPC outer wall; no~significant change was observed.
The second stage takes over with the re-application of the energy loss corrections. From the TPC inner wall, the track parameters are propagated to the secondary vertex position with the appropriate mass hypothesis and the adequate material, in order to correct the right amount of energy losses this time (\fig\ref{fig:SchemeRetroCorrection}, 3.).\\
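The two stages can be condensed into a one-dimensional toy model, where each material crossing adds (or removes) a momentum correction; the layer radii, the per-layer correction and all numerical values below are hypothetical, for illustration only:

```python
M_PION, M_PROTON = 0.1396, 0.9383            # GeV/c^2 (rounded)
LAYERS = [43.0, 38.0, 23.9, 15.0, 7.6, 3.9]  # radii crossed inwards (cm)

def correction(mass):
    # stand-in for the per-layer dE/dx correction (larger for heavier mass)
    return 0.002 * mass

def tracking_inwards(p_at_tpc_wall, hypo_mass):
    """Final tracking stage: propagate from the TPC inner wall to the
    DCA to the primary vertex, adding a correction at every layer."""
    p = p_at_tpc_wall
    for _ in LAYERS:
        p += correction(hypo_mass)
    return p

def retro_correct(p_biased, hypo_mass, true_mass, decay_radius):
    """Stages 1-2: remove the corrections previously applied with the
    tracking hypothesis; stage 3: re-apply them with the correct mass,
    stopping at the decay radius instead of the primary vertex."""
    p = p_biased
    for _ in LAYERS:
        p -= correction(hypo_mass)    # back at the TPC inner wall
    for r in LAYERS:
        if r >= decay_radius:         # only layers actually traversed
            p += correction(true_mass)
    return p
```

In this toy, a proton daughter tracked with the default pion hypothesis recovers, after the retro-correction, the momentum it actually carried at the decay point.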
\begin{figure}[p]
%\centering
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_XiWithRetroCorr\_MC.eps}
\label{fig:MassVsRadiusXiMCRetroCorr}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_XiWithRetroCorr.eps}
\label{fig:MassVsRadiusXiRetroCorr}
}
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_OmegaWithRetroCorr\_MC.eps}
\label{fig:MassVsRadiusOmegaMCRetroCorr}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassVsRadius\_OmegaWithRetroCorr.eps}
\label{fig:MassVsRadiusOmegaRetroCorr}
}
\caption{Measured mass of the \rmXi (top) and \rmOmega baryons (bottom), in MC (left) and in the data (right), as a function of the \textbf{cascade decay radius} with the retro-corrections on (red) and off (blue). The regions close to ITS layers have been removed, as explained in \Sec\ref{subsubsec:DecayRadiusDependence}. The solid and dashed lines represent a fit with a constant function. Note that, for the purpose of the comparison, the MC is \textit{not} re-weighted (\Sec\ref{subsubsec:CorrectionOnTheExtractedMass}). In both cases, the results have been obtained through a fit with a triple-Gaussian function for the invariant mass peak and, only in the data, an exponential function for the background.}
\label{fig:MassVsRadiusAfterRetrocorrection}
\end{figure}
\Fig\ref{fig:MassVsRadiusAfterRetrocorrection} shows the application of this procedure in the data and MC. The retro-corrections significantly reduce the mass offset with the decay radius. Most importantly, in MC, the trend with the radius has disappeared and the measured mass now follows a flat distribution. To quantify this, the measurements have been fitted with a constant function; the latter agrees very well with the injected mass of the \rmXi and displays a $\chi^{2}$-probability of at least 26\%. This validates that the energy losses are now properly taken into account. In the data, a slight trend with radius can still be observed. It flattens with the corrections introduced in the next sections, in such a way that, in the end, the residual dependence on the radius can be considered as negligible.
\subsubsection{Dependence on momentum}
\label{subsubsec:MassDependenceOnPt}
Although the invariant mass expression in \eq\ref{eq:CascInvMass} involves only the momentum vector of the decay daughters, it can be re-written to show the \textit{explicit} dependence on the total momentum in \eq\ref{eq:InvMassPtotDependenceCasc},
\begin{align}
M_{\rm candidate}^2( \textrm{casc.}) &= \Big(\sqrt{ \textbf{p}_{\rm V0}^2 + m_{\rmLambda}^2} + \sqrt{ \textbf{p}_{\rm bach.}^2 + m_{\rm bach.}^2}\Big)^2 - ( \textbf{p}_{\rm V0} + \textbf{p}_{\rm bach.})^2 \\
&= \Big(\sqrt{ p_{\rm V0}^2 + m_{\rmLambda}^2} + \sqrt{ p_{\rm bach.}^2 + m_{\rm bach.}^2}\Big)^2 - \left( p_{\rm V0}^{2} + p_{\rm bach.}^{2} + 2 \cdot p_{\rm V0} \cdot p_{\rm bach.} \cos \theta \right),
\label{eq:InvMassPtotDependenceCasc}
\end{align}
and in particular, the \textit{explicit} dependence on the transverse and longitudinal momenta in \eq\ref{eq:InvMassPtPzDependenceCasc},
\begin{equation}
\begin{split}
M_{\rm candidate}^2( \textrm{casc.}) &= \Big(\sqrt{ p_{\rm T, V0}^2 + p_{\rm z, V0}^2 + m_{\rmLambda}^2} + \sqrt{ p_{\rm T, bach.}^2 + p_{\rm z, bach.}^2 + m_{\rm bach.}^2}\Big)^2 \\
&\quad - \big( p_{\rm T, V0}^{2} + p_{\rm T, bach.}^{2} + 2 \cdot p_{\rm T, V0} \cdot p_{\rm T, bach.} \cos \theta_{xy}\\
&\quad + p_{\rm z, V0}^{2} + p_{\rm z, bach.}^{2} + 2 \cdot p_{\rm z, V0} \cdot p_{\rm z, bach.} \cos \theta_{z} \big),
\end{split}
\label{eq:InvMassPtPzDependenceCasc}
\end{equation}
where $\theta$, $\theta_{xy}$ and $\theta_z$ are the opening angles in 3D, in the transverse plane and in the longitudinal direction, defined in the laboratory frame.
It becomes clear that the invariant mass depends on both momenta and opening angles. Any systematic effect on those variables would immediately bias the invariant mass distributions, and thus the measured mass.
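The equivalence between the vector form and the magnitude-and-angle form of \eq\ref{eq:InvMassPtotDependenceCasc} can be checked numerically; a minimal sketch with rounded PDG masses and arbitrary daughter momenta:

```python
import math

M_LAMBDA, M_PION = 1.11568, 0.13957  # GeV/c^2 (rounded)

def inv_mass_vectors(p_v0, p_bach, m_v0=M_LAMBDA, m_bach=M_PION):
    """Cascade invariant mass from the daughter momentum vectors."""
    energy = (math.sqrt(sum(c * c for c in p_v0) + m_v0**2)
              + math.sqrt(sum(c * c for c in p_bach) + m_bach**2))
    p_tot2 = sum((a + b)**2 for a, b in zip(p_v0, p_bach))
    return math.sqrt(energy**2 - p_tot2)

def inv_mass_angle(p_v0, p_bach, cos_theta, m_v0=M_LAMBDA, m_bach=M_PION):
    """Same quantity from the momentum magnitudes and the 3D opening angle."""
    energy = math.sqrt(p_v0**2 + m_v0**2) + math.sqrt(p_bach**2 + m_bach**2)
    p_tot2 = p_v0**2 + p_bach**2 + 2.0 * p_v0 * p_bach * cos_theta
    return math.sqrt(energy**2 - p_tot2)
```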
\begin{figure}[!p]
%\centering
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassXiVsPt.eps}
\label{fig:MassVsPtXi}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/InvMassOmegaVsPt.eps}
\label{fig:MassVsPtOmega}
}
\caption{Measured mass of the \rmXi (left) and \rmOmega baryons (right) as a function of the \textbf{transverse momentum}. The dashed line represents the transverse momentum threshold above which the mass values can be considered as stable. In both cases, the results have been obtained through a fit with a triple-Gaussian function for the invariant mass peak and, only in the data, an exponential function for the background.}
\label{fig:MassVsPt}
\end{figure}
\begin{figure}[!p]
%\centering
\hspace*{-2.cm}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassXi\_pz\_Aside\_MC.eps}
\label{fig:MassVsPzXiMC}
}
\subfigure[]{
\includegraphics[width=0.6\textwidth]{Figs/Chapter5/MassXi\_pz\_Aside.eps}
\label{fig:MassVsPzXi}
}
\caption{Measured mass of the \rmXi hyperons in MC (left) and in the data (right), as a function of the \textbf{longitudinal momentum}. The solid and dashed lines represent a fit with a constant function. In both cases, the results have been obtained through a fit with a triple-Gaussian function for the invariant mass peak and, only in the data, an exponential function for the background.}
\label{fig:MassXiVsPz}
\end{figure}
\Fig\ref{fig:MassVsPt} shows the measured mass of the \rmXi and \rmOmega baryons as a function of the transverse momentum. At low \pT, the measured masses change rapidly with the transverse momentum, due to multiple scattering and (asymmetric) energy loss fluctuations. These effects become less dominant with increasing momentum, such that a flat dependence is reached at intermediate or high transverse momentum.
In order to ensure stable measurements with \pT, the analysis should be performed in this plateau region. Although the \rmKzeroS and \rmLambda follow the same V0 decay topology, their decay kinematics are different. This also holds for the \rmXi and \rmOmega baryons. Consequently, the position of this stability region has to be identified separately for each particle. For instance, the data points above $\pT > 2.4$ \gmom for the \rmXi in \fig\ref{fig:MassVsPtXi} and above $\pT > 1.4$ \gmom for the \rmOmega in \fig\ref{fig:MassVsPtOmega} show little variation with the transverse momentum, and are all contained within a $1\sigma$ interval around the final measurement, after accounting for all the other sources of systematic effects. Therefore, in this region, the measurement can be considered as under control.\\
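Identifying the stability region amounts to finding the lowest \pT above which every point lies within $1\sigma$ of the final measurement; a minimal sketch with illustrative (not measured) values:

```python
def pt_stability_threshold(pt_values, masses, final_mass, sigma):
    """Return the lowest pT such that this point and all higher-pT
    points lie within one sigma of the final measurement, or None."""
    for i, pt in enumerate(pt_values):
        if all(abs(m - final_mass) <= sigma for m in masses[i:]):
            return pt
    return None

# Illustrative values: pT bins in GeV/c, masses in MeV/c^2
pt_bins = [0.8, 1.2, 1.6, 2.0, 2.4, 2.8]
mass_points = [1321.00, 1321.40, 1321.60, 1321.80, 1321.95, 1321.93]
```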
Along the same line, the influence of the longitudinal momentum on the measured mass has been checked. It is presented in \fig\ref{fig:MassXiVsPz}. Both in the data and in MC, the dependence remains relatively small, such that it can be considered as negligible in the considered (pseudo-)rapidity interval.
\subsubsection{Dependence on the opening angles}
\label{subsubsec:OpAngleDependence}
As discussed above, the invariant mass depends on the opening angle between the decay products. Due to multiple scattering, the latter may increase or decrease, thus biasing the estimation of the decay vertex position (as observed in \fig\ref{fig:MassVsRadius}) and the measured mass.
Therefore, different opening angles in the laboratory frame are being considered:
\begin{itemize}
\item[$\bullet$] \textbf{the opening angle in 3 dimensions}, also called \textit{3D opening angle}. \\
There are two ways to compute this quantity, depending on whether the value must be signed or unsigned. Here, it has been decided that the value of the opening angle would be unsigned. It can be calculated from the momentum vectors of the positive and negative decay daughters:
\begin{align}
&{\bf p_{\rm pos.}} \cdot {\bf p_{\rm neg.}} = p_{\rm pos.} p_{\rm neg.} \cos\left(\theta\right) \\
\Rightarrow \qquad &\theta = \arccos \frac{ \left( {\bf p_{\rm pos.}} \cdot {\bf p_{\rm neg.}} \right)}{ p_{\rm pos.} \ p_{\rm neg.}}
\label{eq:OpeningAngle3D}
\end{align}
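A direct implementation of \eq\ref{eq:OpeningAngle3D} is straightforward; the clamping of the cosine is a numerical safeguard added here, not part of the definition:

```python
import math

def opening_angle_3d(p_pos, p_neg):
    """Unsigned 3D opening angle between the two daughter momenta."""
    dot = sum(a * b for a, b in zip(p_pos, p_neg))
    norm = (math.sqrt(sum(a * a for a in p_pos))
            * math.sqrt(sum(b * b for b in p_neg)))
    # clamp against floating-point rounding before taking the arccos
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```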