Article
Peer-Review Record

Mu2e Run I Sensitivity Projections for the Neutrinoless μ⁻ → e⁻ Conversion Search in Aluminum

by Mu2e Collaboration
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 25 October 2022 / Revised: 19 December 2022 / Accepted: 21 December 2022 / Published: 13 January 2023
(This article belongs to the Special Issue Charged Lepton Flavor Violation)

Round 1

Reviewer 1 Report

Dear authors,

many compliments for the very interesting and relevant study, and for the well-written and comprehensive paper. I recommend publication after a minor revision taking into account my comments below.

Best Regards.

-------------------------------------------------------------------

General comments

- The normalization of the measurement relies on the absolute measurement of the muon flux and the absolute estimate of the signal efficiency. It implies in turn a good understanding of the muon flux measurement and a good modelling of the detector response. It seems to me that these issues are not sufficiently discussed. Although I understand that it can be difficult to make predictions in this sense before data are available, I strongly suggest to add a short, dedicated paragraph in the sensitivity section, giving some indication of the needs and assumptions behind your estimates of the associated systematic uncertainties.

- Some plots have very small axis labels, axis titles and/or legend, namely Fig. 2, 16, 17. Please increase the font size of these pictures. Increasing all pictures' font sizes would be indeed appreciated.

Specific comments

— Line 20-21: “significantly lower than the sensitivity of any current or planned experiment” seems belittling. BR's below 10^{-50} are essentially unobservable.

- Line 210-211: Please avoid using the Geant4 jargon ("ShieldingM", "physics list"). I suggest to either quote the most relevant features of the ShieldingM physics list or put a reference.

- Line 237: what is the physics motivation to use such uniform distribution?

- Line 289: what is the advantage of using a (fast) Kalman fit instead of a global chi2 fit if material effects are neglected?

- Line 362-370: is the ANN-based selection trained on DIO or signal MC? How do you plan to extract the signal efficiency for the ANN-based selection? What is the gain in sensitivity coming from the ANN-based selection, considering the improved resolution, the loss of efficiency and the potential systematic uncertainty deriving from an inaccurate estimate of this loss?

- Line 387: typo

- Line 531-551: you show that the template fit of the Michel spectrum gives N2 = 9.7, and you set a systematics comparing with N2 = 8.5, coming from a direct fit of the resolution function at 52.8 MeV/c. On the other hand, the fit of the resolution function at 105 MeV/c gives 6.5 (fig. 11). So, I would say that the dominant uncertainty comes from the factor to be applied to correct this inaccuracy in the estimate of N2, that will need to be scaled by something like 6.5/9.7 ~ -33% going from 52.8 to 105 MeV/c. Isn't it? It would be also worth to investigate the correlations among N2 and the other fitted parameters. If there are large correlations, the difference in the resolution shape could be much lower than what the N2 uncertainty alone could suggest.

- Equation between line 882 and line 883: although this definition of the numerical value of the 5sigma p-value is correct, it can be confusing here, because you are treating a Poisson process. Also, the sentence is not very rigorous. I suggest to rephrase: "Standard for HEP, a discovery is defined as a measurement yielding a significant, "5σ", deviation from the expected background, corresponding to a p-value of 2.87e-7.", without any formula.

- Line 887-889: it is not clear to me what you mean by "average deviation". Are you taking the average of the n-sigma significances? Are you taking the average of the p-values? In the average, how do you treat experiments with N=0 (that is a negative deviation with respect to the expected background)? I see that all these things are discussed in Ref. [62], nonetheless a clearer statement is needed here.

- Line 917: does this efficiency include the acceptance of the detector, or just the reconstruction and selection of electrons in some nominal acceptance? In any case, "selection efficiency" can be  misleading, I would suggest "reconstruction and selection efficiency".

 

 

Author Response

> Dear authors,

> many compliments for the very interesting and relevant study, and for
> the well written and comprehensive paper. I recommend the publication
> after a minor revision taking into account my comments below.

> Best Regards.

We would like to thank the reviewer for the detailed and in-depth questions -
it was a pleasure to answer them! 

-------------------------------------------------------------------

> General comments

> - The normalization of the measurement relies on the absolute
>   measurement of the muon flux and the absolute estimate of the signal
>   efficiency. It implies in turn a good understanding of the muon flux
>   measurement and a good modelling of the detector response. It seems
>   to me that these issues are not sufficiently discussed. Although I
>   understand that it can be difficult to make predictions in this
>   sense before data are available, I strongly suggest to add a short,
>   dedicated paragraph in the sensitivity section, giving some
>   indication of the needs and assumptions behind your estimates of the
>   associated systematic uncertainties.

Response: With the only exception of the muon flux normalization,
the considered sources of systematic uncertainty are discussed in the manuscript.

We agree that expanding the discussion of systematic uncertainties
would be beneficial and added the following paragraph to Section 8.3:

"Systematic uncertainties on the DIO and the signal acceptance are dominated by the uncertainty
on the momentum scale. Reducing the uncertainty on the momentum scale below 100 keV/c
would help reduce those uncertainties. In-situ measurement of events with cosmic muons will
greatly reduce the uncertainty on the cosmic flux normalization. A direct Mu2e measurement
of the RPC cross section will eliminate the uncertainty on the RPC background related
to the pion production cross section. A 10\% uncertainty on the absolute muon flux normalization
could be achieved by measuring the RMC cross section on Al, whose published uncertainty
is slightly better than 10\% \cite{RMC_1999_PhysRevC.59.2853}."
  
> - Some plots have very small axis labels, axis titles and/or legend,
>   namely Fig. 2, 16, 17. Please increase the font size of these
>   pictures. Increasing all pictures' font sizes would be indeed
>   appreciated.

Response:

The font sizes of the labels and titles in Figs. 2, 16, and 17 have been increased.

> Specific comments

> — Line 20-21: “significantly lower than the sensitivity of any current
> or planned experiment” seems belittling. BR's below 10^{-50} are
> essentially unobservable.

Response:

We agree with the reviewer's comment, and it can be strengthened even further:
CLFV branching ratios below 10^-18 are currently unobservable experimentally,
and there is no vision of how to reach the level of 10^-25.

That, however, does not make the statement on lines 20-21 scientifically
inaccurate.

> - Line 210-211: Please avoid using the Geant4 jargon ("ShieldingM",
>   "physics list"). I suggest to either quote the most relevant
>   features of the ShieldingM physics list or put a reference.

Response:

An important part of documenting the scientific results is an accurate description
of the procedures used. Geant4 implements a multitude of [partially overlapping]
physics models, referred to there as 'physics lists'.
And, yes, it is the Geant4 internal jargon.
Results of Geant4 simulations depend on the choice of a particular 'physics list',
and the differences between simulations using different 'physics lists'
(combinations of physics models) are sometimes large on a scale relevant for Mu2e.

Therefore, specifying a chosen combination of models is highly relevant, 
and that forces the choice made in the manuscript. 

So although we fully share the sentiment of the comment, the name "ShieldingM"
has to be referenced - there is no alternative concise and unambiguous way
to adequately specify the simulation procedures used.

We have implemented a compromise solution: the jargon term has been moved
to the references.

> Line 237: what is the physics motivation to use such uniform distribution?

Response :

In essence, it is an attempt to make a conservative assumption given
the lack of published experimental data.
Photons emitted in nuclear muon capture are a source of extra occupancy.
An interaction of a 10 MeV photon in the tracker results in more extra hits
than the interaction of a 1 MeV photon. For photon energies above a few MeV,
the photon spectrum has to fall with energy.
So, for a given total normalization, the assumption of a flat spectrum
seems to be a conservative one.

In addition, photon interactions in the tracker produce low energy and low pT
electrons and positrons, uncorrelated in time with the signal particle.
Hits from such electrons and positrons have a characteristic pattern 
and are efficiently removed before the track reconstruction starts.

> - Line 289: what is the advantage of using a (fast) Kalman fit instead
>   of a global chi2 fit if material effects are neglected?

  
Response: there are a number of relatively small advantages in doing that - see below.
However, the sum of all these advantages is large.

- costs: implementation of another track fitting algorithm, even if it were deemed better,
  comes with costs - the cost of implementation and the cost of maintenance;

- simplicity: if the same fitting algorithm can be used without introducing
  an additional penalty, the software infrastructure of the experiment becomes simpler;

- performance: even if multiple scattering is neglected, N inversions of a 5x5 matrix
  could be faster than a single inversion of an NxN matrix (see the sketch after this list);

- memory footprint of the reconstruction executable: adding another track fitting
  algorithm would increase the executable footprint in memory, which
  might have an impact on the overall performance.
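
To make the 'performance' bullet concrete, here is a minimal Python sketch of the scaling
argument (not Mu2e code): processing N hits with small, fixed-size matrix operations costs
of order N * 5^3 floating-point operations, while a global fit carrying a full hit-by-hit
covariance matrix costs of order N^3. The hit counts below are arbitrary placeholders.

  def inversion_cost_ratio(n_hits, n_par=5):
      """Rough cost ratio: one (n_hits x n_hits) inversion
      vs. n_hits sequential (n_par x n_par) inversions."""
      return n_hits**3 / (n_hits * n_par**3)

  for n in (50, 100, 200):
      # ratio grows as (n_hits / n_par)^2: ~20x, ~80x, ~320x
      print(n, f"{inversion_cost_ratio(n):.0f}x")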

  
> - Line 362-370: is the ANN-based selection trained on DIO or signal
>   MC? How do you plan to extract the signal efficiency for the
>   ANN-based selection? What is the gain in sensitivity coming from the
>   ANN-based selection, considering the improved resolution, the loss
>   of efficiency and the potential systematic uncertainty deriving from
>   an inaccurate estimate of this loss?

Response: there are a number of questions asked here; we will answer them one by one.

1) The ANN-based selection has been trained on a MC sample of electrons with a flat
   momentum distribution in the range 97-107 MeV/c.

2) As in any other experiment, the Mu2e trigger system includes multiple calibration triggers.
   The efficiency of the ANN-based track selection will be determined and validated using
   DIO electrons collected with a (pre-scaled) calorimeter-only trigger.

3) We observe that, for the same background level, the ANN-based selection outperforms
   selections usually referred to as "box cuts" by 10-15%.
   Clearly, this is a MC-based statement, and it goes without saying that
   in the running experiment the very first selections will be based on box-type cuts.
   As the detector is understood better and better, and its modeling becomes
   more and more accurate, one could proceed to more advanced track selections.
   With that said, the inputs of the track quality ANN are variables with
   well-understood behavior and correlations, so it is not a surprise
   that taking those correlations into account provides a modest improvement.

   As for the systematic uncertainties, evaluating them for ANN-based selections
   has always been a more labor-intensive procedure than for box cuts -
   varying the relevant physics parameters and propagating their variations through
   the ANN of choice takes a significant effort.
   However, this is a problem which has been solved many times already.
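
As a side illustration of point 2), a data-driven selection efficiency measured on a
prescaled-trigger calibration sample is typically quoted as a binomial ratio with, e.g.,
a Clopper-Pearson interval. The sketch below is not the collaboration's procedure, just one
standard way to do this; the counts are placeholders.

  from scipy.stats import beta

  def clopper_pearson(n_pass, n_total, cl=0.68):
      """Central binomial (Clopper-Pearson) interval for a selection efficiency."""
      alpha = 1.0 - cl
      lo = beta.ppf(alpha / 2, n_pass, n_total - n_pass + 1) if n_pass > 0 else 0.0
      hi = beta.ppf(1 - alpha / 2, n_pass + 1, n_total - n_pass) if n_pass < n_total else 1.0
      return n_pass / n_total, lo, hi

  # hypothetical counts of calibration tracks before/after the ANN selection (placeholders)
  eff, lo, hi = clopper_pearson(8100, 10000)
  print(f"efficiency = {eff:.3f}  (68% CL interval: [{lo:.3f}, {hi:.3f}])")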

> - Line 387: typo

Response: thanks for catching it! 

> - Line 531-551: you show that the template fit of the Michel spectrum
>   gives N2 = 9.7, and you set a systematics comparing with N2 = 8.5,
>   coming from a direct fit of the resolution function at 52.8
>   MeV/c. On the other hand, the fit of the resolution function at 105
>   MeV/c gives 6.5 (fig. 11). So, I would say that the dominant
>   uncertainty comes from the factor to be applied to correct this
>   inaccuracy in the estimate of N2, that will need to be scaled by
>   something like 6.5/9.7 ~ -33% going from 52.8 to 105 MeV/c. Isn't
>   it? It would be also worth to investigate the correlations among N2
>   and the other fitted parameters. If there are large correlations,
>   the difference in the resolution shape could be much lower than what
>   the N2 uncertainty alone could suggest.

Response: 

The situation here is a little bit different. 
There are two effects responsible for the widening of the momentum distributions 
in Fig 11 and 12: fluctuations of the energy losses and the multiple scattering. 
The effect of multiple scattering scales as 1/sqrt(p). 
At the same time, when the electron momentum changes from 50 MeV/c to 100 MeV/c,
the energy losses and their fluctuations do not follow a similar scaling.
When that is taken into account, it becomes clear that the values of the parameter
describing the tail of the resolution function at 100 MeV/c and 50 MeV/c do not
have to be the same. Therefore, one cannot directly compare the number 6.5
to the number 9.7: physics-wise, the two numbers have different meanings,
and there is no simple scaling-like mechanism transforming one into the other.

What matters is the comparison between the value of the parameter describing
the tail of the resolution function which could be extracted from the data
at ~50 MeV, from the analysis of the mu+ Michel spectrum, and the value
of the same parameter extracted from the MC-only analysis of the detector
response to ~50 MeV electrons, and that is 8.5 vs 9.7.

This difference tells to what extent one can rely on the parameterization
of the resolution function extracted from the data - the analysis method
may have hidden systematics.

Today, we interpret the whole of the difference as a measure of the systematic
uncertainty of the method. It is more likely to be an upper bound on it:
the difference includes a non-negligible statistical component, and we already
know what can be improved in the method itself.

A final comment: of course, this is not a done deal yet; the described method
needs to be tested and validated using the data, and we understand that.

> - Equation between line 882 and line 883: although this definition of
>   the numerical value of the 5sigma p-value is correct, it can be
>   confusing here, because you are treating a Poisson process. Also,
>   the sentence is not very rigorous. I suggest to rephrase: "Standard
>   for HEP, a discovery is defined as a measurement yielding a
>   significant, "5σ", deviation from the expected background,
>   corresponding to a p-value of 2.87e-7.", without any formula.

Response:

Thank you for the good suggestion - implemented!
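
For completeness, the quoted p-value is simply the one-sided Gaussian tail probability at
five standard deviations; a one-line check (not taken from the manuscript):

  from scipy.stats import norm

  # one-sided tail probability of a standard Gaussian at 5 sigma
  print(f"{norm.sf(5.0):.3e}")   # 2.867e-07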

> - Line 887-889: it is not clear to me what you mean by "average
>   deviation". Are you taking the average of the n-sigma significances?
>   Are you taking the average of the p-values? In the average, how do
>   you treat experiments with N=0 (that is a negative deviation with
>   respect to the expected background)? I see that all these things are
>   discussed in Ref. [62], nonetheless a clearer statement is needed
>   here.

Response: 

Technically, the average is usually calculated in the 'sigma space', 
by assuming a Gaussian probability distribution and converting p-values 
of pseudo-experiments into the number of standard deviations. Probably, 
this is what you meant by 'n-sigma' significances.
  
Within this approach, Nobs = 0 does not represent a special case.
For given expected signal and background strengths, muS and muB,
the probability to observe Nobs = 0 events is finite, P = exp(-muB-muS).
The P --> n-sigma conversion gives a negative value of n-sigma, which enters the averaging
without any technical complications.

The following sentence clarifying the procedure has been added after line 888:

"The average deviation is determined by assuming the Gaussian probability distribution,
converting the {\it p}-value of each generated pseudo-experiment into the number
of standard deviations $n_{\sigma}$, and calculating the average value of the distribution
of $n_{\sigma}$."
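
As an illustration of this averaging (not the manuscript's implementation, whose exact
p-value convention follows Ref. [62]), here is a minimal sketch with placeholder yields
muS = 5 and muB = 0.1; a mid-p convention is used here only to keep the Nobs = 0 case finite:

  import numpy as np
  from scipy.stats import norm, poisson

  rng = np.random.default_rng(1)
  mu_s, mu_b = 5.0, 0.1          # placeholder expected signal and background yields

  # pseudo-experiments generated under the signal + background hypothesis
  n_obs = rng.poisson(mu_s + mu_b, size=100_000)

  # background-only mid-p value: P(N > n_obs | mu_b) + 0.5 * P(N = n_obs | mu_b)
  p_mid = poisson.sf(n_obs, mu_b) + 0.5 * poisson.pmf(n_obs, mu_b)

  # Gaussian-equivalent significance of each pseudo-experiment, then the average
  n_sigma = norm.isf(p_mid)
  print(f"average significance: {n_sigma.mean():.2f} sigma")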


> - Line 917: does this efficiency include the acceptance of the
>   detector, or just the reconstruction and selection of electrons in
>   some nominal acceptance? In any case, "selection efficiency" can be
>   misleading, I would suggest "reconstruction and selection
>   efficiency".

Response: 

The number of 11.7% includes the acceptance - slightly more than 10%
of all signal events have reconstructed tracks which pass all selections.

Thanks for the suggestion - implemented!

Reviewer 2 Report

Dear Authors, 

 

This paper gives experimental sensitivity projections for the neutrinoless muon-to-electron conversion in Mu2e Run I. I found the paper clear and well written. Followings are my questions and comments to be addressed before publication.

 

- Figure 4: 

The variance of the proton pulse intensity distribution is very large due to SDF=60%. Is this realistic? I suppose this is caused by the accelerator operation. What is the main source?

 

- Line 297 

Does 33% include geometrical acceptance, or reconstruction efficiency only?

 

- Line 369 

Can you show the breakdown of the 26%? How you calculate?

 

- Sec. 6.1

It might be helpful if you mention what are inputs into PID ANN.

 

- Figure 8(right): 

The pion in the figure is a negative pion? If so, it's better to write pi^- instead of pi to make it clear.

 

- Line 425 

For the cosmic background of 0.046 events, how much efficiency you assume for CRV? 99.99%? It should be clearly quoted.

 

- Line 486--491 

For the energy loss calibration, I am slightly confused. The electron path length in the stopping target differs event by event, and there is no way to know it. How the calibration goes?

 

- Line 498--500 

You reduce the magnetic field for the calibrations. Did you consider uncertainties from changing the magnetic field? I suppose it's not negligible.

 

- Table 2: 

Why don't you insert the row for "2: uncertainty on the momentum resolution tail"?

 

- Figure 17(left): 

What the ratio (MCNP/GEANT4) = 0 means? Is the z-axis correct?

 

- Line 812--813 

How you optimize the absorbers? What are the material, shape, and thickness? How much muons are reduced by the absorbers? These are not mentioned enough.

 

- Line 823--825 

Did you consider RPC of pi^- emitted from antiproton annihilation in the stopping target?

 

- Table 8: 

For RPC out-of-time, you assume the extinction of 10^{-10}. Is this value just assumption or realistic estimation? I cannot find sentences discussing it in this paper.

 

-Figure 20(left): 

Is it better to use a log scale for the vertical axis like the right figure to see the small contributions?

 

Author Response

> Dear Authors, 


> This paper gives experimental sensitivity projections for the
> neutrinoless muon-to-electron conversion in Mu2e Run I. I found the
> paper clear and well written. Followings are my questions and comments
> to be addressed before publication.

We thank the reviewer for carefully reading the submitted manuscript and
asking detailed and deep questions.



> - Figure 4: 

> The variance of the proton pulse intensity distribution is very large
> due to SDF=60%. Is this realistic? I suppose this is caused by the
> accelerator operation. What is the main source?

Response:

The main source of the large expected pulse intensity variations is
the mechanism of the slow extraction of the proton beam.
For example, in the low-intensity running mode, each spill circulating
in the debuncher ring produces 63,000 proton pulses arriving
at the production target. Extraction of a single proton pulse therefore reduces
the total number of particles in the debuncher ring by only ~1/63,000, i.e. about 10^-5 (relative).
Such a small fraction is rather difficult to control very accurately.

> - Line 297 

> Does 33% include geometrical acceptance, or reconstruction efficiency only?

Response:

33% is the absolute number: about 1/3 of all simulated conversion electrons
have reconstructed tracks. So this number includes both the acceptance
and the reconstruction efficiency.

> - Line 369 

> Can you show the breakdown of the 26%? How you calculate?

Response:

- 33% of all simulated events have reconstructed tracks;
- 81% of all reconstructed tracks pass the selections;
- 0.33*0.81 = 0.2673, and the difference between this number and the 26% quoted
  in the paper is due to rounding.

> - Sec. 6.1

> It might be helpful if you mention what are inputs into PID ANN.

Response:

this is a very meaningful comment - thanks!

We understand the importance of specifying the inputs used for the ANN training,
and we went back and forth on this for quite some time.
Ultimately, we decided against doing that because accurately specifying
the inputs would require a rather lengthy and very technical discussion
of the reconstruction algorithms and their outputs.
That would overload the manuscript with technical details,
which we tried to avoid in order to keep the level of detail
consistent across sections.

However, as the question has been asked, we could expand and clarify the matter.

Most of the separation between electrons and muons in the Mu2e detector comes from
their distributions in E/P and time of flight.
Here P is the reconstructed track momentum and E is the energy of the reconstructed
calorimeter cluster.

- a 105 MeV/c electron deposits slightly less than 100 MeV of energy in the calorimeter,
  a 105 MeV/c muon - about 40 MeV. The separation is clear, with one caveat.
  There exists a source of 'E/P confusion' between electrons and muons,
  and that source is due to edge effects. The calorimeter crystals are 20 cm long,
  while the calorimeter disk annulus is ~30 cm wide. On average, particles enter
  the calorimeter at an angle of about 45-50 degrees. Because of that, a relatively
  large fraction of electron trajectories crosses much less than 20 cm of crystal,
  resulting in a significantly lower deposited energy.
  In this momentum range, muons stop much faster - a 105 MeV/c muon in CsI has
  a range of ~5 cm - so the impact of non-hermeticity is much smaller.
  
- for the three meter-long Mu2e tracker, the difference between the time of flight
  of 105 MeV/c electrons and muons is about 5 ns, with the core timing resolution
  being slightly better than 1 ns (a rough kinematic estimate is sketched after this response).

  The track fit uses the calorimeter cluster as an additional hit.
  That results in somewhat convoluted (track+cluster) variables used
  for the PID ANN training.
  Explaining those variables in the paper would require a detailed explanation
  of the track fitting procedure, but wouldn't change the physics reasoning 
  presented above. 
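
The ~5 ns figure can be cross-checked with a simple kinematic estimate. The sketch below
uses a straight 3 m path, which is a lower bound on the actual helical path length and
therefore slightly underestimates the difference; apart from the 105 MeV/c momentum and
the 3 m tracker length quoted above, the inputs are standard particle masses, not
Mu2e-specific numbers.

  import numpy as np

  p = 105.0                  # MeV/c, track momentum
  m_e, m_mu = 0.511, 105.66  # MeV/c^2, electron and muon masses
  L = 3.0                    # m, straight-line tracker length (lower bound on the helical path)
  c = 0.299792458            # m/ns, speed of light

  def tof(p, m, L):
      beta = p / np.hypot(p, m)   # beta = p / E
      return L / (beta * c)       # time of flight in ns

  dt = tof(p, m_mu, L) - tof(p, m_e, L)
  # prints roughly 10.0 ns, 14.2 ns, 4.2 ns; the longer helical path brings the
  # difference up toward the ~5 ns quoted in the response
  print(f"electron: {tof(p, m_e, L):.1f} ns, muon: {tof(p, m_mu, L):.1f} ns, difference: {dt:.1f} ns")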

> - Figure 8(right): 

> The pion in the figure is a negative pion? If so, it's better to write
> pi^- instead of pi to make it clear.

Response:

In the case of Fig. 8 (right), the sign of the pion is not important; it could be either a pi+ or a pi-.
In the displayed event, a charged pion, likely produced in a neutron interaction
with the DS cryostat, hits one of the straws and produces an E > 105 MeV photon which subsequently
converts, producing a 105 MeV/c electron.
The electron initially moves upstream, reflects in the DS magnetic mirror, and returns
to the tracker. The 'reflected' part of the electron trajectory, reconstructed as a track,
is responsible for the signature of a conversion electron.

> - Line 425 

> For the cosmic background of 0.046 events, how much efficiency you
> assume for CRV? 99.99%? It should be clearly quoted.

Response:

This is a very good question, and we think it is explicitly answered
in the manuscript. Section 3, which describes the simulation framework,
starts by saying that the simulations and the reconstruction assume
a perfectly aligned and calibrated detector with no dead channels.

Going into more detail, in the CRV simulation, there is no parameter with 
the explicit meaning of a single scintillation counter efficiency.
The simulation inputs are the scintillation counter light yields measured 
at the test beam, the readout thresholds, and the measured time dependence
of the light yield degradation.

In the running experiment, the rejection efficiency of the cosmic muons 
going through the CRV fiducial will likely be defined by the number of dead
channels rather than by the readout thresholds. 

We hope that the presented considerations do clarify the issue.

However, behind the asked question one could read another one - "What if the real
cosmic background is much higher than the number presented in the manuscript?".

We'd like to touch on that as well. In a general form, the impact of the
experimental background being higher than the simulated one is discussed in Sec. 8.4.
A total background increase by a factor of three degrades the experimental
sensitivity by about 30%. The assumption that the increase of the total is exclusively
due to a higher cosmic background translates into the cosmic background being ~5.5 times
higher than the number of 0.046 coming out of the simulation.

So a ~6 times higher cosmic background degrades the experimental sensitivity by ~30%.
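
For readers who want to see how these two statements fit together, here is a minimal
arithmetic sketch backing out the total background they imply; the manuscript's own
background table should of course take precedence.

  # a 3x increase of the total background, attributed entirely to cosmics,
  # corresponds to the cosmic component growing by a factor of ~5.5:
  #   b_cosmic * 5.5 = b_cosmic + 2 * b_total
  b_cosmic = 0.046                   # simulated cosmic background, events
  cosmic_scale = 5.5                 # quoted cosmic enhancement factor
  b_total = (cosmic_scale - 1.0) * b_cosmic / 2.0
  print(f"implied total Run I background: ~{b_total:.2f} events")   # ~0.10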

> - Line 486--491 

> For the energy loss calibration, I am slightly confused. The electron
> path length in the stopping target differs event by event, and there
> is no way to know it. How the calibration goes?

Response:

The calibration of the energy losses relies on cosmic ray events entering the
tracker in the upstream direction, reflecting in the DS magnetic mirror,
and returning back to the tracker (please see the paragraph starting at line 485).

According to the MC estimates, in about 90% of such events the particles
entering the detector are muons, the rest are electrons.

The difference between the momenta of the two tracks corresponding to the upstream
and downstream 'legs' of the same particle determines the energy lost by the particle
in the stopping target and the proton absorber. While the energy losses vary from
one event to another, the mean energy loss averaged over many events, as well as
the shape of the energy loss distribution, constrains the total amount
of material in front of the tracker.

> - Line 498--500 

> You reduce the magnetic field for the calibrations. Did you consider
> uncertainties from changing the magnetic field? I suppose it's not
> negligible.

Response:

We plan to map the magnetic field in the detector solenoid at at least
two values of the field - B = 1 T and B = 0.5 T. The momentum calibration
based on the Michel decays of stopped mu+'s will be using tracks with pretty
much the same topology as the expected mu- --> e- conversion signal.
The relative accuracy of the field measurement is 10^-4, so we do not
expect any hidden systematic effects at the 10^-3 relative scale
resulting from the B-field scaling.

There is, however, a different effect which needs to be taken into account.
The calibration with Michel decays will be using the stopped mu^+ beam.
The average momentum of the mu+ beam entering the Mu2e detector solenoid
is slightly different from the average momentum of the mu^- beam.
That should result in a small difference between the average energy losses
of electrons and positrons produced in muon decays in the stopping target
and reaching the tracker. 

The total mean energy loss in front of the tracker is about 600 keV.
The difference between the mean energy losses of e- and e+ is much smaller
than that.

In summary, we do not expect 10^-3 (100 keV/c) level systematic uncertainties
on the momentum scale resulting from the scale calibration at B = 0.5 T.

> - Table 2: 

> Why don't you insert the row for "2: uncertainty on the momentum resolution tail"?

Response:

It is true that the uncertainty on the momentum resolution tail results in an
uncertainty on the DIO background. Section 7.2 of the manuscript describes
the procedure we plan to use to estimate that uncertainty.
What can be estimated now, before Mu2e starts taking data,
is the intrinsic uncertainty of the procedure itself, and we show that it is
a) small enough and b) could be further reduced.
Understanding of the real uncertainty on the DIO background due to the uncertainty
on the tail will come from the comparison of the high-momentum tails of
the Michel positron momentum spectra in data and MC.
That comparison requires the data, so the manuscript states what can be stated today:
we expect the intrinsic uncertainty of the proposed procedure to be small on the scale
of the Mu2e error budget.

> - Figure 17(left): 

> What the ratio (MCNP/GEANT4) = 0 means? Is the z-axis correct?

Response:

The estimate of the antiproton background does not rely on the absolute
cross section predictions of either MC code; the MC's are only used to calculate
the transport efficiencies and understand the corresponding uncertainties.

The violet areas in Fig. 17 (left) correspond to the ratio of probabilities
for a particle, produced in the production target, to enter the transport solenoid,
calculated with Geant4 and MCNP, P(MCNP)/P(Geant4) < 0.1.

Acceptances are normalized to the total number of simulated pA interactions.
So the location of the violet and navy areas tells us that, for the same number of simulated
pA interactions, MCNP produces far fewer forward antiprotons than Geant4
with P > 0.5 GeV/c (forward production corresponds to cos theta(Mu2e) = -1)
which interact in the production target, change their direction, and enter the transport
solenoid, becoming a potential background source.

The most important takeaway from this plot is, however, the location of the
green areas. In the region of the phase space which dominates the Mu2e background,
P < 0.5 GeV/c, the acceptances calculated using the two MC codes are quite close.

So, yes, the color legend and the scale of the Z axis are correct.

> - Line 812--813 

> How you optimize the absorbers? What are the material, shape, and
> thickness? How much muons are reduced by the absorbers?
> These are not mentioned enough.

Response:

We very much appreciate the reviewer's interest in this level of detail.

Despite its length, the manuscript only scratches the surface of every specific
topic it discusses. We are fully aware of that being true for every section
of the manuscript, including the section describing the antiproton background.

However, as the question was asked, here is the answer:

The optimization requirements were rather loose:
the expected background from antiprotons had to be comparable with the contributions
from other sources. We definitely didn't want antiprotons to be the dominant
background source; at the same time, suppressing one particular source to a level
much lower than the rest is not worth the effort.

There are four pbar absorption elements :

- an aluminum absorption window in front of the transport solenoid (TS)
- a block of material on the bottom of the TS1 collimator
- a titanium absorption / pressure window in the middle of the TS
- a wedge-shaped Al absorber located next to the Ti window

Each of these elements is responsible for a certain part of the "transport phase space" available to antiprotons. All together, they reduce the antiproton transport efficiency down to the level corresponding to the expected antiproton-induced background of 0.01 events.

Clearly, the presence of the antiproton absorbers also leads to additional muon losses. Their impact is small: the antiproton absorbers reduce the muon transport efficiency by about 5%.

A sentence specifying that has been added to Section 2.2, where the absorption elements are first introduced.

> - Line 823--825 

> Did you consider RPC of pi^- emitted from antiproton annihilation in the stopping target?

Response:

Yes, we did. Section 7.5.3, "Delayed RPC Simulations", is devoted to discussing
the RPC component of the antiproton-induced background.

> - Table 8:
>   
> For RPC out-of-time, you assume the extinction of 10^{-10}. Is this
> value just assumption or realistic estimation? I cannot find sentences
> discussing it in this paper.

Response:

The extinction value of 1e-10 is a Mu2e technical requirement.

However, the proton beam extinction of 1.4x10^-10 has already been
demonstrated at J-PARC [https://pos.sissa.it/402/104/pdf],
so the requirement is quite realistic.

It is worth noting that for the signal timing window starting at 640 ns,
the estimated contribution of the out-of-time RPC is about 10% of the
in-time RPC contribution.


> -Figure 20(left): 

> Is it better to use a log scale for the vertical axis like the right
> figure to see the small contributions?

Response:

The choice of the Y-scale, linear vs log, in many cases is a matter of taste.
In other cases it matters. The logic behind the choice of the linear scale
for the momentum distribution is as follows.

With the exception of the DIO, the e- momentum distributions for all background
processes in the interval 100-105 MeV/c are consistent with being flat.
So plotting Fig. 20(left) in a log scale would not really reveal many
new features, but would make the shape of the signal less pronounced.
The most important feature the plot conveys graphically is that
the expected background is << 1 event.

This is different from the timing distributions in Fig. 20(right). 
The timing distributions corresponding to different background sources 
are different, and that difference would not be visible in the linear scale.
So for Fig. 20(right) the preferred choice is the log vertical scale.

Reviewer 3 Report

The article presents feasibility studies on the hypothetical neutrinoless mu-to-e conversion that are planned for the Mu2e experiment at Fermilab. The paper consists of a comprehensive review of the experimental facility, the data-taking strategy, and possible sources of systematic uncertainties. The specific details of the operation of the different subsystems have already been published in a number of journals and are correctly cited here.

In order to estimate the sensitivity and to evaluate uncertainties, up-to-date versions of Monte-Carlo event generators were used. An undoubted advantage of the presented study is that the simulations done in Geant4 were cross-checked with other simulation tools. It is also clear from the paper that part of the background effects could be suppressed in Run 2 after analysis of the Run 1 data.

The presented analysis indicates that the successful completion of the experimental program would allow pushing the present experimental limit, obtained by the SINDRUM II experiment, by a factor of 1000.

In general, the paper achieves its goal of presenting the experimental program in a clear way. The program is of high importance for beyond-the-standard-model physics; it has discovery potential and even without a discovery will advance the current knowledge significantly.

With my comment I would like to highlight the weak point of the background estimation procedure, namely, the lack of knowledge on pion and antiproton production cross sections at the given collision energy. Due to this fact the authors are right to take rather conservative estimates of the uncertainties. Clearly some sort of synergy with other experiments that can study hadron production is needed to further improve sensitivity.

 

Author Response

> The article presents feasibility studies on hypothetical neutrinoless
> mu-to-e conversion that are planned for the Mu2e experiment at
> Fermilab. The paper consist of comprehensive review of the
> experimental facility, data-taking strategy, and possible sources of
> systematic uncertainties. The specific details of work of different
> subsystems have already been published in a number of journals and
> correctly cited here.

> In order to estimate sensitivity and to evaluate uncertainties the
> up-to-date versions of Monte-Carlo event generators were
> used. Undoubted advantage of the presented study is that simulation
> that were done in Geant4 were cross-checked with other simulation
> tools. It is also clear from the paper that part of the background
> effects would be possible to suppress in Run-2 after analysis of Run-1
> data.

> Presented analysis indicates that the successful completion of the
> experimental program would allow to push the present experimental
> limit obtained by the SINDRUM II experimental by a factor of 1000.

> In general, the paper achieves its goal of presenting the experimental
> program in a clear way. The program is of high importance for the
> beyond-the-standard-model physics, it has a discovery potential and
> even without discovery will advance the current knowledge
> significantly.

> With my comment I would like to highlight the weak point of the
> background estimation procedure, namely, the lack of knowledge on pion
> and antiproton production cross sections at the given collision
> energy. Due to this fact the authors are right to take rather
> conservative estimates of the uncertainties. Clearly some sort of
> synergy with other experiments that can study hadron production is
> needed to further improve sensitivity.

Response:

We thank the reviewer for carefully reading the manuscript and appreciate
the comment on the uncertainties on the production cross sections,
with which we fully agree.

A couple of quick comments in response.

As Mu2e will measure the stopped muon flux with an expected accuracy
of about 10%, the uncertainty on the pion production cross section will,
to a large extent, cancel out of the experimental results Mu2e will publish.

The uncertainty on the antiproton production cross section is difficult
to reduce. However, it is a 100% uncertainty on a background whose
mean expected value is 0.01, i.e. << 1. As long as that holds,
the Mu2e experimental reach is not very sensitive to the scale
of the absolute uncertainty on the antiproton-induced background.

Reviewer 4 Report

The article reports an extensive sensitivity study of the Mu2e experiment aiming to search for neutrinoless muon conversion into electron in Al atoms. The paper is well written and organized, all arguments are well described and the sensitivity calculation is clearly reported. I have no major comments about the article that can be published in the present form. I just recommend the authors to check for some minor typo or misprinting (as example line 387 must be canceled). 

Author Response

> The article reports an extensive sensitivity study of the Mu2e
> experiment aiming to search for neutrinoless muon conversion into
> electron in Al atoms. The paper is well written and organized, all
> arguments are well described and the sensitivity calculation is
> clearly reported. I have no major comments about the article that can
> be published in the present form. I just recommend the authors to
> check for some minor typo or misprinting (as example line 387 must be
> canceled).

Response:

We appreciate the reviewer's careful reading of the manuscript
and the overall positive response.

The 'leftover' typo on line 387 has been fixed - thank you very much
for spotting it!
