[Rivet] CDF Run-1 pTZ

Peter Skands skands at fnal.gov
Wed Feb 18 00:21:10 GMT 2009


Hi,

I wouldn't phrase it so much as *we* needing the data in as uncorrected a 
form as possible (while still correcting for detector effects), but rather 
that an experimental measurement should simply not be diluted by theoretical 
model-dependence unless absolutely necessary - since this degrades the 
precision of the measurement!

Then, if so desired, numbers containing larger corrections may also be 
provided, to compare to less sophisticated theoretical calculations, but 
it's of critical importance that the legacy of each experimental 
measurement be a number, left for the future, with the highest possible 
intrinsic precision. And filling out unmeasured (or poorly measured) 
phase-space regions with pure modeling is an unnecessary degradation of 
the measurement in that context.

Currently, Monte Carlo generators are the only "theoretical 
calculations" that can be directly compared to such measurements, and so 
that's why "we" want those results. But the broader implication is much 
more important than that.

So the logic would be that every measurement should *at least* present a 
result in which unmeasured or poorly measured phase-space regions are cut 
out entirely, and the remainder is corrected only for detector efficiency 
effects. There should be a well-defined "minimal correction" procedure to 
obtain this result. This result could be called the "fiducial" result, 
i.e., the raw measurement corrected back to 100% efficiency inside the 
fiducial acceptance of the measurement (in eta and pT) and to 0% outside.

Then one can also start correcting for UE, out-of-cone, isolation, and so 
on, in order to compare to less exclusive theoretical calculations, 
basically unfolding the exclusive effects according to the best modeling 
of the day (whether theory- or data-driven). The resulting measurement 
could be called "calibrated".
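
Just to illustrate what such a particle-level fiducial definition could 
look like, here is a rough sketch in Python. The dR = 0.2 dressing cone is 
the same as in the "cone of DR=0.2" example below, while the pT and eta 
cuts are placeholders for illustration only, not the values of any actual 
analysis:

import math
from dataclasses import dataclass

@dataclass
class Particle:
    pid: int     # PDG ID: 11 = electron, 22 = photon, ...
    px: float
    py: float
    pz: float
    e: float

    @property
    def pt(self):
        return math.hypot(self.px, self.py)

    @property
    def eta(self):
        # assumes pt > 0
        p = math.sqrt(self.px**2 + self.py**2 + self.pz**2)
        return 0.5 * math.log((p + self.pz) / (p - self.pz))

    @property
    def phi(self):
        return math.atan2(self.py, self.px)

def delta_r(a, b):
    dphi = abs(a.phi - b.phi)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(a.eta - b.eta, dphi)

def dress(lepton, photons, cone=0.2):
    # The "lepton definition": add back all photons within dR < cone.
    # (Naive: a photon close to both leptons would be counted twice.)
    px, py, pz, e = lepton.px, lepton.py, lepton.pz, lepton.e
    for ph in photons:
        if delta_r(lepton, ph) < cone:
            px += ph.px; py += ph.py; pz += ph.pz; e += ph.e
    return Particle(lepton.pid, px, py, pz, e)

def fiducial_z_pt(particles, pt_min=20.0, eta_max=1.1, cone=0.2):
    # Return the dressed-dilepton pT if the event is inside the fiducial
    # acceptance, and None otherwise -- no extrapolation into unmeasured
    # regions, those events are simply dropped.
    photons = [p for p in particles if p.pid == 22]
    leptons = [dress(p, photons, cone) for p in particles if abs(p.pid) == 11]
    leptons = [l for l in leptons if l.pt > pt_min and abs(l.eta) < eta_max]
    if len(leptons) < 2:
        return None
    l1, l2 = sorted(leptons, key=lambda l: l.pt, reverse=True)[:2]
    return math.hypot(l1.px + l2.px, l1.py + l2.py)

The point is that everything in the definition refers only to stable 
final-state particles inside the acceptance, so the same numbers can be 
recomputed from any generator's event record without invoking a model.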
There should then be a detailed account of what is involved in the 
calibration, and there should also at least be a plot showing the total 
correction factor involved in going from the fiducial to the calibrated 
plot, as a function of the plotted variable.

E.g., for the case of the D0 DY measurement: if they had a plot showing 
just the mass of the lepton pair, the "minimal" procedure would result in 
a plot containing only points inside the region 71-111 GeV, corresponding 
to the actual fiducial acceptance of the measurement. This would still be 
corrected for track-finding efficiency, etc., and one would have to 
clearly define the observable; e.g., the effective definition of a lepton 
would be part of the observable, whether it is an isolated lepton plus a 
cone of DR=0.2 or whatever. The "calibrated" plot, in contrast, would 
contain points from 40 to 200 GeV in their case. The associated 
correction-factor plot would presumably be close to unity in the range 
71-111 GeV, but it would show an almost infinite correction factor 
applied outside that range.

The more interesting plot is the DY pT, of course. Here it would be nice 
to know how large the corrections are when going from the fiducial 
distribution to the calibrated one, as a function of the plotted variable. 
Since the total cross section in the unmeasured regions is small compared 
to that inside the measured region, presumably the correction factors are 
not enormous in this case, but still, they dilute the actual measured 
result (and are causing us to become paranoid and have headaches...).
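
The correction-factor plot itself is then trivial bookkeeping; a sketch 
like the one below (the bin contents are made up purely for illustration, 
they are not D0 numbers) already makes the point, with factors near unity 
inside the measured region and divergent ones outside it:

def correction_factors(calibrated, fiducial):
    # Bin-by-bin ratio of the calibrated distribution to the fiducial one,
    # on the same binning.  Bins that are empty in the fiducial result are
    # exactly the regions filled purely by modelling.
    factors = []
    for cal, fid in zip(calibrated, fiducial):
        factors.append(cal / fid if fid > 0 else float("inf"))
    return factors

# Illustrative bin contents only (not D0 numbers): near unity inside the
# measured mass window, divergent outside it.
print(correction_factors(calibrated=[6.0, 120.0, 118.0, 5.0],
                         fiducial=[0.0, 117.0, 116.0, 0.0]))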

Cheers,
Peter

Andy Buckley wrote:
> Emily Nurse wrote:
>> We can double check with Gavin but I think this is the case (i.e., from
>> the statement "We use PHOTOS [15] to simulate the effects of
>> final state photon radiation.").  I also asked the author of the analysis
>> a while ago and I think that's what he said.
>>
>> I'll bet the Run I analysis did the same but we should check.
> 
> Ok, worth checking.
> 
>> So it seems to be a pattern that all the analyses so far have done this,
>> so we need to cluster back all the photons that we think Photos would
>> have simulated. I can make some plots of dR between the lepton and
>> photon in W / Z events and maybe we can decide on the best cut to make? 
> 
> That would be great. We need some "opposing" distribution, though, like
> the dR between the lepton and background photons, which will rise as the
> signal falls: the trade-off determines where to place the cut. Otherwise
> we'll get a distribution which tells us that the best cone has R = pi ;)
> I don't know if you need to simulate pile-up for that at the Tevatron.
> 
>> I think that the D0 Run II analyses also correct for QED radiation,
>> in an attempt to present the "true Z pT".
> [...]
>> For the future we really need to communicate these concerns with the
>> electroweak convenors and authors of any current analyses. We should
>> define what we want.
> 
> We are seeing this from a rather different direction than the experiment
> is: from our point of view it's crucial that we have reference data
> which can be compared against without having to unfold some model, but
> from the point of view of writing a paper that measures the Z boson pT,
> the unfolding was necessary to get to the "truth". That's no bad thing,
> but I think we need to get the message across that *we* need the
> timeless, un-unfolded, model independent measurement at least as much as
> we need a best possible inference of what the boson is actually doing.
>   In a sense, all we want for MC tuning and validation is enough
> different experimental distributions to roughly span the space of
> simulated physics. Whether those distributions have a super-clean
> physical interpretation is less important than whether they can be
> readily constructed from a final state particle record ;)
> 
>> For electrons it is probably quite simple:
>> "pT of e+e- pair where an electron is defined as a dR = X cone of EM
>> objects". 
> 
> Sounds fine.
> 
>> For muons it's a bit more difficult: we don't cluster any photons in the
>> muon momentum measurement, but often MIP and/or isolation cuts are made.
>> Do we ask the experiments to correct for this using some QED model
>> or do we attempt to model the cuts in our analyses?
> 
> I would suggest both: the model-independent part should be preserved as
> a matter of principle; the QED correction can be used to make physics
> conclusions that we are less interested in for MC tuning purposes. I
> would rather model the cuts than attempt to encode some bin-by-bin (or
> worse) corrections, but mostly I'd like to have the option!
> 
> Andy
> 
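
As a footnote to the dR discussion above, a minimal sketch of the 
bookkeeping behind the plots Emily proposes might look like the following. 
The split of photons into "FSR" and "background" via the event record, the 
binning, and the dR < 1.0 range are all assumptions for illustration only:

import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def fill_dr_histograms(events, nbins=20, r_max=1.0):
    # events: list of dicts with a 'lepton' entry and two photon lists,
    # 'fsr_photons' and 'other_photons'; every particle is an (eta, phi) pair.
    sig = [0] * nbins   # dR(lepton, FSR photon)
    bkg = [0] * nbins   # dR(lepton, any other photon)
    width = r_max / nbins
    for ev in events:
        leta, lphi = ev["lepton"]
        for key, hist in (("fsr_photons", sig), ("other_photons", bkg)):
            for peta, pphi in ev[key]:
                dr = delta_r(leta, lphi, peta, pphi)
                if dr < r_max:
                    hist[int(dr / width)] += 1
    return sig, bkg

# The dressing cone would then be chosen around where the falling signal
# histogram meets the rising background one, rather than by maximising the
# captured signal alone (which would just give R = pi).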

