Hi all,

I think that the D0 Run II analysis also corrects for QED radiation, in an attempt to present the "true Z pT". We can double check with Gavin, but I think this is the case (i.e. from the statement "We use PHOTOS [15] to simulate the effects of final state photon radiation."). I also asked the author of the analysis a while ago, and I think that's what he said.

I'll bet the Run I analysis did the same, but we should check.

So it seems to be a pattern that all the analyses so far have done this, which means we need to cluster back all the photons that we think Photos would have simulated. I can make some plots of dR between the lepton and the photon in W/Z events, and maybe we can decide on the best cut to make?
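To be concrete about what "clustering back" would look like, here is a rough sketch in Python (not Rivet code; the 0.2 default cone and the plain-dict four-vectors are just placeholders for whatever we actually use):

import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R = sqrt(deta^2 + dphi^2), with dphi wrapped into [-pi, pi].
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def dress_lepton(lepton, photons, cone=0.2):
    # Add every final-state photon inside the cone back into the lepton
    # four-momentum.  'lepton' and each photon are plain dicts with
    # px, py, pz, E, eta, phi -- stand-ins for a real four-vector class.
    dressed = dict(lepton)
    for gamma in photons:
        if delta_r(lepton["eta"], lepton["phi"],
                   gamma["eta"], gamma["phi"]) < cone:
            for k in ("px", "py", "pz", "E"):
                dressed[k] += gamma[k]
    # The eta/phi of the dressed lepton are not recomputed here; for the
    # Z pT we only need the summed px, py of the two dressed leptons.
    return dressed

The Z pT would then just be the transverse momentum of the pair built from the two dressed leptons, and the same function with a different cone value would cover whatever cut we settle on from the dR plots.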
For the future we really need to communicate these concerns to the electroweak convenors and the authors of any current analyses. We should define what we want. For electrons it is probably quite simple:
"pT of the e+e- pair, where an electron is defined as a dR = X cone of EM objects".

For muons it's a bit more difficult: we don't cluster any photons in the muon momentum measurement, but often MIP and/or isolation cuts are made. Do we ask the experiments to correct for this using some QED model, or do we attempt to model the cuts in our analyses?

Emily.


On 17 Feb 2009, at 17:51, Andy Buckley wrote:

> Peter Skands wrote:
>> Hi Andy,
>>
>> Yeah, correcting all the way back to generator-Z is not so great, I
>> agree. I also heard the DeltaR=0.2 figure. What I do is actually just
>> plot the pTZ at generator level now, since the corrections from outside
>> DeltaR=0.2 amount to finite QED-suppressed contributions, so the main
>> correction is going from unclustered to 0.2 and then the rest of the
>> way should be very minor.
>
> Right: the 0.2 cone should pick up the log-enhanced region.
>
>> I agree that this cannot be the procedure adopted in Rivet, since you
>> actually have to reproduce the analysis accurately.
>
> Hrmm, as far as reasonably possible: in practice the MC analysis has to
> approximate so many things which are complex in the experiment that I
> wouldn't be completely averse to using a cone of whatever radius and
> stopping there, as long as the correction is heavily suppressed. Which
> in this case it should be. Anyway, given that Rivet currently does no
> photon clustering, anything that picks up the log-enhanced QED emissions
> is an improvement!
>
>> I would try to see if I could get a reference on that 0.2 from anyone
>> in D0, and then refer to that. Anyway, as you can see, I don't think
>> the 'clever' CDF approach is completely stupid, since the difference
>> between it and D0 should be down by a genuine (non-log-enhanced)
>> alpha_em suppression.
>
> Sure, hence it's nowhere near as odious as the hadronisation unfolding
> (or the D0 mess you mention below!).
>
>> But in all fairness D0 also screwed up, and quite a bit worse in my
>> opinion. They measured the Z in a mass window from 71 to 111 GeV.
>> However, they then *correct* that back to a mass window from 40 to
>> 200 GeV, for no bloody reason! They fill that entire phase-space
>> region with pure model dependence. So we can't trust those numbers
>> completely either; they probably represent 90% Tevatron and 10% Resbos.
>> Why they wanted Resbos to contaminate their measurement when they
>> could have just left the mass window as it was, I just have no clue.
>
> *sigh* This is new to me.
>
> Is this in the Run II measurement, or Run I? Gavin was enthusiastic that
> we should use the newer one, so we can all give him an earful if it's
> the Resbos-contaminated version! I don't know if the Run I measurement
> is problematic in that way, but I recall the peak binning is badly
> chosen for tuning comparisons.
>
> Andy
>
> --
> Dr Andy Buckley
> Institute for Particle Physics Phenomenology
> Durham University
> 0191 3343798 | 0191 3732613 | www.insectnation.org