[Rivet] mcplots - Problem with the OPAL 2004 Rivet analysis ?

Peter Skands peter.skands at cern.ch
Mon May 30 16:07:19 BST 2011


Hi guys,

We are scheduling an update of mcplots for early next week. We just 
wanted to check if we should anticipate any new Rivet releases in the 
near future?

Of special relevance right now: our plan is to leave out the 
energy-scaling OPAL 2004 analysis, despite its large physics interest, 
due to the problems I outlined in my mail below (summary: the data 
give consistent values between OPAL and ALEPH, but the MC results are 
very different, which appears to indicate that something could be 
wrong in the calculation of the observable in the Rivet code for the 
OPAL analysis). I think the upshot was that you decided to look into 
the Rivet code for the OPAL analysis, but I do not know if the problem 
has been identified at the code level. This removes a significant 
number of plots from the mcplots launch (as you can see from the dev 
site), but we decided it is worse to show something that may be wrong 
than to show nothing at all. Unfortunately, we have not ourselves had 
the resources to look into the problem at the technical level, so for 
now we still have only the summary below to report.

Cheers
Peter



On 5/20/11 6:56 PM, Anton Karneyeu wrote:
> Hi Hendrik,
> the plots on http://mcplots-dev.cern.ch correspond to Rivet 1.5.1a0.
>
> Anton
>
> Hendrik Hoeth:
>> Hi Peter,
>>
>> what Rivet version are you using?
>>
>> Cheers,
>>
>> Hendrik
>>

>>> Peter Skands wrote:

Dear Rivet people (cc mcplots)

For the next update of mcplots, we have incorporated more LEP analyses, 
so that we can now show both ALEPH and OPAL data, with OPAL allowing us 
to show the scaling with LEP energy. Specifically, we have:

   OPAL_2004_S6132243
   ALEPH_2004_S5765862

From the papers, these analyses purport to use exactly the same 
definition of the particle level (i.e., stable particles, removal of 
ISR effects, and subtraction of 4-fermion events). The assumption that 
they really do measure the same thing is corroborated by the fact that 
the two experiments report mutually compatible values. For instance, 
the very lowest bin of Thrust at 91 GeV, which is extremely sensitive 
to just about anything you do, is

DATA:
  ALEPH 1-T [0.00:0.01] = 1.31 plus/minus bla
  OPAL  1-T [0.00:0.01] = 1.28 plus/minus bla

(Note that ALEPH reports T, but changing to 1-T is not difficult.)
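To illustrate, the T -> 1-T change is just a relabelling of the bin 
edges: each edge x maps to 1-x, which reverses the bin order, and since 
|d(1-T)/dT| = 1, equal-width bins stay equal-width, so normalized bin 
values carry over unchanged. A minimal sketch (the function name and 
the example edges are my own, not from either analysis):

```python
def t_to_one_minus_t(bin_edges, values):
    """Convert a histogram in T (edges ascending) to one in 1-T.

    Each edge x becomes 1-x, so the edge list is mirrored and the
    bin contents are reversed to match; the Jacobian is 1, so
    normalized values need no rescaling.
    """
    new_edges = [1.0 - e for e in reversed(bin_edges)]
    new_values = list(reversed(values))
    return new_edges, new_values

# hypothetical example: the T bin [0.99, 1.00] becomes
# the 1-T bin [0.00, 0.01]
edges, vals = t_to_one_minus_t([0.95, 0.99, 1.00], [0.9, 1.31])
```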

Accordingly, the Monte Carlos should also calculate the same thing 
when compared to these two data sets, and we use the same run cards on 
mcplots for both. However, the corresponding Rivet analyses come out 
with very different numbers. For Pythia 6 with the Perugia 2011 tune 
(350), for instance,

PYTHIA 6 (350):
  ALEPH 1-T [0.00:0.01] = 1.14 (fine, it's a bit low)
  OPAL  1-T [0.00:0.01] = 3.23 ( !!! NOT FINE !!! )

Cf.
  OPAL : http://mcplots-dev.cern.ch/?query=plots,ee,zhad,tau,,
  ALEPH : http://mcplots-dev.cern.ch/?query=plots,ee,zhad,T,,

You see now why I didn't bother quoting the uncertainties above. The 
MC is a factor of ~3 off from the data in the OPAL analysis (as are 
all the other generators), *despite* being almost on the mark for the 
ALEPH one, and despite the fact that the two analyses allegedly 
measured the same thing.

So, I am hoping there is maybe a simple bug in the OPAL analysis? Note 
that I see similar differences in other event-shape distributions. 
Thrust was just a convenient example since they even used the same bin 
sizes there, making the comparison especially direct.
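For anyone debugging the Rivet code, it may help to have an 
independent reference value: thrust can be computed exactly by brute 
force, because the thrust axis is parallel to the vector sum of the 
momenta in one hemisphere, so T = 2 max_S |sum_{i in S} p_i| / sum_i 
|p_i| over all particle subsets S. This is NOT the Rivet 
implementation, just a cross-check sketch of my own (O(2^n), fine for 
small test events):

```python
import itertools
import math

def thrust(momenta):
    """Exact thrust of a list of (px, py, pz) momenta.

    Uses the fact that the optimal axis lies along the vector sum of
    one hemisphere: T = 2 * max over subsets S of |sum_{i in S} p_i|,
    divided by the scalar sum of momenta. Brute force over all 2^n
    subsets, so only suitable as a cross-check on small events.
    """
    norm = sum(math.sqrt(px*px + py*py + pz*pz) for px, py, pz in momenta)
    best = 0.0
    for r in range(1, len(momenta) + 1):
        for subset in itertools.combinations(momenta, r):
            sx = sum(p[0] for p in subset)
            sy = sum(p[1] for p in subset)
            sz = sum(p[2] for p in subset)
            best = max(best, math.sqrt(sx*sx + sy*sy + sz*sz))
    return 2.0 * best / norm

# back-to-back two-particle event: perfectly pencil-like, T = 1
print(thrust([(0.0, 0.0, 45.0), (0.0, 0.0, -45.0)]))  # -> 1.0
```

Feeding the same final-state particles through this and through both 
Rivet analyses should make it obvious which of the two disagrees with 
the naive definition.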

Cheers,
Peter

