[Rivet] more fun with alpgen

Gavin Hesketh hesketh at cern.ch
Fri Dec 17 13:56:21 GMT 2010


the "solution", in case anyone was wondering....

This is a feature of HepMC IO. The file
GenEventStreamIO.cc
parses an event: it reads in the event header line by line, and as soon 
as it hits a P (particle) or V (vertex) line it switches to reading the 
body of the event.

There are several line types the header parser can encounter, each 
identified by a leading ID code (a minimal sketch of this dispatch 
follows the list):
E = start of new event
N = weight names
U = unit info
C = Cross section
H = Heavy Ion
F = PDF info
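
To make that concrete, here is a minimal, self-contained sketch of the 
dispatch as I understand it. This is an illustration only, not the 
actual GenEventStreamIO.cc code, and the line contents in main() are 
placeholder values:

#include <iostream>
#include <sstream>
#include <string>

// Classify each line of a HepMC ASCII event stream by its leading ID
// character. Header records are read until the first P (particle) or
// V (vertex) line, at which point the reader switches to the body.
void parse_stream(std::istream& in) {
    bool reading_header = true;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty()) continue;
        const char id = line[0];
        if (!reading_header) {
            std::cout << id << ": event body line\n";
            continue;
        }
        switch (id) {
            case 'E': std::cout << "E: start of new event\n"; break;
            case 'N': std::cout << "N: weight names\n";       break;
            case 'U': std::cout << "U: unit info\n";          break;
            case 'C': std::cout << "C: cross section\n";      break;
            case 'H': std::cout << "H: heavy-ion info\n";     break;
            case 'F': std::cout << "F: PDF info\n";           break;
            case 'P':
            case 'V':
                // First particle/vertex line ends the header.
                reading_header = false;
                std::cout << id << ": start of event body\n";
                break;
            default:
                std::cout << "unrecognised line: " << line << "\n";
        }
    }
}

int main() {
    // A normal event contains P and V lines, so the reader leaves
    // header mode; the empty events discussed below never do. The
    // numeric fields here are placeholders.
    std::istringstream evt(
        "E 1 -1 -1.0 -1.0 -1.0 0 0 1 1 2 0 1 1.0\n"
        "U GEV MM\n"
        "V -1 0 0 0 0 0 1 1 0\n"
        "P 1 2212 0 0 3500.0 3500.0 0.938 1 0 0 -1 0\n");
    parse_stream(evt);
    return 0;
}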

Now, my agile job was producing empty events, the last of which looks like:

E 321 -1 -1.0 -1.0 -1.0 0 0 0 0 0 0 1 1.0
N 1 "0"
U GEV MM
C 1.4347291803278606e+00 9.5861875635604679e-02
HepMC::IO_GenEvent-END_EVENT_LISTING

So the reader thinks it is still in the header, since there are no P or 
V lines; it then hits the "HepMC::IO_GenEvent-END_EVENT_LISTING" line, 
sees the leading "H" and takes it for a Heavy Ion header line. That 
line can't be parsed as one, so the reader hangs.

I'll point this out to the HepMC guys to see if they have any ideas. For 
now, I just break out of the header IO when it reads an "H" line:
case 'H':
{   // we have a HeavyIon line: stop reading the header here
    info.set_reading_event_header(false);
} break;

So, I can't read any heavy ion data like this, but that's fine for my 
purposes...
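
(For what it's worth, a less drastic guard might be to recognise the 
"HepMC::" key lines before dispatching on the first character, so that 
a genuine Heavy Ion record still gets parsed while the 
END_EVENT_LISTING footer ends the event cleanly. The snippet below is 
only a standalone sketch of that check, not a patch against the actual 
HepMC code; the helper name and the heavy-ion line contents are made 
up for illustration.)

#include <iostream>
#include <string>

// Return true for the "HepMC::..." key lines that begin or end an
// event listing, so they are never mistaken for E/N/U/C/H/F records.
bool is_stream_key_line(const std::string& line) {
    return line.compare(0, 7, "HepMC::") == 0;
}

int main() {
    const std::string footer = "HepMC::IO_GenEvent-END_EVENT_LISTING";
    const std::string heavy_ion = "H 1 2 3 4 5 6 7 8 0.1 0.2 0.3 0.4";

    // The footer would be caught before any 'H' dispatch...
    std::cout << footer << " -> key line? " << std::boolalpha
              << is_stream_key_line(footer) << "\n";
    // ...while a real heavy-ion record would still reach the
    // HeavyIon parser.
    std::cout << heavy_ion << " -> key line? "
              << is_stream_key_line(heavy_ion) << "\n";
    return 0;
}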

Gavin


On 16/12/10 17:04, Andy Buckley wrote:
> On 16/12/10 16:39, Gavin Hesketh wrote:
>> Hello,
>> hopefully a simple one.
>>
>> I'm running alpgen+pythia using agile, piping into a fifo being read by
>> rivet. I specify the number of events: agile-runmc -n 1000.
>>
>> The problem is pythia is reading from an external file (from the alpgen
>> matrix element generator), which might not contain 1000 events. Then,
>> pythia calls UPVETO, to veto events based on parton-particle jet
>> matching. So, even if the external file contains 1000 events, they might
>> not all get processed through pythia to make it to the fifo.
>>
>> The result is that rivet does not get 1000 events, and just hangs
>> once the fifo is no longer being filled. I have to kill it by hand,
>> which means no .aida file, and no way to run this on the batch
>> queue...
>>
>> Now, I have no way of knowing in advance exactly how many events will be
>> in the external file read by pythia, or how many of those will fail
>> UPVETO. So I want to tell agile to run over as many as possible (it
>> seems to cleanly stop once the external file is exhausted, even if this
>> is less than the "n" I specify). But is there a way to make rivet wrap
>> up cleanly once the fifo is not receiving any more events (or when agile
>> is no longer running)?
>
> Hi Gavin,
>
> Are you telling rivet to expect exactly 1000 events? In that case I
> think it will wait even when AGILe has died. It should exit cleanly,
> with an output file, if Ctrl-C'd. And if you don't specify a number of
> events to Rivet, it should run until the end of the HepMC stream, whose
> terminating marker is written when AGILe shuts down correctly. Maybe
> the AGILe AlpGen interface *doesn't* exit smoothly enough to write that
> terminating run footer.
>
> Without having time right now to look into it, I think that Rivet is
> doing the right thing, and heuristics to guess when the fifo is not
> receiving more events would probably have their own edge cases and
> problems. I've put a 1-hour timeout on the first event into the SVN
> version of Rivet, to catch a never-amusing batch farm gotcha where jobs
> appear to be running for days but are actually doing nothing at all.
> But even if I were to extend that to the event loop, it wouldn't have
> the sort of responsive behaviour that you want.
>
> So the problem is probably either that agile-runmc is *not* exiting as
> smoothly as required when the external file is exhausted, or that you
> are explicitly telling rivet to wait for a fixed number of events which
> never arrive. Which is it? If the former, we should fix that before
> making the next AGILe release (pretty soon, I hope).
>
> Andy
>

