[Rivet] agile memory use
Gavin Hesketh hesketh at cern.ch
Mon Jan 31 19:02:29 GMT 2011
Yes, fixed it! I can now happily attempt to run over 1G events. 10G causes:

  Traceback (most recent call last):
    File "/scratch/professor/local/bin/agile-runmc", line 519, in <module>
      for i in xrange(opts.NEVTS):
  OverflowError: long int too large to convert to int

but surely nobody ever needs to generate that many (I don't).
Still, good to know I'm not "normal" :)

Gavin

On 31/01/11 18:56, Andy Buckley wrote:
> Ha! I think I found it... in the very last block of the Python
> bin/agile-runmc script we have a loop over the number of events like this:
>
>   for i in range(opts.NEVTS):
>       ...
>
> Can you change this to:
>
>   for i in xrange(opts.NEVTS):
>       ...
>
> and see if that solves the problem? I'm pretty sure it will: range()
> creates an array whereas xrange() generates a single iterator (a
> "generator function"), so it's definitely the right choice when the
> range array itself isn't needed. We never noticed the memory overhead
> for "normal" numbers of events, but it would have a big effect for huge
> event numbers.
>
> Even if (somehow) this isn't responsible, this *should* have been
> xrange(), so it's fixed now in the trunk.
>
> Thanks!
> Andy
>
>
> On 31/01/11 15:53, Andy Buckley wrote:
>> I'll look into this... thanks for finding and reporting it. I don't know
>> of anywhere that we are e.g. allocating an array proportional to
>> NUM_EVENTS, but that's what it looks like. We also don't pass the AGILe
>> maximum event number to the Fortran common blocks, AFAIK, so the problem
>> is indeed likely to be in the steering and generator-independent.
>> Hopefully it'll be obvious :)
>>
>> Cheers,
>> Andy
>>
>>
>> On 31/01/11 13:42, Gavin Hesketh wrote:
>>> Hello,
>>> I noticed this after seeing lots of batch job crashes. Sometimes I run
>>> AGILe with a large number of events (mostly because of the setup I have
>>> for Alpgen). When this number gets very large, AGILe grabs a huge block
>>> of virtual memory before generating any events.
>>> e.g.:
>>>   agile-runmc Pythia6:424 -n 10000000
>>> grabs 170 MB
>>>
>>>   agile-runmc Pythia6:424 -n 100000000
>>> grabs 1.5 GB, and tends to be killed by the batch queue I use.
>>>
>>> A 10x higher N won't initialise at all.
>>>
>>> OK, 100M is a lot of events, and I've figured out a work-around for this
>>> in the way I run Alpgen. But it seems strange that AGILe should be
>>> requesting so much memory when events are in the end fed to a pipe for
>>> Rivet to read one by one. Is this expected? It seems to be independent of
>>> the generator I use.
>>>
>>> thanks,
>>> Gavin
>>> _______________________________________________
>>> Rivet mailing list
>>> Rivet at projects.hepforge.org
>>> http://www.hepforge.org/lists/listinfo/rivet
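
For reference, a minimal Python 2 sketch of the range()/xrange() behaviour
discussed above. This is not the actual agile-runmc code: the NEVTS value and
the generate_one_event() function are hypothetical stand-ins for the event loop
in the script.

  # Minimal Python 2 sketch, assuming a simple per-event loop;
  # not the actual agile-runmc code.
  def generate_one_event(i):
      # hypothetical stand-in for a single event-generation call
      pass

  NEVTS = 10000000  # e.g. what "agile-runmc Pythia6:424 -n 10000000" requests

  # range(NEVTS) would build the full list of NEVTS integers before the
  # loop starts, so memory grows linearly with the requested event count.
  # xrange(NEVTS) yields one index at a time and uses constant memory.
  for i in xrange(NEVTS):
      generate_one_event(i)

  # Caveat: xrange() arguments must fit in a native C long, which is why
  # the 10G-event attempt still fails with
  #   OverflowError: long int too large to convert to int
  # as in the traceback quoted above.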