[yt-users] EnzoDatasetInMemory not working in parallel

Pengfei Chen madcpf at gmail.com
Wed Feb 17 15:56:33 PST 2016


Hi Matt and Britton,

I get the same error message with the same user_script. I'm sorry, but I
still don't know the right way to correct this or what .index means.
Could you please clarify?

Thank you,
Pengfei

On Mon, Jul 13, 2015 at 7:07 AM, Britton Smith <brittonsmith at gmail.com>
wrote:

> Thanks for your help!
>
> On Mon, Jul 13, 2015 at 3:01 PM, Matthew Turk <matthewturk at gmail.com>
> wrote:
>
>> Yup, it does. Nice detective work!
>>
>> On Mon, Jul 13, 2015, 8:58 AM Britton Smith <brittonsmith at gmail.com>
>> wrote:
>>
>>> Your tip led me to the right answer.  The call to parallel_objects was
>>> happening in the derived quantity, where each processor is made into its
>>> own comm in which it is rank 0.  The problem is that the processors then
>>> try to identify fields and incorrectly treat themselves as rank 0 when
>>> choosing which grids to look at.  If I simply access ds.index right after
>>> creating the dataset, the problem goes away.  This should probably just be
>>> added to the bottom of the __init__ for EnzoDatasetInMemory.  Does that
>>> sound right?
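>>>
>>> For concreteness, a minimal sketch of that workaround in the user_script
>>> from the original message (only the ds.index access is new):
>>>
>>> import yt
>>> from yt.frontends.enzo.api import EnzoDatasetInMemory
>>>
>>> def main():
>>>     ds = EnzoDatasetInMemory()
>>>     # Force index construction on every processor before any field reads
>>>     # or derived quantities, so each rank only selects its own grids.
>>>     ds.index
>>>     ad = ds.all_data()
>>>     print ad.quantities.total_quantity("cell_mass")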
>>>
>>> Britton
>>>
>>> On Mon, Jul 13, 2015 at 2:38 PM, Matthew Turk <matthewturk at gmail.com>
>>> wrote:
>>>
>>>> That sounds like a new communicator got pushed to the top of the stack
>>>> when it should not have been, perhaps in a rogue parallel_objects call.
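>>>>
>>>> For anyone following along, the pattern I mean looks roughly like this
>>>> (a schematic example; the dataset names are placeholders):
>>>>
>>>> import yt
>>>> yt.enable_parallelism()
>>>>
>>>> # parallel_objects pushes a sub-communicator for each group of work, so
>>>> # code inside the loop that asks its communicator for a rank can see 0
>>>> # even when the global MPI rank is nonzero.
>>>> for fn in yt.parallel_objects(["DD0000/DD0000", "DD0001/DD0001"]):
>>>>     ds = yt.load(fn)
>>>>     print ds.current_time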
>>>>
>>>> On Mon, Jul 13, 2015, 8:35 AM Britton Smith <brittonsmith at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi again,
>>>>>
>>>>> Maybe this is a clue.  In _generate_random_grids, self.comm.rank is 0
>>>>> for all processors, which would explain why N-1 cores are trying to get
>>>>> grids that don't belong to them.  Interestingly, mylog.info prints
>>>>> out the correct rank for each of them.
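>>>>>
>>>>> A check along these lines shows it, dropped into _generate_random_grids
>>>>> (sketch only; assumes mpi4py is available and mylog is imported from
>>>>> yt.funcs):
>>>>>
>>>>> from mpi4py import MPI
>>>>> # rank according to MPI itself
>>>>> world_rank = MPI.COMM_WORLD.rank
>>>>> # rank according to the communicator this object is using; here it
>>>>> # comes back as 0 on every processor
>>>>> mylog.info("world rank %s, self.comm.rank %s", world_rank, self.comm.rank)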
>>>>>
>>>>> Britton
>>>>>
>>>>> On Mon, Jul 13, 2015 at 2:21 PM, Britton Smith <brittonsmith at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Matt,
>>>>>>
>>>>>> Thanks for your help.  Adjusting by grid._id_offset did not work, but I
>>>>>> can see that what is happening is that all processors are trying to call
>>>>>> _read_field_names on grid 1, when only processor 0 owns that grid.  I
>>>>>> will look into why now, but if you have any intuition about where to
>>>>>> check next, that would be awesome.
>>>>>>
>>>>>> Thanks,
>>>>>> Britton
>>>>>>
>>>>>> On Mon, Jul 13, 2015 at 1:51 PM, Matthew Turk <matthewturk at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Britton,
>>>>>>>
>>>>>>> What looks suspicious to me is the way it's using grid.id.  This
>>>>>>> might
>>>>>>> lead to an off-by-one error.  Can you try it with
>>>>>>> grid.id-grid._id_offset and see if that clears it up?
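>>>>>>>
>>>>>>> Schematically, the adjustment I have in mind is just (the variable
>>>>>>> name is only for illustration):
>>>>>>>
>>>>>>> # Enzo grid ids are 1-based; subtracting grid._id_offset gives the
>>>>>>> # zero-based index used for per-processor bookkeeping.
>>>>>>> zero_based_index = grid.id - grid._id_offset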
>>>>>>>
>>>>>>> On Mon, Jul 13, 2015 at 7:42 AM, Britton Smith <brittonsmith at gmail.com>
>>>>>>> wrote:
>>>>>>> > Hi all,
>>>>>>> >
>>>>>>> > I've recently been trying to use yt's inline analysis functionality
>>>>>>> > with Enzo and am having some difficulty getting it to work in
>>>>>>> > parallel.  I am using the development tip of yt.  In serial,
>>>>>>> > everything works fine, but in parallel, I get the following error:
>>>>>>> > http://paste.yt-project.org/show/5694/
>>>>>>> >
>>>>>>> > It seems that the issue is that yt is not correctly identifying
>>>>>>> > which grids are available on a given processor for the
>>>>>>> > EnzoDatasetInMemory object.  Does anyone have an idea of how to fix
>>>>>>> > this?  Has anyone else seen this?
>>>>>>> >
>>>>>>> > For reference, my user_script is just this:
>>>>>>> >
>>>>>>> > import yt
>>>>>>> > from yt.frontends.enzo.api import EnzoDatasetInMemory
>>>>>>> >
>>>>>>> > def main():
>>>>>>> >     ds = EnzoDatasetInMemory()
>>>>>>> >     ad = ds.all_data()
>>>>>>> >     print ad.quantities.total_quantity("cell_mass")
>>>>>>> >
>>>>>>> > Thanks for any help,
>>>>>>> >
>>>>>>> > Britton