[yt-users] PHOX mock observation part III

John ZuHone jzuhone at gmail.com
Tue Sep 22 07:26:17 PDT 2015


Hi Barbara,

Ok, I’m a little puzzled as to why it won’t fit in memory, because the cluster doesn’t seem like it should be too luminous. 

There are two things I can think of:

1) Obviously, we are limited by memory. I typically run these calculations on a machine with 32 GB of RAM, which is definitely sufficient for most cases (though I have run into this problem myself before when using a large effective area). I’m not sure how much memory your machine has, but unfortunately this algorithm is very memory-hungry. If you have access to a machine with more RAM, you could move the calculation there, or, even better, if you can run the code in parallel (using mpi4py) on a parallel machine, that would also help, since the photon generation (and hence the memory) gets spread across the MPI processes; see the sketch below. 
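For the parallel route, something like the following should work. This is just a minimal sketch: the dataset name, the sphere, and the APEC table path are placeholders for your own setup, and it assumes mpi4py is installed and working with your MPI stack.

    # make_photons.py -- sketch of generating the photons in parallel.
    import yt
    yt.enable_parallelism()  # must be called before loading the dataset

    from yt.analysis_modules.photon_simulator.api import \
        TableApecModel, ThermalPhotonModel, PhotonList

    ds = yt.load("my_dataset")           # placeholder filename
    sp = ds.sphere("max", (1.0, "Mpc"))  # placeholder data source

    # Placeholder path to the APEC tables; energy range and number of
    # channels as in the photon_simulator docs.
    apec_model = TableApecModel("atomdb_v2.0.2", 0.01, 10.0, 10000)
    thermal_model = ThermalPhotonModel(apec_model, Zmet=0.3)

    photons = PhotonList.from_scratch(sp, 0.05, 2000., 100.0e3,
                                      thermal_model)

Then run it with something like "mpirun -np 8 python make_photons.py", which distributes both the work and the memory across the MPI processes.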

2) One thing we could try, which sounds a little annoying but is a reliable first measure, is to split the problem up into multiple pieces, i.e., generate several shorter exposures and then merge the event files together afterwards. 

So I would first try setting photons_per_chunk to 50e6, and then halve your exposure time (if that doesn’t work, cut it to a quarter). After you make the observations we can merge them easily; see the sketch below. 
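To make that concrete, here is a minimal sketch of 2). It assumes "sp" and "thermal_model" are set up as in the sketch above and uses your parameters from below; the "+" merge reflects how I believe EventList behaves, so treat the details as assumptions to check against your yt version.

    # Sketch: two 50 ks realizations instead of one 100 ks observation.
    A = 2000.
    exp_time = 50.0e3  # half of the original 100 ks
    redshift = 0.05

    photons1 = PhotonList.from_scratch(sp, redshift, A, exp_time,
                                       thermal_model,
                                       photons_per_chunk=50000000)
    events1 = photons1.project_photons("z")

    photons2 = PhotonList.from_scratch(sp, redshift, A, exp_time,
                                       thermal_model,
                                       photons_per_chunk=50000000)
    events2 = photons2.project_photons("z")

    # I believe EventList implements "+" for lists with matching
    # parameters; note the merged list still carries the 50 ks exposure
    # in its header, so keep track of the 100 ks total yourself.
    events = events1 + events2
    events.write_fits_file("merged_events.fits", clobber=True)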

In any case, I see that this points to the need for me to tweak the algorithm a little bit, so that 

a) the messages about running out of memory are more informative and point to a resolution, and
b) memory usage is cut down where possible, though this will be hard. 

If you don’t mind, it would be immensely helpful if I could have a copy of your dataset, so I can figure out exactly what is going on, which would lead me to a quicker resolution of the problem. If you’re ok with this, you can post it somewhere and privately email me the location at jzuhone at gmail.com. After I’m done, I’ll delete my copy of the data. 

I’d be happy to try to find a way to improve the code soon, so that we don’t run into this problem again, but in the meantime I would suggest 1) or 2) above. What do you think? 

Best,

John

> On Sep 22, 2015, at 10:01 AM, Barbara Ramirez <Barbara.Ramirez-Mosquera at uibk.ac.at> wrote:
> 
> Hello John!
> Sure, I hadn't understood before. Here are my object properties:
> 
> R200: 1.00654482691 Mpccm/h
> Total Mass at R200: 3.65901447937e+14 Msun
> Avg Temperature at R200: 5.70273e7 K
> 
> I tried decreasing the Area to 2000 but I'm still getting the memory error.
> 
> With photons_per_chunk = 10e6 I get:
>     RuntimeError: Number of photons generated for this chunk exceeds photons_per_chunk
> Increasing it by a factor of 10 (photons_per_chunk = 10e7) gives the same RuntimeError, but increasing it by a factor of 100 gives the memory error!
> 
> Again, observation parameters:
>     A = 2000.
>     exp_time = 100.0e3
>     redshift = 0.05
> 
> Do you have any idea how we could solve this?
> 
> Thanks for your time!
> 
> -- 
> Barbara L. Ramirez MSc.
> Institut für Astro- und Teilchenphysik
> University of Innsbruck
