[yt-users] Rockstar and RAMSES (DM only run). Rockstar appears to run, but no halo information is written.

Britton Smith brittonsmith at gmail.com
Tue Oct 14 07:25:24 PDT 2014


Hi Ben,

It does seem as if rockstar isn't receiving the particle velocities in the
correct units.  There was a bug like this a few months ago that also
affected rockstar with Enzo datasets.  It was fixed at the end of July, so
you may want to first make sure you have an up-to-date version of yt; the
latest stable release should include the fix.  Let us know if you're
already on the latest version and still seeing these issues.
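If you want to double-check the velocities by hand, one option is to redo
the RAMSES code-to-physical conversion yourself. The sketch below is
illustrative only (the function name `ramses_velocity_kms` is made up, and
it assumes `unit_l` and `unit_t` are the cgs scaling factors read from the
RAMSES info_*.txt header, so that unit_l/unit_t is the code velocity unit
in cm/s):

```python
def ramses_velocity_kms(v_code, unit_l, unit_t):
    """Convert a RAMSES code-unit velocity to km/s.

    unit_l, unit_t: cgs scaling factors (cm, s) from the info_*.txt
    header, so unit_l / unit_t is the code velocity unit in cm/s.
    """
    cm_per_s = v_code * (unit_l / unit_t)
    return cm_per_s / 1.0e5  # 1 km = 1e5 cm

# Made-up numbers: a code velocity of 1.0 with a velocity unit of
# 1e9 cm / 1e2 s = 1e7 cm/s corresponds to 100 km/s.
print(ramses_velocity_kms(1.0, 1.0e9, 1.0e2))  # 100.0
```

If the numbers rockstar receives differ from this by a constant factor
(e.g. 1e5, or the scale factor), that would point at the conversion step.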

Britton

On Sat, Oct 11, 2014 at 4:17 PM, Ben Thompson <bthompson2090 at gmail.com>
wrote:

> Hi Britton and everyone.
>
> I have finally had some time to look into the rockstar halos a little more
> (Sorry for the delayed response to this, I had to work on a few other
> things this week)
>
> Anyway. I am starting to think that this might be a units issue...
>
> Within a 20 Mpc/h box (dark matter only), I have run both the AHF halo
> finder and the Rockstar halo finder, and for each I have identified the
> halo with the largest number of particles.
>
> First thing to note: the number of halos
>
> len(ahf_halos) = 771
> len(rk_halos) = 61 (!)
>
> The positions of these halos agree to within ~100 kpc:
>
> abs(rk_halos[0]["pos"] - ahf_halos[0]["pos"]).in_units("kpc")
> YTArray([ 16.67015648,  54.59806824,  83.17473602], dtype=float32) kpc
>
> Now let's wrap a sphere data container around each of these halos and
> explore some of their properties in more detail, out to the virial radius.
>
> rocksphere = ds.sphere(rk_halos[0]["pos"], rk_halos[0]["Rvir"])
> ahfsphere = ds.sphere(ahf_halos[0]["pos"], ahf_halos[0]["Rvir"])
>
> Comparing the virial radii:
>
> >>> rk_halos[0]["Rvir"].in_units("kpc/h")
> 711.183418249 kpc/h
> >>> ahf_halos[0]["Rvir"].in_units("kpc/h")
> 877.551515684 kpc/h
>
> Nothing too concerning here compared to what follows.
>
> More specifically, the properties of this halo as quoted by rockstar are:
>
> 3 -1 4.2275e+13 540.21 542.04 711.012 110.933 92746 2.16471 5.12865
> 0.58479 -58.35 92.88 26.16 5.501e+14 8.955e+13 3.073e+14 0.02475 136.33844
> 4.2275e+13 4.6532e+13 3.5057e+13 2.2415e+13 0.0000e+00 42.64141 98.28
> 0.02960 0.67428 0.49493 -71.84635 25.39820 137.49133 0.64392 0.51116
> -18.93683 37.49042 101.71218 0.6312 4.403e+13 3.002e+13
>
>
> Let's take a look at the difference in the masses of the two spheres:
>
> abs(rocksphere.quantities.total_quantity(["particle_mass"]).in_units("Msun")
> - rk_halos[0]["Mvir"].in_units("Msun")) = 1.0623502593e+11 Msun
> where rk_halos[0]["Mvir"].in_units("Msun") = 6.03928545573e+13 Msun and
> rocksphere.quantities.total_quantity(["particle_mass"]).in_units("Msun") =
> 6.02866195313e+13 Msun
>
>
> Nothing immediately strikes me as problematic... until you investigate
> the number of particles:
>
> abs(rk_halos[0]["num_p"] - len(ahfsphere["particle_position_x"])) =
> 79229.0 (dimensionless): the data container holds 171975 particles while
> the halo reports 92746.
>
> That is a lot of missing particles.
>
>
> Looking at my config, the particle mass is listed as 2.73991e+08, and
> rocksphere["particle_mass"][0].in_units("Msun/h") = 273990947.215 Msun/h,
> so the particle mass is fine. What I do not understand is this: with a
> difference in particle number that large, how can the mass of the halo
> and the mass of the data container be approximately the same?
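One quick cross-check worth doing here: for equal-mass particles, the total
mass and the particle count are tied together by N = M_total / m_particle,
so the two numbers cannot disagree independently. A minimal sketch of that
check, with made-up illustrative numbers (not the values from this run):

```python
def expected_particle_count(total_mass_msun, particle_mass_msun):
    """Number of equal-mass particles implied by a total mass."""
    return int(round(total_mass_msun / particle_mass_msun))

# Illustrative numbers: 1e12 Msun made of 1e8 Msun particles
# implies 10000 particles.
print(expected_particle_count(1.0e12, 1.0e8))  # 10000
```

Comparing this implied count against both the reported num_p and the number
of particles in the sphere should show which of the three numbers is the
odd one out.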
>
> rocksphere.quantities.bulk_velocity(use_gas=False,use_particles=True).in_units("km/s")
> = YTArray([  4.80047835,  41.30041047, -29.83890759]) km/s
> rk_halos[0]["vel"] = YTArray([-58.34999847,  92.87999725,  26.15999985],
> dtype=float32) km/s
>
>
> So we have contradictory values for the particle count and for the
> velocity/bulk velocity of the components.
>
> Interestingly, with the AHF sphere:
>
> >>> ahf_halos[0]["vel"].in_units("km/s")
> YTArray([  0.47      ,  35.36999893, -24.67000008], dtype=float32) km/s
> >>>
> ahfsphere.quantities.bulk_velocity(use_gas=False,use_particles=True).in_units("km/s")
> YTArray([  0.46971136,  35.36391688, -24.66292379]) km/s
>
> I.e., the AHF halo finder is giving results consistent with what you
> would expect for this simulation.
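For reference, the bulk velocity being compared above is just a
mass-weighted mean of the particle velocities. A simplified pure-Python
stand-in for that calculation (not yt's actual implementation) looks like:

```python
def bulk_velocity(masses, velocities):
    """Mass-weighted mean velocity.

    masses: list of particle masses
    velocities: list of (vx, vy, vz) tuples
    """
    total_mass = sum(masses)
    return tuple(
        sum(m * v[i] for m, v in zip(masses, velocities)) / total_mass
        for i in range(3)
    )

# Two equal-mass particles moving oppositely along x cancel out.
print(bulk_velocity([1.0, 1.0], [(10.0, 0.0, 0.0), (-10.0, 0.0, 0.0)]))
```

Because this is a weighted mean, including or excluding tens of thousands
of particles (as in the count discrepancy above) can easily shift the
result by tens of km/s.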
>
> ------------------------------------------
>
> In conclusion: I am getting contradictory particle counts between the
> rockstar halo and the data container, and the bulk velocity is wrong as
> well (maybe as a result of the particle count?).
>
> I have also attached a copy of rockstar.cfg.
>
> Ben.
>
>
>
>
> On Tue, Oct 7, 2014 at 3:42 PM, Britton Smith <brittonsmith at gmail.com>
> wrote:
>
>> Hi Ben,
>>
>> This still sounds like it could be a units issue.  Can you confirm that
>> the masses, positions, and velocities of the particles are correct?  You
>> should be able to just create a data container and query a few particles.
>>
>> Britton
>>
>> On Mon, Oct 6, 2014 at 10:38 AM, Ben Thompson <bthompson2090 at gmail.com>
>> wrote:
>>
>>> Hello all.
>>>
>>> So I have decided to test my code on a smaller box (20 Mpc/h @ level 7,
>>> one of my previous runs with 430 snapshots) and I am getting halos
>>> (i.e., Rockstar is working) within the earlier snapshots. I am
>>> currently running rockstar all the way through on that run now.
>>>
>>> Part of me now suspects that I am doing something wrong on the
>>> resolution side of things. I have kept the particle number the same
>>> but increased the box size (thus decreasing the spatial resolution).
>>>
>>> Basically, rockstar is working, but by the sounds of it I need to
>>> relax some initial conditions somewhere for the larger box.
>>>
>>> I am currently running some more tests on the 20 Mpc box series (i.e.,
>>> seeing what happens when you allow the spatial, but not the particle
>>> mass, resolution to increase, rather than doing a spatially uniform
>>> grid run).
>>>
>>> Any thoughts or suggestions on why it might be complaining about the
>>> larger box size runs?
>>>
>>> Ben
>>>
>>> On Sat, Oct 4, 2014 at 8:53 PM, Nathan Goldbaum <nathan12343 at gmail.com>
>>> wrote:
>>>
>>>> If you are comfortable sharing the data, one of us could probably help
>>>> out with debugging. It's hard to debug these things remotely but if we have
>>>> the dataset you're using and the script you're running it becomes much
>>>> easier to quickly figure out what's going wrong.
>>>>
>>>> In the past we've found sharing data via Dropbox or Google drive to
>>>> work pretty well.
>>>>
>>>>
>>>> On Saturday, October 4, 2014, Ben Thompson <bthompson2090 at gmail.com>
>>>> wrote:
>>>>
>>>>> Reinstalling yt did not resolve the problem (although now I am on the
>>>>> latest version of yt, which is cool :) )
>>>>>
>>>>> I will have a serious debug with it on Monday.
>>>>>
>>>>> Ben.
>>>>>
>>>>> On Fri, Oct 3, 2014 at 11:29 PM, Britton Smith <brittonsmith at gmail.com
>>>>> > wrote:
>>>>>
>>>>>> My guess is that this is a units issue.  It might be worth it to
>>>>>> stick some debugging statements into
>>>>>> yt/analysis_modules/halo_finding/rockstar/rockstar.py where the particles
>>>>>> are loaded up to be sent to rockstar.  I believe the units that rockstar is
>>>>>> expecting are positions in Mpc/h, masses in Msun/h, and velocities in km/s.
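As an illustration of the kind of conversion involved (a hedged sketch, not
the actual code in rockstar.py): positions stored as box fractions become
comoving Mpc/h by scaling with the box size.

```python
def code_pos_to_mpc_h(pos_frac, box_size_mpc_h):
    """Scale a position given as a box fraction (0..1) to comoving Mpc/h."""
    return tuple(x * box_size_mpc_h for x in pos_frac)

# A particle at the centre of a 20 Mpc/h box sits at (10, 10, 10) Mpc/h.
print(code_pos_to_mpc_h((0.5, 0.5, 0.5), 20.0))
```

A debugging print of a handful of positions, masses, and velocities right
before they are handed to rockstar, compared against a conversion like
this, would make any stray factor obvious.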
>>>>>>
>>>>>> Britton
>>>>>>
>>>>>> On Fri, Oct 3, 2014 at 11:39 AM, Ben Thompson <
>>>>>> bthompson2090 at gmail.com> wrote:
>>>>>>
>>>>>>> I am using Ramses v3.11
>>>>>>>
>>>>>>> The current version and changeset of YT I am using is:
>>>>>>> ---
>>>>>>> Version = 3.0.1
>>>>>>> Changeset = f66765a58a61 (stable) @
>>>>>>> ---
>>>>>>>
>>>>>>> And yes, I have produced a projection plot of the simulation, and
>>>>>>> halo structures do exist at z=0. The halos have also been identified
>>>>>>> independently by another halo finder (AHF).
>>>>>>>
>>>>>>> I will run the updates to see if that helps.
>>>>>>>
>>>>>>> Will be back later.
>>>>>>>
>>>>>>> Ben.
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Oct 3, 2014 at 5:30 PM, Cameron Hummels <chummels at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> That's odd.  Could you produce a projection of the dark matter
>>>>>>>> particles to identify if halos actually exist? Try something like this:
>>>>>>>>
>>>>>>>> http://paste.yt-project.org/show/5145/
>>>>>>>>
>>>>>>>> Also, what simulation code are you using?
>>>>>>>>
>>>>>>>> And what version of yt are you using?  To determine your yt
>>>>>>>> version, type:
>>>>>>>>
>>>>>>>> $ yt version
>>>>>>>>
>>>>>>>> I know that recently there were some changes in the output format
>>>>>>>> of rockstar that broke yt's interaction with it.  Perhaps that is related
>>>>>>>> to the problem you're experiencing.  You could try upgrading to the newest
>>>>>>>> version of yt and rockstar with this:
>>>>>>>>
>>>>>>>> $ yt update --all
>>>>>>>>
>>>>>>>> This may resolve your problems.
>>>>>>>>
>>>>>>>> Cameron
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Oct 3, 2014 at 9:22 AM, Ben Thompson <
>>>>>>>> bthompson2090 at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hello.
>>>>>>>>>
>>>>>>>>> My simulation runs to z=0, though only with 10 timesteps (purely a
>>>>>>>>> test run). It identifies no halos in halos_0 (expected, as it is
>>>>>>>>> the first snapshot) or in halos_9 (the final snapshot). All of the
>>>>>>>>> output files are the same size and contain no halo data.
>>>>>>>>>
>>>>>>>>> My code is below (with the particle filters taken out; it is a
>>>>>>>>> dark matter only run anyway):
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> from mpi4py import MPI
>>>>>>>>> import yt, sys, os
>>>>>>>>> yt.enable_parallelism()
>>>>>>>>> from yt.config import ytcfg
>>>>>>>>> from yt.analysis_modules.halo_finding.rockstar.api import \
>>>>>>>>>         RockstarHaloFinder
>>>>>>>>> from yt.data_objects.particle_filters import \
>>>>>>>>>         particle_filter
>>>>>>>>> from yt.data_objects.time_series import DatasetSeries
>>>>>>>>> import numpy as np
>>>>>>>>>
>>>>>>>>> ncpu = MPI.COMM_WORLD.Get_size()
>>>>>>>>> ytcfg.set('yt', '__global_parallel_size', str(ncpu))
>>>>>>>>>
>>>>>>>>> outputs = np.arange(1, 11)
>>>>>>>>> dirs = []
>>>>>>>>> #Add the datasets
>>>>>>>>> for ioutput in outputs:
>>>>>>>>>     ds = yt.load('../output_%05d/info_%05d.txt'%(ioutput, ioutput))
>>>>>>>>>     dirs.append(ds)
>>>>>>>>>
>>>>>>>>> es = DatasetSeries(dirs)
>>>>>>>>> readers = int(ncpu/4.)
>>>>>>>>> writers = ncpu - readers - 1
>>>>>>>>> rh = RockstarHaloFinder(es, num_readers=readers,
>>>>>>>>> num_writers=writers)
>>>>>>>>> rh.run()
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Ben
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Oct 3, 2014 at 5:09 PM, Cameron Hummels <
>>>>>>>>> chummels at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Ben,
>>>>>>>>>>
>>>>>>>>>> No, you don't need to have halos in the first timestep.  Rockstar
>>>>>>>>>> will find halos when they exist, so if you run it on all of the
>>>>>>>>>> outputs and get to a redshift later than 5 or so, rockstar will
>>>>>>>>>> start to identify halos.  Does your simulation run to z=0?  If so,
>>>>>>>>>> just look at the output file associated with that redshift and see
>>>>>>>>>> if it identifies any halos.  Your starting redshift doesn't matter.
>>>>>>>>>>
>>>>>>>>>> Cameron
>>>>>>>>>>
>>>>>>>>>> On Fri, Oct 3, 2014 at 9:02 AM, Ben Thompson <
>>>>>>>>>> bthompson2090 at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Cameron.
>>>>>>>>>>>
>>>>>>>>>>> So it depends on what timestep I start my run at (i.e., there
>>>>>>>>>>> need to be halos in the snapshot that I start with)? This is the
>>>>>>>>>>> first time I have used Rockstar (I figured that since the package
>>>>>>>>>>> is there, for free, I should use it). I have DM-only runs that
>>>>>>>>>>> start at z=50, where I am not expecting to find halos; as part of
>>>>>>>>>>> the loading script I loop over all of the snapshots.
>>>>>>>>>>>
>>>>>>>>>>> This is what I have
>>>>>>>>>>>
>>>>>>>>>>> cat halos_0.0.ascii
>>>>>>>>>>> #id num_p mvir mbound_vir rvir vmax rvmax vrms x y z vx vy vz Jx
>>>>>>>>>>> Jy Jz E Spin PosUncertainty VelUncertainty bulk_vx bulk_vy bulk_vz
>>>>>>>>>>> BulkVelUnc n_core m200b m200c m500c m2500c Xoff Voff spin_bullock b_to_a
>>>>>>>>>>> c_to_a A[x] A[y] A[z] b_to_a(500c) c_to_a(500c) A[x](500c) A[y](500c)
>>>>>>>>>>> A[z](500c) Rs Rs_Klypin T/|U| M_pe_Behroozi M_pe_Diemer idx i_so i_ph
>>>>>>>>>>> num_cp mmetric
>>>>>>>>>>> #a = 0.019608
>>>>>>>>>>> #Bounds: (0.000000, 0.000000, 0.000000) - (99.816002, 99.816002,
>>>>>>>>>>> 19.807144)
>>>>>>>>>>> #Om = 0.300000; Ol = 0.700000; h = 0.700000
>>>>>>>>>>> #FOF linking length: 0.280000
>>>>>>>>>>> #Unbound Threshold: 0.500000; FOF Refinement Threshold: 0.700000
>>>>>>>>>>> #Particle mass: 2.87164e+10 Msun/h
>>>>>>>>>>> #Box size: 99.816005 Mpc/h
>>>>>>>>>>> #Total particles processed: 419422
>>>>>>>>>>> #Force resolution assumed: 0.780498 Mpc/h
>>>>>>>>>>> #Units: Masses in Msun / h
>>>>>>>>>>> #Units: Positions in Mpc / h (comoving)
>>>>>>>>>>> #Units: Velocities in km / s (physical, peculiar)
>>>>>>>>>>> #Units: Halo Distances, Lengths, and Radii in kpc / h (comoving)
>>>>>>>>>>> #Units: Angular Momenta in (Msun/h) * (Mpc/h) * km/s (physical)
>>>>>>>>>>> #Units: Spins are dimensionless
>>>>>>>>>>> #Units: Total energy in (Msun/h)*(km/s)^2 (physical)
>>>>>>>>>>> #Note: idx, i_so, and i_ph are internal debugging quantities
>>>>>>>>>>> #Np is an internal debugging quantity.
>>>>>>>>>>> #Rockstar Version: 0.99.9-RC3
>>>>>>>>>>>
>>>>>>>>>>> (yt-x86_64)[bthompson1 at leopard rockstar_halos]$ cat
>>>>>>>>>>> halos_9.0.ascii
>>>>>>>>>>> #id num_p mvir mbound_vir rvir vmax rvmax vrms x y z vx vy vz Jx
>>>>>>>>>>> Jy Jz E Spin PosUncertainty VelUncertainty bulk_vx bulk_vy bulk_vz
>>>>>>>>>>> BulkVelUnc n_core m200b m200c m500c m2500c Xoff Voff spin_bullock b_to_a
>>>>>>>>>>> c_to_a A[x] A[y] A[z] b_to_a(500c) c_to_a(500c) A[x](500c) A[y](500c)
>>>>>>>>>>> A[z](500c) Rs Rs_Klypin T/|U| M_pe_Behroozi M_pe_Diemer idx i_so i_ph
>>>>>>>>>>> num_cp mmetric
>>>>>>>>>>> #a = 1.000879
>>>>>>>>>>> #Bounds: (0.000000, 0.000000, 0.000000) - (99.816002, 99.816002,
>>>>>>>>>>> 19.667080)
>>>>>>>>>>> #Om = 0.300000; Ol = 0.700000; h = 0.700000
>>>>>>>>>>> #FOF linking length: 0.280000
>>>>>>>>>>> #Unbound Threshold: 0.500000; FOF Refinement Threshold: 0.700000
>>>>>>>>>>> #Particle mass: 2.87164e+10 Msun/h
>>>>>>>>>>> #Box size: 99.816005 Mpc/h
>>>>>>>>>>> #Total particles processed: 419437
>>>>>>>>>>> #Force resolution assumed: 0.779813 Mpc/h
>>>>>>>>>>> #Units: Masses in Msun / h
>>>>>>>>>>> #Units: Positions in Mpc / h (comoving)
>>>>>>>>>>> #Units: Velocities in km / s (physical, peculiar)
>>>>>>>>>>> #Units: Halo Distances, Lengths, and Radii in kpc / h (comoving)
>>>>>>>>>>> #Units: Angular Momenta in (Msun/h) * (Mpc/h) * km/s (physical)
>>>>>>>>>>> #Units: Spins are dimensionless
>>>>>>>>>>> #Units: Total energy in (Msun/h)*(km/s)^2 (physical)
>>>>>>>>>>> #Note: idx, i_so, and i_ph are internal debugging quantities
>>>>>>>>>>> #Np is an internal debugging quantity.
>>>>>>>>>>> #Rockstar Version: 0.99.9-RC3
>>>>>>>>>>>
>>>>>>>>>>> Both outputs are essentially identical.
>>>>>>>>>>>
>>>>>>>>>>> If it is because I am starting at an early timestep, then I will
>>>>>>>>>>> cut out the earlier ones from the initial setup.
>>>>>>>>>>>
>>>>>>>>>>> Thanks.
>>>>>>>>>>>
>>>>>>>>>>> Ben,
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Oct 3, 2014 at 4:55 PM, Cameron Hummels <
>>>>>>>>>>> chummels at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Ben,
>>>>>>>>>>>>
>>>>>>>>>>>> Oftentimes, rockstar cannot find any halos for the first few
>>>>>>>>>>>> snapshots of a cosmology DM-only sim because things have not collapsed
>>>>>>>>>>>> enough.  In that case, you'll get files like this:
>>>>>>>>>>>>
>>>>>>>>>>>> $ cat halos_0.0.ascii
>>>>>>>>>>>> #id num_p mvir mbound_vir rvir vmax rvmax vrms x y z vx vy vz
>>>>>>>>>>>> Jx Jy Jz E Spin PosUncertainty VelUncertainty bulk_vx bulk_vy bulk_vz
>>>>>>>>>>>> BulkVelUnc n_core m200b m200c m500c m2500c Xoff Voff spin_bullock b_to_a
>>>>>>>>>>>> c_to_a A[x] A[y] A[z] b_to_a(500c) c_to_a(500c) A[x](500c) A[y](500c)
>>>>>>>>>>>> A[z](500c) Rs Rs_Klypin T/|U| M_pe_Behroozi M_pe_Diemer idx i_so i_ph
>>>>>>>>>>>> num_cp mmetric
>>>>>>>>>>>> #a = 0.009901
>>>>>>>>>>>> #Bounds: (0.000000, 0.000000, 0.000000) - (40.000000,
>>>>>>>>>>>> 40.000000, 3.099222)
>>>>>>>>>>>> #Om = 0.282000; Ol = 0.718000; h = 0.697000
>>>>>>>>>>>> #FOF linking length: 0.280000
>>>>>>>>>>>> #Unbound Threshold: 0.500000; FOF Refinement Threshold: 0.700000
>>>>>>>>>>>> #Particle mass: 2.98630e+08 Msun/h
>>>>>>>>>>>> #Box size: 40.000000 Mpc/h
>>>>>>>>>>>> #Total particles processed: 1290100
>>>>>>>>>>>> #Force resolution assumed: 0.15625 Mpc/h
>>>>>>>>>>>> #Units: Masses in Msun / h
>>>>>>>>>>>> #Units: Positions in Mpc / h (comoving)
>>>>>>>>>>>> #Units: Velocities in km / s (physical, peculiar)
>>>>>>>>>>>> #Units: Halo Distances, Lengths, and Radii in kpc / h (comoving)
>>>>>>>>>>>> #Units: Angular Momenta in (Msun/h) * (Mpc/h) * km/s (physical)
>>>>>>>>>>>> #Units: Spins are dimensionless
>>>>>>>>>>>> #Units: Total energy in (Msun/h)*(km/s)^2 (physical)
>>>>>>>>>>>> #Note: idx, i_so, and i_ph are internal debugging quantities
>>>>>>>>>>>> #Np is an internal debugging quantity.
>>>>>>>>>>>> #Rockstar Version: 0.99.9-RC3
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Is this what you're seeing or are your files truly empty?  If
>>>>>>>>>>>> you're seeing this, you may just need to run Rockstar on data files that
>>>>>>>>>>>> are more evolved in time, so as to allow for more non-linear collapse of
>>>>>>>>>>>> the halos.  For my DM-only runs, depending on the mass resolution that I
>>>>>>>>>>>> use, I may not start to see halos until after z=10 to z=5, with the first
>>>>>>>>>>>> outputs all looking like the stuff above: empty.
>>>>>>>>>>>>
>>>>>>>>>>>> Cameron
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Oct 3, 2014 at 8:46 AM, Ben Thompson <
>>>>>>>>>>>> bthompson2090 at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> P.S. Sorry if I sent the same email twice; this is the actual
>>>>>>>>>>>>> email. I accidentally pressed send before it was finished.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hello everyone.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am currently using YT to run rockstar on my simulation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have been following the available instructions and have
>>>>>>>>>>>>> managed to write a script to run rockstar via MPI:
>>>>>>>>>>>>>
>>>>>>>>>>>>> readers = int(ncpu/4.)
>>>>>>>>>>>>> #Reserve one cpu for the server
>>>>>>>>>>>>> writers = ncpu - readers - 1
>>>>>>>>>>>>> print 'Running rockstar with %i writers and %i
>>>>>>>>>>>>> readers'%(writers, readers)
>>>>>>>>>>>>> rh = RockstarHaloFinder(es, num_readers=readers,
>>>>>>>>>>>>> num_writers=writers,
>>>>>>>>>>>>>                         particle_type="dark_matter",
>>>>>>>>>>>>> dm_only=True)
>>>>>>>>>>>>> rh.run()
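The reader/writer bookkeeping above (one MPI rank reserved for the rockstar
server, roughly a quarter of the ranks as readers, the remainder as
writers) can be made explicit with a small helper; the name `split_ranks`
is hypothetical, shown only to spell out the arithmetic:

```python
def split_ranks(ncpu):
    """Split MPI ranks into (readers, writers), reserving one
    rank for the rockstar server process."""
    readers = int(ncpu / 4.0)
    writers = ncpu - readers - 1
    return readers, writers

# With 8 ranks: 2 readers, 5 writers, and 1 server rank.
print(split_ranks(8))  # (2, 5)
```

Note that this scheme needs ncpu >= 3 to leave at least one reader and one
writer after the server rank is taken out.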
>>>>>>>>>>>>>
>>>>>>>>>>>>> using a particle filter on the datasets
>>>>>>>>>>>>>
>>>>>>>>>>>>> @yt.particle_filter("dark_matter",
>>>>>>>>>>>>> requires=["particle_mass"])
>>>>>>>>>>>>> def dark_matter(pfilter, data):
>>>>>>>>>>>>>         if ('all', 'particle_age') in data.ds.field_list:
>>>>>>>>>>>>>                 return data[("all", "particle_age")] == 0.0
>>>>>>>>>>>>>         else:
>>>>>>>>>>>>>                 arr = np.zeros(len(data['particle_mass']))
>>>>>>>>>>>>>                 return arr == 0.0
>>>>>>>>>>>>>
>>>>>>>>>>>>> def setup_ds(ds):
>>>>>>>>>>>>>         #Return only dark matter particles, and assert that
>>>>>>>>>>>>> the filter holds
>>>>>>>>>>>>>         print 'Here'
>>>>>>>>>>>>>         assert(ds.add_particle_filter("dark_matter"))
>>>>>>>>>>>>>
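The selection logic of the filter above (dark matter particles are those
with zero age; if the dataset has no age field, keep everything) can be
exercised in isolation. A pure-Python mock with a hypothetical name and no
yt dependency:

```python
def dark_matter_mask(ages, masses):
    """Mimic the filter above: if particle ages are available, keep
    particles with age == 0 (dark matter in this setup); if the age
    field is absent (ages is None), keep every particle."""
    if ages is not None:
        return [a == 0.0 for a in ages]
    return [True] * len(masses)

# Two DM particles (age 0) and one star particle (age > 0).
print(dark_matter_mask([0.0, 0.0, 1.5], [1.0, 1.0, 1.0]))
```

If a filter like this accidentally returned an all-False mask, rockstar
would receive no particles at all, so checking the mask's sum on one
dataset is a cheap sanity test.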
>>>>>>>>>>>>>
>>>>>>>>>>>>> It seems to run fine, e.g.:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [   111s] Sending projection requests...
>>>>>>>>>>>>> [   112s] Transferring particles to writers...
>>>>>>>>>>>>> [   116s] Analyzing for FoF groups...
>>>>>>>>>>>>> [   116s] Transferring boundary particles between writers...
>>>>>>>>>>>>> [   117s] Linking boundary particles...
>>>>>>>>>>>>> [   117s] Analyzing for halos / subhalos...
>>>>>>>>>>>>> [   119s] Loading merger tree information...
>>>>>>>>>>>>> [   120s] Constructing merger tree...
>>>>>>>>>>>>> [   121s] [Success] Done with snapshot 7.
>>>>>>>>>>>>> [   121s] Reading 6 blocks for snapshot 8...
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,101 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> P002 yt : [INFO     ] 2014-10-03 16:19:28,101 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,101 Loaded
>>>>>>>>>>>>> angular_momentum (8 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,101 Loaded astro (14
>>>>>>>>>>>>> new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,102 Loaded cosmology
>>>>>>>>>>>>> (20 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,102 Loaded fluid (56
>>>>>>>>>>>>> new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,103 Loaded
>>>>>>>>>>>>> fluid_vector (88 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,103 Loaded geometric
>>>>>>>>>>>>> (102 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,103 Loaded local
>>>>>>>>>>>>> (102 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,104 Loaded
>>>>>>>>>>>>> magnetic_field (108 new fields)
>>>>>>>>>>>>> P001 yt : [INFO     ] 2014-10-03 16:19:28,104 Loaded species
>>>>>>>>>>>>> (108 new fields)
>>>>>>>>>>>>> P006 yt : [INFO     ] 2014-10-03 16:19:28,127 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> P005 yt : [INFO     ] 2014-10-03 16:19:28,128 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> P003 yt : [INFO     ] 2014-10-03 16:19:28,128 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> P004 yt : [INFO     ] 2014-10-03 16:19:28,135 Loading field
>>>>>>>>>>>>> plugins.
>>>>>>>>>>>>> [   129s] Sending projection requests...
>>>>>>>>>>>>> [   130s] Transferring particles to writers...
>>>>>>>>>>>>> [   132s] Analyzing for FoF groups...
>>>>>>>>>>>>> [   133s] Transferring boundary particles between writers...
>>>>>>>>>>>>> [   133s] Linking boundary particles...
>>>>>>>>>>>>> [   134s] Analyzing for halos / subhalos...
>>>>>>>>>>>>> [   137s] Loading merger tree information...
>>>>>>>>>>>>> [   138s] Constructing merger tree...
>>>>>>>>>>>>> [   139s] [Success] Done with snapshot 8.
>>>>>>>>>>>>> [   139s] Reading 6 blocks for snapshot 9...
>>>>>>>>>>>>> [   143s] Sending projection requests...
>>>>>>>>>>>>> [   144s] Transferring particles to writers...
>>>>>>>>>>>>> [   146s] Analyzing for FoF groups...
>>>>>>>>>>>>> [   147s] Transferring boundary particles between writers...
>>>>>>>>>>>>> [   147s] Linking boundary particles...
>>>>>>>>>>>>> [   148s] Analyzing for halos / subhalos...
>>>>>>>>>>>>> [   150s] Loading merger tree information...
>>>>>>>>>>>>> [   151s] Constructing merger tree...
>>>>>>>>>>>>> [   152s] Constructing merger tree...
>>>>>>>>>>>>> [   153s] [Success] Done with snapshot 9.
>>>>>>>>>>>>> [   153s] [Finished]
>>>>>>>>>>>>>
>>>>>>>>>>>>> It generates the following outputs:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> halos_0.0.ascii    halos_1.14.bin    halos_2.4.ascii
>>>>>>>>>>>>> halos_3.9.bin     halos_5.14.ascii  halos_6.3.bin     halos_7.9.ascii
>>>>>>>>>>>>> halos_9.13.bin
>>>>>>>>>>>>> halos_0.0.bin      halos_1.15.ascii  halos_2.4.bin
>>>>>>>>>>>>> halos_4.0.ascii   halos_5.14.bin    halos_6.4.ascii   halos_7.9.bin
>>>>>>>>>>>>> halos_9.14.ascii
>>>>>>>>>>>>> halos_0.10.ascii   halos_1.15.bin    halos_2.5.ascii
>>>>>>>>>>>>> halos_4.0.bin     halos_5.15.ascii  halos_6.4.bin     halos_8.0.ascii
>>>>>>>>>>>>> halos_9.14.bin
>>>>>>>>>>>>> halos_0.10.bin     halos_1.16.ascii  halos_2.5.bin
>>>>>>>>>>>>> halos_4.10.ascii  halos_5.15.bin    halos_6.5.ascii   halos_8.0.bin
>>>>>>>>>>>>> halos_9.15.ascii
>>>>>>>>>>>>> halos_0.11.ascii   halos_1.16.bin    halos_2.6.ascii
>>>>>>>>>>>>> halos_4.10.bin    halos_5.16.ascii  halos_6.5.bin     halos_8.10.ascii
>>>>>>>>>>>>> halos_9.15.bin
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> but they are all empty... yet the profiling output seems to
>>>>>>>>>>>>> imply that some particles were submitted to be analyzed:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> [Prof] S0,C0 0s: 10191 fofs, 123365 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S0,C0: 0p,0h,0w; wt:0s; rcv:0s,0s; snd:0s; wk:0s; idl:1s
>>>>>>>>>>>>> [Prof] S1,C0 0s: 10377 fofs, 123363 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S1,C0: 1814p,73h,1w; wt:1s; rcv:0s,0s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>> [Prof] S2,C0 0s: 9843 fofs, 123353 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S2,C0: 5115p,139h,1w; wt:1s; rcv:0s,0s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>> [Prof] S3,C0 0s: 9616 fofs, 123359 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S3,C0: 9637p,221h,1w; wt:1s; rcv:0s,0s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>> [Prof] S4,C0 1s: 9256 fofs, 123367 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S4,C0: 14849p,314h,1w; wt:0s; rcv:0s,1s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:0s
>>>>>>>>>>>>> [Prof] S5,C0 0s: 9233 fofs, 123361 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S5,C0: 17771p,369h,1w; wt:0s; rcv:0s,0s; snd:1s; wk:0s;
>>>>>>>>>>>>> idl:0s
>>>>>>>>>>>>> [Prof] S6,C0 0s: 8964 fofs, 123365 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S6,C0: 19888p,388h,1w; wt:0s; rcv:0s,1s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:0s
>>>>>>>>>>>>> [Prof] S7,C0 0s: 8996 fofs, 123356 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S7,C0: 22352p,426h,1w; wt:0s; rcv:0s,1s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>> [Prof] S8,C0 0s: 8801 fofs, 123361 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S8,C0: 22553p,441h,1w; wt:0s; rcv:0s,1s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>> [Prof] S9,C0 1s: 8806 fofs, 123361 particles, 0s for conf.
>>>>>>>>>>>>> [Prof] S9,C0: 22685p,440h,1w; wt:0s; rcv:0s,1s; snd:0s; wk:0s;
>>>>>>>>>>>>> idl:1s
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Is there anything I should look at to debug this problem? Any
>>>>>>>>>>>>> suggestions would be appreciated.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ben
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> yt-users mailing list
>>>>>>>>>>>>> yt-users at lists.spacepope.org
>>>>>>>>>>>>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Cameron Hummels
>>>>>>>>>>>> Postdoctoral Researcher
>>>>>>>>>>>> Steward Observatory
>>>>>>>>>>>> University of Arizona
>>>>>>>>>>>> http://chummels.org
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>>
>
>


More information about the yt-users mailing list