[yt-users] cloud in cell mesh construction for particle data

Brendan Griffen brendan.f.griffen at gmail.com
Sun Jun 8 19:21:03 PDT 2014


Hi Matt,

Thanks for your detailed email. Forgive my naivety, but why do you need the
oct-tree in the first place? I have my own Fortran code for constructing a
cloud-in-cell mesh, and it uses very little overhead (just the n^3 grid and
the particle data itself). I calculate the dx, dy, dz offsets to the nearest
8 grid points and distribute the mass accordingly in an OpenMP loop, which
finishes in a fraction of a second. Does the situation with yt (the oct-tree
etc.) come about necessarily because of the way it handles particle data? Is
it essentially used to map the particles to domains in the grid, or something
else?
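
For concreteness, the kernel I have in mind is just the standard CIC
stencil. A minimal serial NumPy version (my own sketch, assuming positions
live in a periodic unit box) would be:

import numpy as np

def cic_deposit(pos, mass, n):
    # pos: (npart, 3) positions in [0, 1); mass: (npart,) particle masses
    # returns an (n, n, n) grid of deposited mass
    grid = np.zeros((n, n, n))
    scaled = pos * n - 0.5               # coordinates with cell centers at integers
    base = np.floor(scaled).astype(int)  # index of the lower neighboring cell
    frac = scaled - base                 # the dx, dy, dz weights
    for ox in (0, 1):
        for oy in (0, 1):
            for oz in (0, 1):
                w = (mass
                     * (frac[:, 0] if ox else 1 - frac[:, 0])
                     * (frac[:, 1] if oy else 1 - frac[:, 1])
                     * (frac[:, 2] if oz else 1 - frac[:, 2]))
                np.add.at(grid, ((base[:, 0] + ox) % n,   # % n wraps periodically
                                 (base[:, 1] + oy) % n,
                                 (base[:, 2] + oz) % n), w)
    return grid

My Fortran version just does these eight scatter-adds inside an OpenMP loop.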

The machine I run out of memory on has 128GB, and the snapshots use 1024^3
particles. Do you have any idea how much memory the oct-tree uses as a
function of particle/grid number? I am going to try a 256GB machine (though
this is a bit of a hassle). I'll see how I go.

Thanks.

Brendan


On Sun, Jun 8, 2014 at 6:25 PM, Matthew Turk <matthewturk at gmail.com> wrote:

> Hi all,
>
> I feel like I owe a brief explanation of why things are tricky right
> now, what we're planning on doing, and how we're experimenting and
> developing.
>
> Presently, the particle geometry handlers build a single mesh from all
> particles in the dataset, along with a coarse bitmask that correlates
> files to regions in the domain.  This requires the allocation of a
> single int64 array of size Nparticles, which is sorted in place and
> then fed into an octree construction algorithm that then spits back
> out the mesh.  Each octree component contains 3 64-bit integers and
> either a void pointer or a pointer to eight other octs.  Increasing
> n_ref decreases the number of octs in this mesh; when smoothing
> operations are conducted, a second "index" mesh is created for looking
> up particles near mesh points.  Mesh points are used for adaptive
> resolution smoothing and other "deposit particles on the grid somehow"
> operations (including SPH kernel).
>
> Anyway, because right now it requires a global mesh to be constructed,
> this is expensive and requires holding a 64-bit integer in memory for
> each particle.  I think if you're loading the particles in differently
> there is some additional overhead as well, but I'm still a bit
> surprised you OOM on a 1024^3 dataset.
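>
> As a rough back-of-the-envelope, just counting the bytes described above
> (the one-oct-per-64-particles figure is a guess, not measured):
>
> nparticles = 1024**3                  # ~1.07e9 particles
> gib = 1024.0**3
> print nparticles * 8 / gib            # int64 Morton keys:    ~8 GiB
> print nparticles * 3 * 8 / gib        # float64 x, y, z:      ~24 GiB
> print nparticles * 8 / gib            # float64 masses:       ~8 GiB
> print nparticles / 64 * 32 / gib      # octs (3 int64 + ptr): ~0.5 GiB
>
> So the raw arrays alone come to ~40 GiB, before any temporary copies made
> during loading, which is how a 128GB node can get into trouble.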
>
> In general, we don't *need* this global mesh; it can be constructed as
> required, which would speed up both the initial index phase as well as
> the final meshing process.  I got about 50% of the way to implementing
> this last fall, but because of various concerns and deadlines I
> haven't finished it.  I intend to get back to it probably in July,
> right after we put out a 3.0, so that we can have it in time for 3.1.
> In principle this will make the particle codes much more similar to
> ARTIO, in that the mesh will be constructed only as required and
> discarded when no longer required, which will make them much more
> memory efficient.
>
> But, getting a single mesh for extremely large data is a very high
> priority; right now for the 10240^3 run we've been loading up
> individual sub-chunks, which I want to stop doing.
>
> From the technical perspective, these are the things that need to
> happen on the yt side for particle datasets to move to this "lazy"
> mode of loading; most of this is based on things learned from 2HOT and
> ARTIO, and will involve converting to a forest-of-octrees.
>
>  * Split into spatially-organized subchunks of ParticleOctreeSubset
> objects, such that these map 1:Nfiles, and that can be constructed on
> the fly.
>  * Construct a dual-mesh of the bitmask "ParticleRegion" object that
> will help with identifying neighbors to a given oct cell, so that if
> we're inside one octree we know which neighbor octrees to grab if we
> need particles for smoothing things (fast boundary particle
> identification is later down the road)
>  * Parallel sort of particles, or using the parallel ring function;
> may not be necessary after all
>
> All of this is doable, and I'd be happy to work with people if they'd
> like to take a shot at implementing it, but I've mostly put it on my
> list for post-3.0.
>
> -Matt
>
> On Sun, Jun 8, 2014 at 2:43 PM, Nathan Goldbaum <nathan12343 at gmail.com> wrote:
> >
> >
> >
> > On Sun, Jun 8, 2014 at 12:27 PM, Brendan Griffen
> > <brendan.f.griffen at gmail.com> wrote:
> >>
> >> Also, how do I construct a zero-filled yt array with dimensions
> >> (ndim, ndim, ndim)? Thanks.
> >
> >
> >
> > from yt import YTArray
> > import numpy as np
> >
> > arr = YTArray(np.zeros([ndim, ndim, ndim]), input_units=units_string)
> >
> > or alternatively:
> >
> > from yt.units import kiloparsec
> >
> > arr = kiloparsec*np.zeros([ndim, ndim, ndim])
> >
> > it doesn't have to be kiloparsec - you can compose the units you want
> > out of any of the unit symbols that live in yt.units.
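> >
> > For example, to get a density array instead (a sketch; any symbols that
> > live in yt.units compose the same way):
> >
> > from yt.units import gram, centimeter
> >
> > arr = (gram / centimeter**3) * np.zeros([ndim, ndim, ndim])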
> >
> > See this page for a ton more detail about yt's new unit system:
> > http://yt-project.org/docs/dev-3.0/analyzing/units/index.html
> >
> >>
> >>
> >> Brendan
> >>
> >>
> >> On Sun, Jun 8, 2014 at 3:26 PM, Brendan Griffen
> >> <brendan.f.griffen at gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> Since I get memory errors, could I not just read in the blocks of the
> >>> output individually and then basically stack the mesh each time? That
> >>> way not every single particle of the snapshot has to be loaded at the
> >>> same time.
> >>> Would that just be a case of doing
> >>>
> >>> level = int(math.log(ndim,2))
> >>> cg = ds.covering_grid(level=level, left_edge=[0,0,0], dims=[ndim,ndim,ndim])
> >>> arr = cg['deposit', 'all_density']
> >>> arrall += arr
> >>>
> >>> in a loop over each HDF5 block?
> >
> >
> > It's likely that the memory use is dominated by the octree rather than
> > the covering grid.  This is with 1024^3 particles, correct?
> >
> > You can probably significantly reduce the memory used by the octree by
> > increasing n_ref in the call to load_particles.
> >
> > See this page for more detail about load_particles:
> > http://yt-project.org/docs/dev-3.0/examining/loading_data.html#generic-particle-data
> >
> > Larger n_ref means fewer octree cells (lower resolution), but it also
> > means lower Poisson noise and lower memory use.
> >
> > Alternatively, as Matt suggested, you could break your 1024^3 ensemble of
> > particles up into chunks and loop over the chunks, creating a particle
> > octree and then a covering grid for each subset of the particles.  Your
> > final covering grid is just the sum of the covering grids for each subset
> > of particles.
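> >
> > In code, the loop might look something like this (untested sketch;
> > read_chunk and nchunks are stand-ins for your own per-block HDF5 reader,
> > and bbox must cover the full domain so the chunks line up):
> >
> > import math
> > import numpy as np
> > import yt
> >
> > ndim = 512
> > final = np.zeros((ndim, ndim, ndim))
> >
> > for i in range(nchunks):              # nchunks = number of HDF5 blocks
> >     pos, mpart = read_chunk(i)        # hypothetical per-block reader
> >     data = {'particle_position_x': pos[:, 0],
> >             'particle_position_y': pos[:, 1],
> >             'particle_position_z': pos[:, 2],
> >             'particle_mass': mpart}
> >     ds = yt.load_particles(data, length_unit=3.08e24,
> >                            mass_unit=1.9891e33, bbox=bbox, n_ref=256)
> >     level = int(math.log(ndim, 2))
> >     cg = ds.covering_grid(level=level, left_edge=[0, 0, 0],
> >                           dims=[ndim, ndim, ndim])
> >     final += cg['deposit', 'all_density'].v   # strip units, accumulate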
> >
> >>>
> >>>
> >>> Thanks.
> >>> Brendan
> >>>
> >>>
> >>>
> >>>
> >>> On Fri, Jun 6, 2014 at 7:26 PM, Matthew Turk <matthewturk at gmail.com>
> >>> wrote:
> >>>>
> >>>>
> >>>> On Jun 6, 2014 4:54 PM, "Brendan Griffen" <brendan.f.griffen at gmail.com> wrote:
> >>>> >
> >>>> > OK great. It is very low resolution but it worked. Thanks for all
> >>>> > your help. My higher resolution run (1024^3 in 100 Mpc) seems to
> >>>> > crash on a 128GB memory machine. I might have to look elsewhere.
> >>>> >
> >>>>
> >>>> If you are looking for a low-resolution extraction, you can tune the
> >>>> memory usage by setting the parameter n_ref to something higher.
> >>>>
> >>>> Supporting extremely large datasets in a single mesh is on the roadmap
> >>>> for the late summer or fall, after a 3.0 release goes out. For now you
> >>>> can also extract before you load in; this is sort of how we are
> >>>> supporting an INCITE project with very large particle counts.
> >>>>
> >>>>
> >>>> > Also, I normally use the Canopy distribution, but I just use an
> >>>> > alias to loadyt which erases my PYTHONPATH, and I can't access scipy
> >>>> > and a few other libraries any more. What is the best practice here?
> >>>> > Should I just manually export PYTHONPATH and point to the libraries
> >>>> > needed in Canopy, or can they play nice together?
> >>>> >
> >>>> > Thanks.
> >>>> >
> >>>> > BG
> >>>> >
> >>>> >
> >>>> > On Fri, Jun 6, 2014 at 2:54 PM, Nathan Goldbaum
> >>>> > <nathan12343 at gmail.com> wrote:
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >> On Fri, Jun 6, 2014 at 11:48 AM, Brendan Griffen
> >>>> >> <brendan.f.griffen at gmail.com> wrote:
> >>>> >>>
> >>>> >>> OK great. Thanks. I just wanted a homogeneous mesh, 512^3 with no
> >>>> >>> nesting of any kind. Though when I plot the image it looks like it
> >>>> >>> is assigning particles incorrectly (low resolution on the outside).
> >>>> >>> This is just a test image.
> >>>> >>>
> >>>> >>
> >>>> >> The SlicePlot is visualizing the octree, so there is less resolution
> >>>> >> where there are fewer particles. If you want to visualize the
> >>>> >> covering grid, you're going to need to visualize that separately.
> >>>> >>
> >>>> >>>
> >>>> >>> ds = yt.load_particles(data, length_unit=3.08e24, mass_unit=1.9891e33, bbox=bbox)
> >>>> >>>
> >>>> >>> ad = ds.all_data()
> >>>> >>> print ad['deposit', 'all_cic']
> >>>> >>> slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))
> >>>> >>> slc.set_figure_size(4)
> >>>> >>> cg = ds.covering_grid(level=9, left_edge=[0,0,0], dims=[512,512,512])
> >>>> >>>
> >>>> >>
> >>>> >> To actually produce the uniform resolution ndarray, you're going to
> >>>> >> need to do something like:
> >>>> >>
> >>>> >> array = cg[('deposit', 'all_cic')]
> >>>> >>
> >>>> >> array will then be a 3D array you can do whatever you want with. By
> >>>> >> default it has units, but to strip them off you'll just need to cast
> >>>> >> to ndarray:
> >>>> >>
> >>>> >> array_without_units = array.v
> >>>> >>
> >>>> >>
> >>>> >>>
> >>>> >>> Also, is there a way to load multiple particle types?
> >>>> >>>
> >>>> >>> Do I just need to stack the particles into the array here?
> >>>> >>>
> >>>> >>> data = {'particle_position_x': pos[:,0],
> >>>> >>>         'particle_position_y': pos[:,1],
> >>>> >>>         'particle_position_z': pos[:,2],
> >>>> >>>         'particle_mass': np.array([mpart]*npart)}
> >>>> >>>
> >>>> >>> Then feed it in as usual?
> >>>> >>
> >>>> >>
> >>>> >> That's right, although if the particle masses are different for the
> >>>> >> different particle types, that code snippet will need to be
> >>>> >> generalized to handle that.
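> >>>> >>
> >>>> >> Something like this (sketch; pos_dm, pos_gas, m_dm, and m_gas stand
> >>>> >> in for your own per-type arrays):
> >>>> >>
> >>>> >> import numpy as np
> >>>> >>
> >>>> >> pos = np.concatenate([pos_dm, pos_gas])       # (n_dm + n_gas, 3)
> >>>> >> mass = np.concatenate([np.ones(len(pos_dm)) * m_dm,
> >>>> >>                        np.ones(len(pos_gas)) * m_gas])
> >>>> >>
> >>>> >> data = {'particle_position_x': pos[:, 0],
> >>>> >>         'particle_position_y': pos[:, 1],
> >>>> >>         'particle_position_z': pos[:, 2],
> >>>> >>         'particle_mass': mass}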
> >>>> >>
> >>>> >> I think in principle it should be possible to make load_particles
> >>>> >> handle different particle types just like an SPH dataset that
> >>>> >> contains multiple particle types, but right now that hasn't been
> >>>> >> implemented yet.
> >>>> >>
> >>>> >>>
> >>>> >>>
> >>>> >>> Brendan
> >>>> >>>
> >>>> >>>
> >>>> >>> On Thu, Jun 5, 2014 at 9:44 PM, Nathan Goldbaum
> >>>> >>> <nathan12343 at gmail.com> wrote:
> >>>> >>>>
> >>>> >>>> That's right, you can set that via the bbox keyword parameter for
> >>>> >>>> load_particles.  I'd urge you to take a look at the docstrings and
> >>>> >>>> source code for load_particles.
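> >>>> >>>>
> >>>> >>>> For your 100 Mpc box, that would be something like (a sketch, in
> >>>> >>>> the same code units as your positions):
> >>>> >>>>
> >>>> >>>> bbox = np.array([[0., 100.], [0., 100.], [0., 100.]])
> >>>> >>>> ds = yt.load_particles(data, length_unit=3.08e24,
> >>>> >>>>                        mass_unit=1.9891e33, bbox=bbox)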
> >>>> >>>>
> >>>> >>>>
> >>>> >>>> On Thu, Jun 5, 2014 at 6:34 PM, Brendan Griffen
> >>>> >>>> <brendan.f.griffen at gmail.com> wrote:
> >>>> >>>>>
> >>>> >>>>> Thanks very much Nathan. I tried to load in my own data, but I
> >>>> >>>>> think there are too many particles or I have to specifically set
> >>>> >>>>> the domain size.
> >>>> >>>>>
> >>>> >>>>> In this area:
> >>>> >>>>>
> >>>> >>>>> data = {'particle_position_x': pos[:,0],
> >>>> >>>>>         'particle_position_y': pos[:,1],
> >>>> >>>>>         'particle_position_z': pos[:,2],
> >>>> >>>>>         'particle_mass': np.array([mpart]*npart)}
> >>>> >>>>>
> >>>> >>>>> ds = yt.load_particles(data, length_unit=3.08e24, mass_unit=1.9891e36)
> >>>> >>>>> ad = ds.all_data()
> >>>> >>>>> print ad['deposit', 'all_cic']
> >>>> >>>>>
> >>>> >>>>> In [3]: run ytcic.py
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,183 Parameters: current_time              = 0.0
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,183 Parameters: domain_dimensions         = [2 2 2]
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,184 Parameters: domain_left_edge          = [ 0.  0.  0.]
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,185 Parameters: domain_right_edge         = [ 1.  1.  1.]
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,185 Parameters: cosmological_simulation   = 0.0
> >>>> >>>>> yt : [INFO     ] 2014-06-05 21:29:06,188 Allocating for 1.342e+08 particles
> >>>> >>>>>
> >>>> >>>>> ---------------------------------------------------------------------------
> >>>> >>>>> YTDomainOverflow                          Traceback (most recent call last)
> >>>> >>>>>
> >>>> >>>>> /nfs/blank/h4231/bgriffen/data/lib/yt-x86_64/lib/python2.7/site-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
> >>>> >>>>>     202             else:
> >>>> >>>>>     203                 filename = fname
> >>>> >>>>> --> 204             __builtin__.execfile(filename, *where)
> >>>> >>>>>
> >>>> >>>>> /nfs/blank/h4231/bgriffen/work/projects/caterpillar/c2ray/cic/ytcic.py in <module>()
> >>>> >>>>>      52
> >>>> >>>>>      53 ad = ds.all_data()
> >>>> >>>>> ---> 54 print ad['deposit', 'all_cic']
> >>>> >>>>>      55 slc = yt.SlicePlot(ds, 2, ('deposit', 'all_cic'))
> >>>> >>>>>      56 slc.set_figure_size(4)
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/data_objects/data_containers.pyc in __getitem__(self, key)
> >>>> >>>>>     205         Returns a single field.  Will add if necessary.
> >>>> >>>>>     206         """
> >>>> >>>>> --> 207         f = self._determine_fields([key])[0]
> >>>> >>>>>     208         if f not in self.field_data and key not in self.field_data:
> >>>> >>>>>     209             if f in self._container_fields:
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/data_objects/data_containers.pyc in _determine_fields(self, fields)
> >>>> >>>>>     453                     raise YTFieldNotParseable(field)
> >>>> >>>>>     454                 ftype, fname = field
> >>>> >>>>> --> 455                 finfo = self.pf._get_field_info(ftype, fname)
> >>>> >>>>>     456             else:
> >>>> >>>>>     457                 fname = field
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/data_objects/static_output.pyc in _get_field_info(self, ftype, fname)
> >>>> >>>>>     445     _last_finfo = None
> >>>> >>>>>     446     def _get_field_info(self, ftype, fname = None):
> >>>> >>>>> --> 447         self.index
> >>>> >>>>>     448         if fname is None:
> >>>> >>>>>     449             ftype, fname = "unknown", ftype
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/data_objects/static_output.pyc in index(self)
> >>>> >>>>>     277                 raise RuntimeError("You should not instantiate Dataset.")
> >>>> >>>>>     278             self._instantiated_index = self._index_class(
> >>>> >>>>> --> 279                 self, dataset_type=self.dataset_type)
> >>>> >>>>>     280             # Now we do things that we need an instantiated index for
> >>>> >>>>>     281             # ...first off, we create our field_info now.
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/frontends/stream/data_structures.pyc in __init__(self, pf, dataset_type)
> >>>> >>>>>     942     def __init__(self, pf, dataset_type = None):
> >>>> >>>>>     943         self.stream_handler = pf.stream_handler
> >>>> >>>>> --> 944         super(StreamParticleIndex, self).__init__(pf, dataset_type)
> >>>> >>>>>     945
> >>>> >>>>>     946     def _setup_data_io(self):
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/geometry/particle_geometry_handler.pyc in __init__(self, pf, dataset_type)
> >>>> >>>>>      48         self.directory = os.path.dirname(self.index_filename)
> >>>> >>>>>      49         self.float_type = np.float64
> >>>> >>>>> ---> 50         super(ParticleIndex, self).__init__(pf, dataset_type)
> >>>> >>>>>      51
> >>>> >>>>>      52     def _setup_geometry(self):
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/geometry/geometry_handler.pyc in __init__(self, pf, dataset_type)
> >>>> >>>>>      54
> >>>> >>>>>      55         mylog.debug("Setting up domain geometry.")
> >>>> >>>>> ---> 56         self._setup_geometry()
> >>>> >>>>>      57
> >>>> >>>>>      58         mylog.debug("Initializing data grid data IO")
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/geometry/particle_geometry_handler.pyc in _setup_geometry(self)
> >>>> >>>>>      52     def _setup_geometry(self):
> >>>> >>>>>      53         mylog.debug("Initializing Particle Geometry Handler.")
> >>>> >>>>> ---> 54         self._initialize_particle_handler()
> >>>> >>>>>      55
> >>>> >>>>>      56
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/geometry/particle_geometry_handler.pyc in _initialize_particle_handler(self)
> >>>> >>>>>      87                 pf.domain_left_edge, pf.domain_right_edge,
> >>>> >>>>>      88                 [N, N, N], len(self.data_files))
> >>>> >>>>> ---> 89         self._initialize_indices()
> >>>> >>>>>      90         self.oct_handler.finalize()
> >>>> >>>>>      91         self.max_level = self.oct_handler.max_level
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/geometry/particle_geometry_handler.pyc in _initialize_indices(self)
> >>>> >>>>>     109             npart = sum(data_file.total_particles.values())
> >>>> >>>>>     110             morton[ind:ind + npart] = \
> >>>> >>>>> --> 111                 self.io._initialize_index(data_file, self.regions)
> >>>> >>>>>     112             ind += npart
> >>>> >>>>>     113         morton.sort()
> >>>> >>>>>
> >>>> >>>>> /bigbang/data/bgriffen/lib/yt-x86_64/src/yt-hg/yt/frontends/stream/io.pyc in _initialize_index(self, data_file, regions)
> >>>> >>>>>     144                 raise YTDomainOverflow(pos.min(axis=0), pos.max(axis=0),
> >>>> >>>>>     145                                        data_file.pf.domain_left_edge,
> >>>> >>>>> --> 146                                        data_file.pf.domain_right_edge)
> >>>> >>>>>     147             regions.add_data_file(pos, data_file.file_id)
> >>>> >>>>>     148             morton.append(compute_morton(
> >>>> >>>>>
> >>>> >>>>> YTDomainOverflow: Particle bounds [ 0.  0.  0.] and [ 99.99999237  99.99999237  99.99999237] exceed domain bounds [ 0.  0.  0.] code_length and [ 1.  1.  1.] code_length
> >>>> >>>>>
> >>>> >>>>>
> >>>> >>>>>
> >>>> >>>>> On Thu, Jun 5, 2014 at 8:22 PM, Nathan Goldbaum
> >>>> >>>>> <nathan12343 at gmail.com> wrote:
> >>>> >>>>>>
> >>>> >>>>>> Here's a worked out example that does what you're looking for
> >>>> >>>>>> using a fake 1 million particle dataset:
> >>>> >>>>>>
> >>>> >>>>>>
> >>>> >>>>>> http://nbviewer.ipython.org/gist/ngoldbaum/546d37869aafe71cfe38
> >>>> >>>>>>
> >>>> >>>>>> In this notebook I make use of two key yt features:
> >>>> >>>>>> `load_particles`, and `covering_grid`.
> >>>> >>>>>>
> >>>> >>>>>> load_particles creates a "stream" dataset based on in-memory
> >>>> >>>>>> data fed in as a numpy array.  This dataset acts just like an
> >>>> >>>>>> on-disk simulation dataset, but doesn't come with the baggage of
> >>>> >>>>>> needing to write a custom frontend to read a specific data format
> >>>> >>>>>> off disk.
> >>>> >>>>>>
> >>>> >>>>>> covering_grid is a way to generate uniform resolution data from
> >>>> >>>>>> an AMR dataset. It acts like a python dictionary where the keys
> >>>> >>>>>> are field names, and returns 3D numpy arrays of whatever uniform
> >>>> >>>>>> resolution you specify when you create the covering_grid.
> >>>> >>>>>>
> >>>> >>>>>> Note that if you're using load_particles, all of your data needs
> >>>> >>>>>> to live in memory.  If your data is too big for that, you'll need
> >>>> >>>>>> to write a frontend for your data format or use a memmap to an
> >>>> >>>>>> on-disk file somehow.  I'm not an expert on that, but others on
> >>>> >>>>>> the list should be able to help out.
> >>>> >>>>>>
> >>>> >>>>>> Hope that gets you well on your way :)
> >>>> >>>>>>
> >>>> >>>>>> -Nathan
> >>>> >>>>>>
> >>>> >>>>>>
> >>>> >>>>>> On Thu, Jun 5, 2014 at 5:04 PM, Desika Narayanan
> >>>> >>>>>> <dnarayan at haverford.edu> wrote:
> >>>> >>>>>>>
> >>>> >>>>>>> Hey Brendan,
> >>>> >>>>>>>
> >>>> >>>>>>> A couple of extra tools you might find helpful in conjunction
> >>>> >>>>>>> with Nathan's example of depositing the particles onto an octree
> >>>> >>>>>>> are at:
> >>>> >>>>>>>
> >>>> >>>>>>> http://paste.yt-project.org/show/4737/
> >>>> >>>>>>>
> >>>> >>>>>>> Where I load a Gadget snapshot and then recover the coordinates
> >>>> >>>>>>> and width of each cell.
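> >>>> >>>>>>>
> >>>> >>>>>>> The gist of what the paste does (a sketch; 'snapshot' stands in
> >>>> >>>>>>> for your Gadget file):
> >>>> >>>>>>>
> >>>> >>>>>>> ds = yt.load('snapshot')
> >>>> >>>>>>> ad = ds.all_data()
> >>>> >>>>>>> x, y, z = ad['index', 'x'], ad['index', 'y'], ad['index', 'z']
> >>>> >>>>>>> dx = ad['index', 'dx']   # cell widths vary across the octree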
> >>>> >>>>>>>
> >>>> >>>>>>> In response to your last question - the particles are deposited
> >>>> >>>>>>> into an octree grid (so you'll see that the cell sizes aren't
> >>>> >>>>>>> all the same).  I don't know if depositing onto a regular NxNxN
> >>>> >>>>>>> mesh is possible, though I would be interested to hear if so.
> >>>> >>>>>>>
> >>>> >>>>>>> -d
> >>>> >>>>>>>
> >>>> >>>>>>>
> >>>> >>>>>>> On Thu, Jun 5, 2014 at 7:58 PM, Brendan Griffen
> >>>> >>>>>>> <brendan.f.griffen at gmail.com> wrote:
> >>>> >>>>>>>>
> >>>> >>>>>>>> Thanks. I'll get the "bleeding edge" version first, then try
> >>>> >>>>>>>> your suggestions. Though I want to get back the NxNxN array and
> >>>> >>>>>>>> be able to write this mesh to a file. It would *only* be using
> >>>> >>>>>>>> the CIC part of yt, and it should return the mesh to be
> >>>> >>>>>>>> written? Just wanted to clarify.
> >>>> >>>>>>>>
> >>>> >>>>>>>> Thanks.
> >>>> >>>>>>>> Brendan
> >>>> >>>>>>>>
> >>>> >>>>>>>>
> >>>> >>>>>>>> On Thu, Jun 5, 2014 at 6:49 PM, Nathan Goldbaum
> >>>> >>>>>>>> <nathan12343 at gmail.com> wrote:
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> On Thu, Jun 5, 2014 at 3:36 PM, John ZuHone
> >>>> >>>>>>>>> <jzuhone at gmail.com> wrote:
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Hi Brendan,
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Which version of yt are you using?
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> If you're using 3.0, this is actually fairly easy. If you
> >>>> >>>>>>>>>> look in yt.fields.particle_fields.py, around line 85, you can
> >>>> >>>>>>>>>> see how this is done for the "particle_density" and
> >>>> >>>>>>>>>> "particle_mass" fields. Basically you can call a "deposit"
> >>>> >>>>>>>>>> method which takes the particle field quantity you want
> >>>> >>>>>>>>>> deposited and deposits it into cells. The underlying
> >>>> >>>>>>>>>> calculation is done using Cython, so it's fast.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> And you shouldn't ever actually need to call these "deposit"
> >>>> >>>>>>>>> functions, since "deposit" is exposed as a field type for
> all datasets that
> >>>> >>>>>>>>> contain particles.
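> >>>> >>>>>>>>>
> >>>> >>>>>>>>> For example, on any dataset ds that contains particles:
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> ad = ds.all_data()
> >>>> >>>>>>>>> rho = ad['deposit', 'all_density']   # deposited mass density
> >>>> >>>>>>>>> cic = ad['deposit', 'all_cic']       # cloud-in-cell deposit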
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> Here is a notebook that does this for Enzo AMR data:
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> http://nbviewer.ipython.org/gist/ngoldbaum/5e19e4e6cc2bf330149c
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> This dataset contains about a million particles and generates
> >>>> >>>>>>>>> a CIC deposition for the whole domain in about 6 seconds from
> >>>> >>>>>>>>> a cold start.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> If you're using 2.x, then you can do the same thing, but it's
> >>>> >>>>>>>>>> not as straightforward. You can see how this works in
> >>>> >>>>>>>>>> yt.data_objects.universal_fields.py, around line 986, where
> >>>> >>>>>>>>>> the "particle_density" field is defined. Basically, it calls
> >>>> >>>>>>>>>> CICDeposit_3, which is in yt.utilities.lib.CICDeposit.pyx.
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Let me know if you need any more clarification.
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Best,
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> John Z
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> On Jun 5, 2014, at 6:07 PM, Brendan Griffen
> >>>> >>>>>>>>>> <brendan.f.griffen at gmail.com> wrote:
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> > Hi,
> >>>> >>>>>>>>>> >
> >>>> >>>>>>>>>> > I was wondering if there were any Cython routines within yt
> >>>> >>>>>>>>>> > which take particle data and convert it into a
> >>>> >>>>>>>>>> > cloud-in-cell based mesh which can be written to a file of
> >>>> >>>>>>>>>> > my choosing.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> What sort of mesh were you looking for?  yt will internally
> >>>> >>>>>>>>> construct an octree if it is fed particle data.  I'm not
> sure whether this
> >>>> >>>>>>>>> octree can be saved to disk for later analysis.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> It's also possible to create a uniform resolution covering
> >>>> >>>>>>>>> grid containing field data for a deposited quantity, which
> can be quite
> >>>> >>>>>>>>> easily saved to disk in a number of ways.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> I heard a while ago there was some such functionality, but it
> >>>> >>>>>>>>>> could be too far down the yt rabbit hole to be used as a
> >>>> >>>>>>>>>> standalone? Is this true? I have my own Python code for doing
> >>>> >>>>>>>>>> it, but it just isn't fast enough, and I thought I'd ask the
> >>>> >>>>>>>>>> yt community if there were any wrapper tools available to
> >>>> >>>>>>>>>> boost the speed.
> >>>> >>>>>>>>>> >
> >>>> >>>>>>>>>> > Thanks.
> >>>> >>>>>>>>>> > Brendan