[yt-svn] commit/yt: ngoldbaum: Merged in atmyers/yt (pull request #2497)

commits-noreply at bitbucket.org
Tue Jan 24 08:15:00 PST 2017


1 new commit in yt:

https://bitbucket.org/yt_analysis/yt/commits/5c3af8da1dd8/
Changeset:   5c3af8da1dd8
Branch:      yt
User:        ngoldbaum
Date:        2017-01-24 16:14:53+00:00
Summary:     Merged in atmyers/yt (pull request #2497)

Boxlib particle support
Affected #:  9 files

diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f doc/source/examining/loading_data.rst
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -274,13 +274,13 @@
 BoxLib Data
 -----------
 
-yt has been tested with BoxLib data generated by Orion, Nyx, Maestro and
-Castro.  Currently it is cared for by a combination of Andrew Myers, Chris
+yt has been tested with BoxLib data generated by Orion, Nyx, Maestro, Castro, 
+and WarpX. Currently it is cared for by a combination of Andrew Myers, Chris
 Malone, Matthew Turk, and Mike Zingale.
 
 To load a BoxLib dataset, you can use the ``yt.load`` command on
 the plotfile directory name.  In general, you must also have the
-``inputs`` file in the base directory, but Maestro and Castro will get
+``inputs`` file in the base directory, but Maestro, Castro, and WarpX will get
 all the necessary parameter information from the ``job_info`` file in
 the plotfile directory.  For instance, if you were in a
 directory with the following files:
@@ -308,7 +308,7 @@
    import yt
    ds = yt.load("pltgmlcs5600")
 
-For Maestro and Castro, you would not need the ``inputs`` file, and you
+For Maestro, Castro, and WarpX, you would not need the ``inputs`` file, and you
 would have a ``job_info`` file in the plotfile directory.
 
 .. rubric:: Caveats
@@ -316,7 +316,9 @@
 * yt does not read the Maestro base state (although you can have Maestro
   map it to a full Cartesian state variable before writing the plotfile
   to get around this).  E-mail the dev list if you need this support.
-* yt does not know about particles in Maestro.
+* yt supports BoxLib particle data stored in the standard format used 
+  by Nyx and WarpX. It currently does not support the ASCII particle
+  data used by Maestro and Castro.
 * For Maestro, yt aliases either "tfromp" or "tfromh" to ``temperature``
   depending on the value of the ``use_tfromp`` runtime parameter.
 * For Maestro, some velocity fields like ``velocity_magnitude`` or

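[Editor's note] The documentation hunk above says that Maestro, Castro, and WarpX plotfiles carry their own parameters in a ``job_info``/``warpx_job_info`` file, so the separate ``inputs`` file is not needed. A minimal sketch of that detection logic, using a throwaway temporary directory in place of a real plotfile (the helper name and files here are illustrative, not yt's actual implementation):

```python
import os
import tempfile

def needs_inputs_file(plotfile_dir):
    """Return False if the plotfile carries its own parameters
    (a job_info or warpx_job_info file), True otherwise.
    Sketch of the check described in the docs, not yt's code."""
    for fn in ("job_info", "warpx_job_info"):
        if os.path.exists(os.path.join(plotfile_dir, fn)):
            return False
    return True

# Build a fake WarpX-style plotfile directory to exercise the check.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "Header"), "w").close()
    open(os.path.join(d, "warpx_job_info"), "w").close()
    print(needs_inputs_file(d))   # False: parameters travel with the plotfile
```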
diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f doc/source/reference/code_support.rst
--- a/doc/source/reference/code_support.rst
+++ b/doc/source/reference/code_support.rst
@@ -46,7 +46,7 @@
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
 | MOAB                  |     Y      |    N/A    |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
-| Nyx                   |     Y      |     N     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
+| Nyx                   |     Y      |     Y     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
 | openPMD               |     Y      |     Y     |      N     |   Y   |    Y     |    Y     |     N      | Partial  |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
@@ -62,6 +62,8 @@
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
 | Tipsy                 |     Y      |     Y     |      Y     |   Y   | Y [#f2]_ |    Y     |     Y      |   Full   |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
+| WarpX                 |     Y      |     Y     |      Y     |   Y   |    Y     |    Y     |     Y      |   Full   |
++-----------------------+------------+-----------+------------+-------+----------+----------+------------+----------+
 
 .. [#f1] one-dimensional base-state not read in currently.
 .. [#f2] These handle mesh fields using an in-memory octree that has not been parallelized.

diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f tests/tests.yaml
--- a/tests/tests.yaml
+++ b/tests/tests.yaml
@@ -68,8 +68,21 @@
     - yt/visualization/tests/test_mesh_slices.py:test_tri2
     - yt/visualization/tests/test_mesh_slices.py:test_multi_region
 
-  local_orion_001:
-    - yt/frontends/boxlib/tests/test_orion.py
+  local_boxlib_002:
+    - yt/frontends/boxlib/tests/test_outputs.py:test_radadvect
+    - yt/frontends/boxlib/tests/test_outputs.py:test_radtube
+    - yt/frontends/boxlib/tests/test_outputs.py:test_star
+    - yt/frontends/boxlib/tests/test_outputs.py:test_OrionDataset
+    - yt/frontends/boxlib/tests/test_outputs.py:test_units_override
+
+  local_boxlib_particles_001:
+    - yt/frontends/boxlib/tests/test_outputs.py:test_LyA
+    - yt/frontends/boxlib/tests/test_outputs.py:test_nyx_particle_io
+    - yt/frontends/boxlib/tests/test_outputs.py:test_langmuir
+    - yt/frontends/boxlib/tests/test_outputs.py:test_plasma
+    - yt/frontends/boxlib/tests/test_outputs.py:test_warpx_particle_io
+    - yt/frontends/boxlib/tests/test_outputs.py:test_NyxDataset
+    - yt/frontends/boxlib/tests/test_outputs.py:test_WarpXDataset
 
   local_ramses_001:
     - yt/frontends/ramses/tests/test_outputs.py

diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/api.py
--- a/yt/frontends/boxlib/api.py
+++ b/yt/frontends/boxlib/api.py
@@ -20,12 +20,18 @@
       OrionHierarchy, \
       OrionDataset, \
       CastroDataset, \
-      MaestroDataset
+      MaestroDataset, \
+      NyxDataset, \
+      NyxHierarchy, \
+      WarpXDataset, \
+      WarpXHierarchy
 
 from .fields import \
       BoxlibFieldInfo, \
       MaestroFieldInfo, \
-      CastroFieldInfo
+      CastroFieldInfo, \
+      NyxFieldInfo, \
+      WarpXFieldInfo
 
 from .io import \
       IOHandlerBoxlib

diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/data_structures.py
--- a/yt/frontends/boxlib/data_structures.py
+++ b/yt/frontends/boxlib/data_structures.py
@@ -16,19 +16,21 @@
 import inspect
 import os
 import re
+from collections import namedtuple
 
 from stat import ST_CTIME
 
 import numpy as np
+import glob
 
 from yt.funcs import \
     ensure_tuple, \
     mylog, \
     setdefaultattr
 from yt.data_objects.grid_patch import AMRGridPatch
-from yt.extern.six.moves import zip as izip
 from yt.geometry.grid_geometry_handler import GridIndex
 from yt.data_objects.static_output import Dataset
+from yt.units import YTQuantity
 
 from yt.utilities.parallel_tools.parallel_analysis_interface import \
     parallel_root_only
@@ -39,9 +41,10 @@
 
 from .fields import \
     BoxlibFieldInfo, \
+    NyxFieldInfo, \
     MaestroFieldInfo, \
-    CastroFieldInfo
-
+    CastroFieldInfo, \
+    WarpXFieldInfo
 
 # This is what we use to find scientific notation that might include d's
 # instead of e's.
@@ -74,6 +77,7 @@
         self._base_offset = offset
         self._parent_id = []
         self._children_ids = []
+        self._pdata = {}
 
     def _prepare_grid(self):
         super(BoxlibGrid, self)._prepare_grid()
@@ -142,6 +146,7 @@
         self.dataset_type = dataset_type
         self.header_filename = os.path.join(ds.output_dir, 'Header')
         self.directory = ds.output_dir
+        self.particle_headers = {}
 
         GridIndex.__init__(self, ds, dataset_type)
         self._cache_endianness(self.grids[-1])
@@ -360,6 +365,32 @@
     def _setup_data_io(self):
         self.io = io_registry[self.dataset_type](self.dataset)
 
+    def _read_particles(self, directory_name, is_checkpoint, extra_field_names=None):
+
+        self.particle_headers[directory_name] = BoxLibParticleHeader(self.ds,
+                                                                     directory_name,
+                                                                     is_checkpoint,
+                                                                     extra_field_names)
+
+        base_particle_fn = self.ds.output_dir + '/' + directory_name + "/Level_%d/DATA_%.4d"
+
+        gid = 0
+        for lev, data in self.particle_headers[directory_name].data_map.items():
+            for pdf in data.values():
+                pdict = self.grids[gid]._pdata
+                pdict[directory_name] = {}
+                pdict[directory_name]["particle_filename"] = base_particle_fn % \
+                                                             (lev, pdf.file_number)
+                pdict[directory_name]["offset"] = pdf.offset
+                pdict[directory_name]["NumberOfParticles"] = pdf.num_particles
+                self.grid_particle_count[gid] += pdf.num_particles
+                self.grids[gid].NumberOfParticles += pdf.num_particles
+                gid += 1
+
+        # add particle fields to field_list
+        pfield_list = self.particle_headers[directory_name].known_fields
+        self.field_list.extend(pfield_list)
+
 
 class BoxlibDataset(Dataset):
     """
@@ -690,7 +721,7 @@
 
     def _read_particles(self):
         """
-        reads in particles and assigns them to grids. Will search for
+        Reads in particles and assigns them to grids. Will search for
         Star particles, then sink particles if no star particle file
         is found, and finally will simply note that no particles are
         found if neither works. To add a new Orion particle type,
@@ -778,6 +809,9 @@
         if os.path.exists(jobinfo_filename):
             return False
         # Now we check for all the others
+        warpx_jobinfo_filename = os.path.join(output_dir, "warpx_job_info")
+        if os.path.exists(warpx_jobinfo_filename):
+            return False
         lines = open(inputs_filename).readlines()
         if any(("castro." in line for line in lines)): return False
         if any(("nyx." in line for line in lines)): return False
@@ -910,47 +944,23 @@
 
     def __init__(self, ds, dataset_type='nyx_native'):
         super(NyxHierarchy, self).__init__(ds, dataset_type)
-        self._read_particle_header()
-
-    def _read_particle_header(self):
-        if not self.ds.parameters["particles"]:
-            self.pgrid_info = np.zeros((self.num_grids, 3), dtype='int64')
-            return
-        for fn in ['particle_position_%s' % ax for ax in 'xyz'] + \
-                  ['particle_mass'] +  \
-                  ['particle_velocity_%s' % ax for ax in 'xyz']:
-            self.field_list.append(("io", fn))
-        header = open(os.path.join(self.ds.output_dir, "DM", "Header"))
-        version = header.readline()  # NOQA
-        ndim = header.readline()  # NOQA
-        nfields = header.readline()  # NOQA
-        ntotalpart = int(header.readline())  # NOQA
-        nextid = header.readline()  # NOQA
-        maxlevel = int(header.readline())  # NOQA
 
-        # Skip over how many grids on each level; this is degenerate
-        for i in range(maxlevel + 1):
-            header.readline()
+        # extra beyond the base real fields that all Boxlib
+        # particles have, i.e. the xyz positions
+        nyx_extra_real_fields = ['particle_mass',
+                                 'particle_velocity_x',
+                                 'particle_velocity_y',
+                                 'particle_velocity_z']
 
-        grid_info = np.fromiter((int(i) for line in header.readlines()
-                                 for i in line.split()),
-                                dtype='int64',
-                                count=3*self.num_grids).reshape((self.num_grids, 3))
-        # we need grid_info in `populate_grid_objects`, so save it to self
+        is_checkpoint = False
 
-        for g, pg in izip(self.grids, grid_info):
-            g.particle_filename = os.path.join(self.ds.output_dir, "DM",
-                                               "Level_%s" % (g.Level),
-                                               "DATA_%04i" % pg[0])
-            g.NumberOfParticles = pg[1]
-            g._particle_offset = pg[2]
-
-        self.grid_particle_count[:, 0] = grid_info[:, 1]
+        self._read_particles("DM", is_checkpoint, nyx_extra_real_fields)
 
 
 class NyxDataset(BoxlibDataset):
 
     _index_class = NyxHierarchy
+    _field_info_class = NyxFieldInfo
 
     @classmethod
     def _is_valid(cls, *args, **kwargs):
@@ -1013,7 +1023,7 @@
         if os.path.isdir(os.path.join(self.output_dir, "DM")):
             # we have particles
             self.parameters["particles"] = 1 
-            self.particle_types = ("io",)
+            self.particle_types = ("DM",)
             self.particle_types_raw = self.particle_types
 
     def _set_code_unit_attributes(self):
@@ -1044,3 +1054,219 @@
     if len(vals) == 1:
         vals = vals[0]
     return vals
+
+
+class BoxLibParticleHeader(object):
+
+    def __init__(self, ds, directory_name, is_checkpoint, extra_field_names=None):
+
+        self.species_name = directory_name
+        header_filename =  ds.output_dir + "/" + directory_name + "/Header"
+        with open(header_filename, "r") as f:
+            self.version_string = f.readline().strip()
+
+            particle_real_type = self.version_string.split('_')[-1]
+            if particle_real_type == 'double':
+                self.real_type = np.float64
+            elif particle_real_type == 'single':
+                self.real_type = np.float32
+            else:
+                raise RuntimeError("yt did not recognize particle real type.")
+            self.int_type = np.int32
+
+            self.dim = int(f.readline().strip())
+            self.num_int_base = 2 + self.dim
+            self.num_real_base = self.dim
+            self.num_int_extra = 0  # this should be written by Boxlib, but isn't
+            self.num_real_extra = int(f.readline().strip())
+            self.num_int = self.num_int_base + self.num_int_extra
+            self.num_real = self.num_real_base + self.num_real_extra            
+            self.num_particles = int(f.readline().strip())
+            self.max_next_id = int(f.readline().strip())
+            self.finest_level = int(f.readline().strip())
+            self.num_levels = self.finest_level + 1
+
+            # Boxlib particles can be written in checkpoint or plotfile mode
+            # The base integer fields are only there for checkpoints, but some
+            # codes use the checkpoint format for plotting
+            if not is_checkpoint:
+                self.num_int_base = 0
+                self.num_int_extra = 0
+                self.num_int = 0
+
+            self.grids_per_level = np.zeros(self.num_levels, dtype='int64')
+            self.data_map = {}
+            for level_num in range(self.num_levels):
+                self.grids_per_level[level_num] = int(f.readline().strip())
+                self.data_map[level_num] = {}
+            
+            pfd = namedtuple("ParticleFileDescriptor",
+                             ["file_number", "num_particles", "offset"])
+
+            for level_num in range(self.num_levels):
+                for grid_num in range(self.grids_per_level[level_num]):
+                    entry = [int(val) for val in f.readline().strip().split()]
+                    self.data_map[level_num][grid_num] = pfd(*entry)
+
+        self._generate_particle_fields(extra_field_names)
+
+    def _generate_particle_fields(self, extra_field_names):
+
+        # these are the 'base' integer fields
+        self.known_int_fields = [(self.species_name, "particle_id"),
+                                 (self.species_name, "particle_cpu"),
+                                 (self.species_name, "particle_cell_x"),
+                                 (self.species_name, "particle_cell_y"),
+                                 (self.species_name, "particle_cell_z")]
+        self.known_int_fields = self.known_int_fields[0:self.num_int_base]
+
+        # these are extra integer fields
+        extra_int_fields = ["particle_int_comp%d" % i 
+                            for i in range(self.num_int_extra)]
+        self.known_int_fields.extend([(self.species_name, field) 
+                                      for field in extra_int_fields])
+
+        # these are the base real fields
+        self.known_real_fields = [(self.species_name, "particle_position_x"),
+                                  (self.species_name, "particle_position_y"),
+                                  (self.species_name, "particle_position_z")]
+        self.known_real_fields = self.known_real_fields[0:self.num_real_base]
+
+        # these are the extras
+        if extra_field_names is not None:
+            assert(len(extra_field_names) == self.num_real_extra)
+        else:
+            extra_field_names = ["particle_real_comp%d" % i 
+                                 for i in range(self.num_real_extra)]
+
+        self.known_real_fields.extend([(self.species_name, field) 
+                                       for field in extra_field_names])
+
+        self.known_fields = self.known_int_fields + self.known_real_fields
+
+        self.particle_int_dtype = np.dtype([(t[1], self.int_type) 
+                                            for t in self.known_int_fields])
+
+        self.particle_real_dtype = np.dtype([(t[1], self.real_type) 
+                                            for t in self.known_real_fields])
+
+
+class WarpXHierarchy(BoxlibHierarchy):
+
+    def __init__(self, ds, dataset_type="boxlib_native"):
+        super(WarpXHierarchy, self).__init__(ds, dataset_type)
+
+        # extra beyond the base real fields that all Boxlib
+        # particles have, i.e. the xyz positions
+        warpx_extra_real_fields = ['particle_weight',
+                                   'particle_velocity_x',
+                                   'particle_velocity_y',
+                                   'particle_velocity_z']
+
+        is_checkpoint = False
+
+        for ptype in self.ds.particle_types:
+            self._read_particles(ptype, is_checkpoint, warpx_extra_real_fields)
+        
+        # Additional WarpX particle information (used to set up species)
+        with open(self.ds.output_dir + "/WarpXHeader", 'r') as f:
+
+            # skip to the end, where species info is written out
+            line = f.readline()
+            while line and line != ')\n':
+                line = f.readline()
+            line = f.readline()
+
+            # Read in the species information
+            species_id = 0
+            while line:
+                line = line.strip().split()
+                charge = YTQuantity(float(line[0]), "C")
+                mass = YTQuantity(float(line[1]), "kg")
+                charge_name = 'particle%.1d_charge' % species_id
+                mass_name = 'particle%.1d_mass' % species_id
+                self.parameters[charge_name] = charge
+                self.parameters[mass_name] = mass
+                line = f.readline()
+                species_id += 1
+    
+def _skip_line(line):
+    if len(line) == 0:
+        return True
+    if line[0] == '\n':
+        return True
+    if line[0] == "=":
+        return True
+    if line[0] == ' ':
+        return True
+    return False
+
+
+class WarpXDataset(BoxlibDataset):
+
+    _index_class = WarpXHierarchy
+    _field_info_class = WarpXFieldInfo
+
+    def __init__(self, output_dir,
+                 cparam_filename="inputs",
+                 fparam_filename="probin",
+                 dataset_type='boxlib_native',
+                 storage_filename=None,
+                 units_override=None,
+                 unit_system="mks"):
+
+        super(WarpXDataset, self).__init__(output_dir,
+                                           cparam_filename,
+                                           fparam_filename,
+                                           dataset_type,
+                                           storage_filename,
+                                           units_override,
+                                           unit_system)
+
+    @classmethod
+    def _is_valid(cls, *args, **kwargs):
+        # fill our args
+        output_dir = args[0]
+        # boxlib datasets are always directories
+        if not os.path.isdir(output_dir): return False
+        header_filename = os.path.join(output_dir, "Header")
+        jobinfo_filename = os.path.join(output_dir, "warpx_job_info")
+        if not os.path.exists(header_filename):
+            # We *know* it's not boxlib if Header doesn't exist.
+            return False
+        if os.path.exists(jobinfo_filename):
+            return True
+        return False
+
+    def _parse_parameter_file(self):
+        super(WarpXDataset, self)._parse_parameter_file()
+        jobinfo_filename = os.path.join(self.output_dir, "warpx_job_info")
+        with open(jobinfo_filename, "r") as f:
+            for line in f.readlines():
+                if _skip_line(line):
+                    continue
+                l = line.strip().split(":")
+                if len(l) == 2:
+                    self.parameters[l[0].strip()] = l[1].strip()
+                l = line.strip().split("=")
+                if len(l) == 2:
+                    self.parameters[l[0].strip()] = l[1].strip()
+                
+        # set the periodicity based on the integer BC runtime parameters
+        is_periodic = self.parameters['geometry.is_periodic'].split()
+        periodicity = [bool(int(val)) for val in is_periodic]
+        self.periodicity = ensure_tuple(periodicity)
+
+        species_list = []
+        species_dirs = glob.glob(self.output_dir + "/particle*")
+        for species in species_dirs:
+            species_list.append(species[len(self.output_dir)+1:])
+        self.particle_types = tuple(species_list)
+        self.particle_types_raw = self.particle_types
+
+
+    def _set_code_unit_attributes(self):
+        setdefaultattr(self, 'length_unit', self.quan(1.0, "m"))
+        setdefaultattr(self, 'mass_unit', self.quan(1.0, "kg"))
+        setdefaultattr(self, 'time_unit', self.quan(1.0, "s"))
+        setdefaultattr(self, 'velocity_unit', self.quan(1.0, "m/s"))

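[Editor's note] The ``BoxLibParticleHeader`` class above turns the header's field counts into numpy dtypes for the integer and real parts of each particle record. A self-contained round-trip sketch of that layout, with synthetic data (field names match the defaults in the diff; this illustrates the record layout, it is not the frontend's reader):

```python
import numpy as np

# Mirror the dtype construction for a 3D plotfile-mode file: no integer
# block, base reals are the xyz positions, plus one extra real component
# ("particle_weight" here, as WarpX writes).
real_type = np.float64
real_fields = ["particle_position_x", "particle_position_y",
               "particle_position_z", "particle_weight"]
particle_real_dtype = np.dtype([(f, real_type) for f in real_fields])

# Write two synthetic particle records and read them back the same way
# the IO handler does with np.fromfile on the binary DATA files.
particles = np.array([(0.1, 0.2, 0.3, 1.0),
                      (0.4, 0.5, 0.6, 2.0)], dtype=particle_real_dtype)
raw = particles.tobytes()
recovered = np.frombuffer(raw, dtype=particle_real_dtype)
print(recovered["particle_weight"])   # [1. 2.]
```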
diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/fields.py
--- a/yt/frontends/boxlib/fields.py
+++ b/yt/frontends/boxlib/fields.py
@@ -51,6 +51,66 @@
     return tr
 
 
+class WarpXFieldInfo(FieldInfoContainer):
+
+    known_other_fields = (
+        ("Bx", ("T", ["magnetic_field_x"], None)),
+        ("By", ("T", ["magnetic_field_y"], None)),
+        ("Bz", ("T", ["magnetic_field_z"], None)),
+        ("Ex", ("V/m", ["electric_field_x"], None)),
+        ("Ey", ("V/m", ["electric_field_y"], None)),
+        ("Ez", ("V/m", ["electric_field_z"], None)),
+        ("jx", ("A", ["current_x"], None)),
+        ("jy", ("A", ["current_y"], None)),
+        ("jz", ("A", ["current_z"], None)),
+    )
+
+    known_particle_fields = (
+        ("particle_weight", ("", [], None)),
+        ("particle_position_x", ("m", [], None)),
+        ("particle_position_y", ("m", [], None)),
+        ("particle_position_z", ("m", [], None)),
+        ("particle_velocity_x", ("m/s", [], None)),
+        ("particle_velocity_y", ("m/s", [], None)),
+        ("particle_velocity_z", ("m/s", [], None)),
+    )
+
+    extra_union_fields = (
+        ("kg", "particle_mass"),
+        ("C", "particle_charge"),
+        ("", "particle_ones"),
+    )
+
+    def setup_particle_fields(self, ptype):
+
+        def get_mass(field, data):
+            species_mass = data.ds.index.parameters[ptype + '_mass']
+            return data["particle_weight"]*species_mass
+
+        self.add_field((ptype, "particle_mass"), sampling_type="particle",
+                       function=get_mass,
+                       units="kg")
+
+        def get_charge(field, data):
+            species_charge = data.ds.index.parameters[ptype + '_charge']
+            return data["particle_weight"]*species_charge
+
+        self.add_field((ptype, "particle_charge"), sampling_type="particle",
+                       function=get_charge,
+                       units="C")
+
+        super(WarpXFieldInfo, self).setup_particle_fields(ptype)
+
+
+class NyxFieldInfo(FieldInfoContainer):
+    known_other_fields = ()
+    known_particle_fields = (
+        ("particle_position_x", ("code_length", [], None)),
+        ("particle_position_y", ("code_length", [], None)),
+        ("particle_position_z", ("code_length", [], None)),
+    )
+
+
 class BoxlibFieldInfo(FieldInfoContainer):
     known_other_fields = (
         ("density", (rho_units, ["density"], None)),

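[Editor's note] In ``WarpXFieldInfo`` above, per-particle mass and charge are derived fields: the stored ``particle_weight`` is scaled by the species mass and charge read from ``WarpXHeader``. Stripped of the yt field machinery, the arithmetic is simply the following (the species constants are illustrative values for an electron-like species, not read from a real run):

```python
import numpy as np

# Hypothetical species constants, as they would be parsed from WarpXHeader
# and stashed in the hierarchy's parameters dict (SI units).
species_mass = 9.109e-31     # kg
species_charge = -1.602e-19  # C

# Each macro-particle represents `weight` physical particles.
weight = np.array([1.0e6, 2.5e6, 4.0e6])

particle_mass = weight * species_mass      # kg per macro-particle
particle_charge = weight * species_charge  # C per macro-particle
```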
diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/io.py
--- a/yt/frontends/boxlib/io.py
+++ b/yt/frontends/boxlib/io.py
@@ -1,5 +1,5 @@
 """
-Orion data-file handling functions
+Boxlib data-file handling functions
 
 
 
@@ -81,6 +81,85 @@
                         local_offset += size
         return data
 
+    def _read_particle_selection(self, chunks, selector, fields):
+        chunks = list(chunks)
+        unions = self.ds.particle_unions
+
+        rv = {f: np.array([]) for f in fields}
+        for chunk in chunks:
+            for grid in chunk.objs:
+                for ftype, fname in fields:
+                    if ftype in unions:
+                        for subtype in unions[ftype]:
+                            data = self._read_particles(grid, selector,
+                                                        subtype, fname)
+                            rv[ftype, fname] = np.concatenate((data,
+                                                               rv[ftype, fname]))
+                    else:
+                        data = self._read_particles(grid, selector,
+                                                    ftype, fname)
+                        rv[ftype, fname] = np.concatenate((data,
+                                                           rv[ftype, fname]))
+        return rv
+
+    def _read_particles(self, grid, selector, ftype, name):
+
+        npart = grid._pdata[ftype]["NumberOfParticles"]
+        if npart == 0:
+            return np.array([])
+
+        fn = grid._pdata[ftype]["particle_filename"]
+        offset = grid._pdata[ftype]["offset"]
+        pheader = self.ds.index.particle_headers[ftype]
+        
+        # handle the case that this is an integer field
+        int_fnames = [fname for _, fname in pheader.known_int_fields]
+        if name in int_fnames:
+            ind = int_fnames.index(name)
+            fn = grid._pdata[ftype]["particle_filename"]
+            with open(fn, "rb") as f:
+
+                # read in the position fields for selection
+                f.seek(offset + 
+                       pheader.particle_int_dtype.itemsize * npart)
+                rdata = np.fromfile(f, pheader.real_type, pheader.num_real * npart)
+                x = np.asarray(rdata[0::pheader.num_real], dtype=np.float64)
+                y = np.asarray(rdata[1::pheader.num_real], dtype=np.float64)
+                z = np.asarray(rdata[2::pheader.num_real], dtype=np.float64)
+                mask = selector.select_points(x, y, z, 0.0)
+
+                if mask is None:
+                    return np.array([])
+                
+                # read in the data we want
+                f.seek(offset)
+                idata = np.fromfile(f, pheader.int_type, pheader.num_int * npart)
+                data = np.asarray(idata[ind::pheader.num_int], dtype=np.float64)
+                return data[mask].flatten()
+
+        # handle case that this is a real field
+        real_fnames = [fname for _, fname in pheader.known_real_fields]
+        if name in real_fnames:
+            ind = real_fnames.index(name)
+            with open(fn, "rb") as f:
+
+                # read in the position fields for selection
+                f.seek(offset + 
+                       pheader.particle_int_dtype.itemsize * npart)
+                rdata = np.fromfile(f, pheader.real_type, pheader.num_real * npart)
+                x = np.asarray(rdata[0::pheader.num_real], dtype=np.float64)
+                y = np.asarray(rdata[1::pheader.num_real], dtype=np.float64)
+                z = np.asarray(rdata[2::pheader.num_real], dtype=np.float64)
+                mask = selector.select_points(x, y, z, 0.0)
+
+                if mask is None:
+                    return np.array([])
+
+                data = np.asarray(rdata[ind::pheader.num_real], dtype=np.float64)
+                return data[mask].flatten()
+
+
 class IOHandlerOrion(IOHandlerBoxlib):
     _dataset_type = "orion_native"
 

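[Editor's note] ``_read_particles`` above reads a grid's whole real block with ``np.fromfile`` and then slices out individual components by stride (``rdata[ind::num_real]``). A self-contained sketch of that access pattern on a synthetic buffer, with a simple coordinate cut standing in for ``selector.select_points``:

```python
import numpy as np

num_real = 4  # x, y, z positions plus one extra real component
# Flat record stream for three particles, as np.fromfile would return it:
# [x0, y0, z0, w0, x1, y1, z1, w1, x2, y2, z2, w2]
rdata = np.array([0.1, 0.1, 0.1, 10.0,
                  0.9, 0.9, 0.9, 20.0,
                  0.2, 0.3, 0.4, 30.0])

# Component `ind` of every particle is a strided slice of the stream.
x = rdata[0::num_real]
y = rdata[1::num_real]
z = rdata[2::num_real]
w = rdata[3::num_real]

# Stand-in for the selector: keep particles inside the lower half-box.
mask = (x < 0.5) & (y < 0.5) & (z < 0.5)
print(w[mask])   # [10. 30.]
```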
diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/tests/test_orion.py
--- a/yt/frontends/boxlib/tests/test_orion.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""
-Orion frontend tests
-
-
-
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (c) 2013, yt Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-from yt.testing import \
-    assert_equal, \
-    requires_file, \
-    units_override_check
-from yt.utilities.answer_testing.framework import \
-    requires_ds, \
-    small_patch_amr, \
-    data_dir_load
-from yt.frontends.boxlib.api import OrionDataset
-
-# We don't do anything needing ghost zone generation right now, because these
-# are non-periodic datasets.
-_fields = ("temperature", "density", "velocity_magnitude")
-
-radadvect = "RadAdvect/plt00000"
-@requires_ds(radadvect)
-def test_radadvect():
-    ds = data_dir_load(radadvect)
-    yield assert_equal, str(ds), "plt00000"
-    for test in small_patch_amr(ds, _fields):
-        test_radadvect.__name__ = test.description
-        yield test
-
-rt = "RadTube/plt00500"
-@requires_ds(rt)
-def test_radtube():
-    ds = data_dir_load(rt)
-    yield assert_equal, str(ds), "plt00500"
-    for test in small_patch_amr(ds, _fields):
-        test_radtube.__name__ = test.description
-        yield test
-
-star = "StarParticles/plrd01000"
-@requires_ds(star)
-def test_star():
-    ds = data_dir_load(star)
-    yield assert_equal, str(ds), "plrd01000"
-    for test in small_patch_amr(ds, _fields):
-        test_star.__name__ = test.description
-        yield test
-
-@requires_file(rt)
-def test_OrionDataset():
-    assert isinstance(data_dir_load(rt), OrionDataset)
-
-@requires_file(rt)
-def test_units_override():
-    for test in units_override_check(rt):
-        yield test
-

diff -r c23ff278bd328ac2f2944502c46acdb072be10b5 -r 5c3af8da1dd806d6f5d76466e04feade4b1d6e5f yt/frontends/boxlib/tests/test_outputs.py
--- /dev/null
+++ b/yt/frontends/boxlib/tests/test_outputs.py
@@ -0,0 +1,183 @@
+"""
+Boxlib frontend tests
+
+
+
+"""
+
+#-----------------------------------------------------------------------------
+# Copyright (c) 2017, yt Development Team.
+#
+# Distributed under the terms of the Modified BSD License.
+#
+# The full license is in the file COPYING.txt, distributed with this software.
+#-----------------------------------------------------------------------------
+
+from yt.testing import \
+    assert_equal, \
+    requires_file, \
+    units_override_check
+from yt.utilities.answer_testing.framework import \
+    requires_ds, \
+    small_patch_amr, \
+    data_dir_load
+from yt.frontends.boxlib.api import \
+    OrionDataset, \
+    NyxDataset, \
+    WarpXDataset
+import numpy as np
+
+# We don't do anything needing ghost zone generation right now, because these
+# are non-periodic datasets.
+_orion_fields = ("temperature", "density", "velocity_magnitude")
+_nyx_fields = ("Ne", "Temp", "particle_mass_density")
+_warpx_fields = ("Ex", "By", "jz")
+
+radadvect = "RadAdvect/plt00000"
+@requires_ds(radadvect)
+def test_radadvect():
+    ds = data_dir_load(radadvect)
+    yield assert_equal, str(ds), "plt00000"
+    for test in small_patch_amr(ds, _orion_fields):
+        test_radadvect.__name__ = test.description
+        yield test
+
+rt = "RadTube/plt00500"
+@requires_ds(rt)
+def test_radtube():
+    ds = data_dir_load(rt)
+    yield assert_equal, str(ds), "plt00500"
+    for test in small_patch_amr(ds, _orion_fields):
+        test_radtube.__name__ = test.description
+        yield test
+
+star = "StarParticles/plrd01000"
+@requires_ds(star)
+def test_star():
+    ds = data_dir_load(star)
+    yield assert_equal, str(ds), "plrd01000"
+    for test in small_patch_amr(ds, _orion_fields):
+        test_star.__name__ = test.description
+        yield test
+
+LyA = "Nyx_LyA/plt00000"
+@requires_ds(LyA)
+def test_LyA():
+    ds = data_dir_load(LyA)
+    yield assert_equal, str(ds), "plt00000"
+    for test in small_patch_amr(ds, _nyx_fields,
+                                input_center="c",
+                                input_weight="Ne"):
+        test_LyA.__name__ = test.description
+        yield test
+
+@requires_file(LyA)
+def test_nyx_particle_io():
+    ds = data_dir_load(LyA)
+
+    grid = ds.index.grids[0]
+    npart_grid_0 = 7908  # read directly from the header
+    assert_equal(grid['particle_position_x'].size, npart_grid_0)
+    assert_equal(grid['DM', 'particle_position_y'].size, npart_grid_0)
+    assert_equal(grid['all', 'particle_position_z'].size, npart_grid_0)
+
+    ad = ds.all_data()
+    npart = 32768  # read directly from the header
+    assert_equal(ad['particle_velocity_x'].size, npart)
+    assert_equal(ad['DM', 'particle_velocity_y'].size, npart)
+    assert_equal(ad['all', 'particle_velocity_z'].size, npart)
+
+    assert(np.all(ad['particle_mass'] == ad['particle_mass'][0]))
+
+    left_edge = ds.arr([0.0, 0.0, 0.0], 'code_length')
+    right_edge = ds.arr([4.0, 4.0, 4.0], 'code_length')
+    center = 0.5*(left_edge + right_edge)
+
+    reg = ds.region(center, left_edge, right_edge)
+
+    assert(np.all(np.logical_and(reg['particle_position_x'] <= right_edge[0], 
+                                 reg['particle_position_x'] >= left_edge[0])))
+
+    assert(np.all(np.logical_and(reg['particle_position_y'] <= right_edge[1], 
+                                 reg['particle_position_y'] >= left_edge[1])))
+
+    assert(np.all(np.logical_and(reg['particle_position_z'] <= right_edge[2], 
+                                 reg['particle_position_z'] >= left_edge[2])))
+
+langmuir = "LangmuirWave/plt00020"
+@requires_ds(langmuir)
+def test_langmuir():
+    ds = data_dir_load(langmuir)
+    yield assert_equal, str(ds), "plt00020"
+    for test in small_patch_amr(ds, _warpx_fields, 
+                                input_center="c",
+                                input_weight="Ex"):
+        test_langmuir.__name__ = test.description
+        yield test
+
+plasma = "PlasmaAcceleration/plt00030"
+@requires_ds(plasma)
+def test_plasma():
+    ds = data_dir_load(plasma)
+    yield assert_equal, str(ds), "plt00030"
+    for test in small_patch_amr(ds, _warpx_fields,
+                                input_center="c",
+                                input_weight="Ex"):
+        test_plasma.__name__ = test.description
+        yield test
+
+@requires_file(plasma)
+def test_warpx_particle_io():
+    ds = data_dir_load(plasma)
+    grid = ds.index.grids[0]
+
+    # read directly from the header
+    npart0_grid_0 = 344
+    npart1_grid_0 = 69632
+
+    assert_equal(grid['particle0', 'particle_position_x'].size, npart0_grid_0)
+    assert_equal(grid['particle1', 'particle_position_y'].size, npart1_grid_0)
+    assert_equal(grid['all', 'particle_position_z'].size, npart0_grid_0 + npart1_grid_0)
+
+    # read directly from the header
+    npart0 = 1360
+    npart1 = 802816
+    ad = ds.all_data()
+    assert_equal(ad['particle0', 'particle_velocity_x'].size, npart0)
+    assert_equal(ad['particle1', 'particle_velocity_y'].size, npart1)
+    assert_equal(ad['all', 'particle_velocity_z'].size, npart0 + npart1)
+
+    assert(np.all(ad['particle1', 'particle_mass'] == ad['particle1', 'particle_mass'][0]))
+    assert(np.all(ad['particle0', 'particle_mass'] == ad['particle0', 'particle_mass'][0]))
+
+    left_edge = ds.arr([-7.5e-5, -7.5e-5, -7.5e-5], 'code_length')
+    right_edge = ds.arr([2.5e-5, 2.5e-5, 2.5e-5], 'code_length')
+    center = 0.5*(left_edge + right_edge)
+
+    reg = ds.region(center, left_edge, right_edge)
+
+    assert(np.all(np.logical_and(reg['particle_position_x'] <= right_edge[0], 
+                                 reg['particle_position_x'] >= left_edge[0])))
+
+    assert(np.all(np.logical_and(reg['particle_position_y'] <= right_edge[1], 
+                                 reg['particle_position_y'] >= left_edge[1])))
+
+    assert(np.all(np.logical_and(reg['particle_position_z'] <= right_edge[2], 
+                                 reg['particle_position_z'] >= left_edge[2])))
+
+@requires_file(rt)
+def test_OrionDataset():
+    assert isinstance(data_dir_load(rt), OrionDataset)
+
+@requires_file(LyA)
+def test_NyxDataset():
+    assert isinstance(data_dir_load(LyA), NyxDataset)
+
+@requires_file(plasma)
+def test_WarpXDataset():
+    assert isinstance(data_dir_load(plasma), WarpXDataset)
+
+@requires_file(rt)
+def test_units_override():
+    for test in units_override_check(rt):
+        yield test
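The region-containment assertions repeated in test_nyx_particle_io and test_warpx_particle_io above reduce to a per-axis bounds check on the particle coordinates. A minimal NumPy-only sketch of that check, using hypothetical positions in place of a yt data region (positions_within is an illustrative helper, not part of yt):

```python
import numpy as np

def positions_within(positions, left_edge, right_edge):
    """Return True if every particle lies inside the box
    [left_edge, right_edge] along each axis, mirroring the per-axis
    np.logical_and checks in the tests above."""
    positions = np.asarray(positions)
    left_edge = np.asarray(left_edge)
    right_edge = np.asarray(right_edge)
    # Broadcasting compares every particle against the bounds on all axes.
    inside = np.logical_and(positions >= left_edge, positions <= right_edge)
    return bool(np.all(inside))

# Hypothetical positions standing in for reg['particle_position_*']
pts = np.array([[0.5, 1.0, 3.9],
                [2.0, 2.5, 0.1]])
print(positions_within(pts, [0.0, 0.0, 0.0], [4.0, 4.0, 4.0]))  # True
print(positions_within(pts, [0.0, 0.0, 0.0], [1.0, 4.0, 4.0]))  # False
```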

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.


