<html><body>
<p>1 new commit in yt:</p>
<p><a href="https://bitbucket.org/yt_analysis/yt/commits/b61ebaeaae2e/">https://bitbucket.org/yt_analysis/yt/commits/b61ebaeaae2e/</a></p>
<p>Changeset: b61ebaeaae2e<br/>
Branch: yt<br/>
User: MatthewTurk<br/>
Date: 2016-05-11 18:30:09+00:00<br/>
Summary: Merged in hyschive/yt-hyschive (pull request #2163)</p>
<p>Updating the _skeleton frontend</p>
<p>Affected #: 4 files</p>
<p>diff -r 87f89bdc4c237e8319a35335c5f8875c517f63c8 -r b61ebaeaae2e05c7e2aa6757ff23465c5cf2681d doc/source/developing/creating_frontend.rst</p>
<pre>--- a/doc/source/developing/creating_frontend.rst
+++ b/doc/source/developing/creating_frontend.rst
@@ -34,7 +34,8 @@
 `yt-dev <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_!
 
 To get started, make a new directory in ``yt/frontends`` with the name
-of your code.  Copying the contents of the ``yt/frontends/_skeleton``
+of your code and add the name into ``yt/frontends/api.py``.
+Copying the contents of the ``yt/frontends/_skeleton``
 directory will add a lot of boilerplate for the required classes and
 methods that are needed.  In particular, you'll have to create a
 subclass of ``Dataset`` in the data_structures.py file. This subclass</pre>
<p>diff -r 87f89bdc4c237e8319a35335c5f8875c517f63c8 -r b61ebaeaae2e05c7e2aa6757ff23465c5cf2681d yt/frontends/_skeleton/data_structures.py</p>
<pre>--- a/yt/frontends/_skeleton/data_structures.py
+++ b/yt/frontends/_skeleton/data_structures.py
@@ -14,6 +14,8 @@
 #-----------------------------------------------------------------------------
 
 import os
+import numpy as np
+import weakref
 
 from yt.data_objects.grid_patch import \
     AMRGridPatch
@@ -25,15 +27,12 @@
 class SkeletonGrid(AMRGridPatch):
     _id_offset = 0
 
-    def __init__(self, id, index, level, start, dimensions):
+    def __init__(self, id, index, level):
         AMRGridPatch.__init__(self, id, filename=index.index_filename,
                               index=index)
-        self.Parent = []
+        self.Parent = None
         self.Children = []
         self.Level = level
-        self.start_index = start.copy()
-        self.stop_index = self.start_index + dimensions
-        self.ActiveDimensions = dimensions.copy()
 
     def __repr__(self):
         return "SkeletonGrid_%04i (%s)" % (self.id, self.ActiveDimensions)</pre>
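<p>To see what the revised constructor does, here is a minimal stand-in that runs without yt installed; AMRGridPatch is replaced by a stub, so this only illustrates the new signature and the change of Parent from a list to a single (possibly None) reference:</p>

```python
# Stub standing in for yt's AMRGridPatch, so the sketch is self-contained.
class AMRGridPatch:
    def __init__(self, id, filename=None, index=None):
        self.id = id
        self.filename = filename
        self.index = index

class SkeletonGrid(AMRGridPatch):
    _id_offset = 0

    # New signature: start and dimensions are no longer passed in.
    def __init__(self, id, index, level):
        AMRGridPatch.__init__(self, id, filename=None, index=index)
        self.Parent = None   # single parent grid, or None for a root grid
        self.Children = []   # child grids are appended during index setup
        self.Level = level

g = SkeletonGrid(0, index=None, level=0)
```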
<pre>@@ -43,14 +42,17 @@
     def __init__(self, ds, dataset_type='skeleton'):
         self.dataset_type = dataset_type
+        self.dataset = weakref.proxy(ds)
         # for now, the index file is the dataset!
         self.index_filename = self.dataset.parameter_filename
         self.directory = os.path.dirname(self.index_filename)
+        # float type for the simulation edges and must be float64 now
+        self.float_type = np.float64
         GridIndex.__init__(self, ds, dataset_type)
 
     def _detect_output_fields(self):
         # This needs to set a self.field_list that contains all the available,
-        # on-disk fields.
+        # on-disk fields. No derived fields should be defined here.
         # NOTE: Each should be a tuple, where the first element is the on-disk
         # fluid type or particle type.  Convention suggests that the on-disk
         # fluid type is usually the dataset_type and the on-disk particle type
@@ -69,7 +71,7 @@
         #   self.grid_particle_count     (N, 1) <= int
         #   self.grid_levels             (N, 1) <= int
         #   self.grids                   (N, 1) <= grid objects
-        #
+        #   self.max_level = self.grid_levels.max()
         pass
 
     def _populate_grid_objects(self):</pre>
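<p>A runnable sketch of the array bookkeeping described in that hunk, with made-up values for a hypothetical two-grid dataset (the shapes follow the commented checklist above; the numbers are invented):</p>

```python
import numpy as np

# Invented two-grid example: one root grid and one refined subgrid.
num_grids = 2
grid_left_edge = np.zeros((num_grids, 3), dtype="float64")
grid_right_edge = np.ones((num_grids, 3), dtype="float64")
grid_dimensions = np.full((num_grids, 3), 16, dtype="int32")
grid_particle_count = np.zeros((num_grids, 1), dtype="int32")
grid_levels = np.array([[0], [1]], dtype="int32")

# The line the skeleton now suggests: record the deepest refinement level.
max_level = grid_levels.max()
```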
<pre>@@ -94,6 +96,8 @@
         Dataset.__init__(self, filename, dataset_type,
                          units_override=units_override)
         self.storage_filename = storage_filename
+        # refinement factor between a grid and its subgrid
+        # self.refine_by = 2
 
     def _set_code_unit_attributes(self):
         # This is where quantities are created that represent the various
@@ -114,10 +118,11 @@
     def _parse_parameter_file(self):
         # This needs to set up the following items.  Note that these are all
         # assumed to be in code units; domain_left_edge and domain_right_edge
-        # will be updated to be in code units at a later time.  This includes
-        # the cosmological parameters.
+        # will be converted to YTArray automatically at a later time.
+        # This includes the cosmological parameters.
         #
-        #   self.unique_identifier
+        #   self.unique_identifier      <= unique identifier for the dataset
+        #                                  being read (e.g., UUID or ST_CTIME)
         #   self.parameters             <= full of code-specific items of use
         #   self.domain_left_edge       <= array of float64
         #   self.domain_right_edge      <= array of float64</pre>
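<p>A hedged sketch of the attributes _parse_parameter_file is expected to set, using placeholder values for an imaginary unit-box dataset; the attribute names follow the commented checklist, and every value below is invented:</p>

```python
import time
import numpy as np

class FakeSkeletonDataset:
    """Toy stand-in for a Dataset subclass; yt itself is not imported."""

    def __init__(self, filename):
        self.parameter_filename = filename
        self._parse_parameter_file()

    def _parse_parameter_file(self):
        # unique identifier for the dataset being read (e.g., UUID or ST_CTIME)
        self.unique_identifier = int(time.time())
        self.parameters = {}  # code-specific items of use
        # plain float64 arrays here; yt converts them to YTArray later
        self.domain_left_edge = np.zeros(3, dtype="float64")
        self.domain_right_edge = np.ones(3, dtype="float64")
        self.dimensionality = 3
        self.domain_dimensions = np.array([32, 32, 32], dtype="int64")
        self.periodicity = (True, True, True)
        self.current_time = 0.0
        # non-cosmological defaults for the cosmological parameters
        self.cosmological_simulation = 0
        self.current_redshift = 0.0
        self.omega_lambda = 0.0
        self.omega_matter = 0.0
        self.hubble_constant = 0.0

ds = FakeSkeletonDataset("placeholder.dat")
```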
<p>diff -r 87f89bdc4c237e8319a35335c5f8875c517f63c8 -r b61ebaeaae2e05c7e2aa6757ff23465c5cf2681d yt/frontends/_skeleton/fields.py</p>
<pre>--- a/yt/frontends/_skeleton/fields.py
+++ b/yt/frontends/_skeleton/fields.py
@@ -31,13 +31,14 @@
         # ( "name", ("units", ["fields", "to", "alias"], # "display_name")),
     )
 
-    def __init__(self, ds):
-        super(SkeletonFieldInfo, self).__init__(ds)
+    def __init__(self, ds, field_list):
+        super(SkeletonFieldInfo, self).__init__(ds, field_list)
         #  If you want, you can check self.field_list
 
     def setup_fluid_fields(self):
         # Here we do anything that might need info about the dataset.
-        # You can use self.alias, self.add_output_field and self.add_field .
+        # You can use self.alias, self.add_output_field (for on-disk fields)
+        # and self.add_field (for derived fields).
         pass
 
     def setup_particle_fields(self, ptype):</pre>
<p>diff -r 87f89bdc4c237e8319a35335c5f8875c517f63c8 -r b61ebaeaae2e05c7e2aa6757ff23465c5cf2681d yt/frontends/_skeleton/io.py</p>
<pre>--- a/yt/frontends/_skeleton/io.py
+++ b/yt/frontends/_skeleton/io.py
@@ -42,9 +42,18 @@
         # dict gets returned at the end and it should be flat, with selected
         # data.  Note that if you're reading grid data, you might need to
         # special-case a grid selector object.
+        # Also note that "chunks" is a generator for multiple chunks, each of
+        # which contains a list of grids. The returned numpy arrays should be
+        # in 64-bit float and contiguous along the z direction. Therefore, for
+        # a C-like input array with the dimension [x][y][z] or a
+        # Fortran-like input array with the dimension (z,y,x), a matrix
+        # transpose is required (e.g., using np_array.transpose() or
+        # np_array.swapaxes(0,2)).
         pass
 
     def _read_chunk_data(self, chunk, fields):
-        # This reads the data from a single chunk, and is only used for
-        # caching.
+        # This reads the data from a single chunk without doing any selection,
+        # and is only used for caching data that might be used by multiple
+        # different selectors later. For instance, this can speed up ghost zone
+        # computation.
         pass</pre>
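<p>The axis-ordering note above can be demonstrated with plain numpy: a hypothetical on-disk buffer stored as (z, y, x) is swapped to [x][y][z] order and converted to a contiguous float64 array, which is what the IO handler is expected to return:</p>

```python
import numpy as np

# Hypothetical on-disk buffer stored Fortran-style as (z, y, x); the
# values are a simple ramp so the reordering is easy to verify.
nz, ny, nx = 2, 3, 4
disk_data = np.arange(nz * ny * nx, dtype="float32").reshape(nz, ny, nx)

# Swap the z and x axes, then make a float64, C-contiguous copy so the
# result is indexed [x][y][z] and contiguous along z.
selected = np.ascontiguousarray(disk_data.swapaxes(0, 2), dtype="float64")

assert selected.shape == (nx, ny, nz)
assert selected[3, 2, 1] == disk_data[1, 2, 3]
```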
<p>Repository URL: <a href="https://bitbucket.org/yt_analysis/yt/">https://bitbucket.org/yt_analysis/yt/</a></p>
<p>—</p>
<p>This is a commit notification from bitbucket.org. You are receiving this email because commit notifications are enabled for this repository.</p>
</body></html>