[yt-svn] commit/yt: 15 new changesets

commits-noreply at bitbucket.org
Wed Mar 16 09:12:23 PDT 2016


15 new commits in yt:

https://bitbucket.org/yt_analysis/yt/commits/1ba2cb92f516/
Changeset:   1ba2cb92f516
Branch:      yt
User:        chummels
Date:        2016-02-07 06:37:20+00:00
Summary:     Modularizing AbsorptionSpectrum to consistently deal with virtual bins for subgrid line deposition.  First step.
Affected #:  1 file

diff -r be66247e2a8a1454ad2e7c29ef3473bf17e39b6e -r 1ba2cb92f51628671ee5f10f08b9ba286f6209d2 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -317,15 +317,19 @@
             #    10; this will assure we don't get spikes in the deposited
             #    spectra from uneven numbers of vbins per bin
             resolution = thermal_width / self.bin_width 
-            vbin_width = self.bin_width / \
-                         10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
-            vbin_width = vbin_width.in_units('angstrom').d
+            n_vbins_per_bin = 10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
+            vbin_width = self.bin_width.d / n_vbins_per_bin
 
             # the virtual window into which the line is deposited initially 
             # spans a region of 5 thermal_widths, but this may expand
-            n_vbins = np.ceil(5*thermal_width.d/vbin_width)
-            vbin_window_width = n_vbins*vbin_width
+            #n_vbins = np.ceil(5*thermal_width.d/vbin_width)
 
+            # the virtual window into which the line is deposited initially
+            # spans at least five times the thermal width but can expand as 
+            # needed
+            #n_vbins = 5 * np.ceil(resolution.d) * (self.bin_width.d / vbin_width)
+            #vbin_window_width = n_vbins*vbin_width # in angstroms
+            
             if (thermal_width < self.bin_width).any():
                 mylog.info(("%d out of %d line components will be " + \
                             "deposited as unresolved lines.") %
@@ -337,18 +341,23 @@
                             (line['label'], line['wavelength']),
                             thermal_width.size)
 
+
             # for a given transition, step through each location in the 
             # observed spectrum where it occurs and deposit a voigt profile
             for i in parallel_objects(valid_lines, njobs=-1):
-                my_vbin_window_width = vbin_window_width[i]
-                my_n_vbins = n_vbins[i]
-                my_vbin_width = vbin_width[i]
+                window_width_in_bins = 2
 
                 while True:
+                    left_index = (center_index[i] - \
+                            window_width_in_bins/2).clip(0, self.n_lambda-1)
+                    right_index = (center_index[i] + \
+                            window_width_in_bins/2).clip(0, self.n_lambda-1)
+                    n_vbins = (right_index - left_index) * n_vbins_per_bin[i]
+                    
                     vbins = \
-                        np.linspace(lambda_1[i]-my_vbin_window_width/2.,
-                                    lambda_1[i]+my_vbin_window_width/2., 
-                                    my_n_vbins, endpoint=False)
+                        np.linspace(self.lambda_field[left_index].d,
+                                    self.lambda_field[right_index].d,
+                                    n_vbins, endpoint=False)
 
                     vbins, vtau = \
                         tau_profile(
@@ -359,34 +368,20 @@
                     # If tau has not dropped below min tau threshold by the
                     # edges (ie the wings), then widen the wavelength 
                     # window and repeat process. 
+                    try: vtau[0] < min_tau and vtau[-1] < min_tau
+                    except: import pdb; pdb.set_trace()
                     if (vtau[0] < min_tau and vtau[-1] < min_tau):
                         break
-                    my_vbin_window_width *= 2
-                    my_n_vbins *= 2
-
-                # identify the extrema of the vbin_window so as to speed
-                # up searching over the entire lambda_field array
-                bins_from_center = np.ceil((my_vbin_window_width/2.) / \
-                                           self.bin_width.d) + 1
-                left_index = (center_index[i] - bins_from_center).clip(0, self.n_lambda)
-                right_index = (center_index[i] + bins_from_center).clip(0, self.n_lambda)
-                window_width = right_index - left_index
-
-                # run digitize to identify which vbins are deposited into which
-                # global lambda bins.
-                # shift global lambda bins over by half a bin width; 
-                # this has the effect of assuring np.digitize will place 
-                # the vbins in the closest bin center.
-                binned = np.digitize(vbins, 
-                                     self.lambda_field[left_index:right_index] \
-                                     + (0.5 * self.bin_width))
+                    window_width_in_bins *= 2
 
                 # numerically integrate the virtual bins to calculate a
                 # virtual equivalent width; then sum the virtual equivalent
                 # widths and deposit into each spectral bin
-                vEW = vtau * my_vbin_width
-                EW = [vEW[binned == j].sum() for j in np.arange(window_width)]
-                EW = np.array(EW)/self.bin_width.d
+                vEW = vtau * vbin_width[i]
+                EW = np.zeros(right_index - left_index)
+                for k in np.arange(right_index - left_index):
+                    EW[k] = vEW[n_vbins_per_bin[i]*k:n_vbins_per_bin[i]*(k+1)].sum()
+                EW = EW/self.bin_width.d
                 self.tau_field[left_index:right_index] += EW
 
                 # write out absorbers to file if the column density of
@@ -412,8 +407,8 @@
 
             del column_density, delta_lambda, lambda_obs, center_index, \
                 thermal_b, thermal_width, lambda_1, cdens, thermb, dlambda, \
-                vlos, resolution, vbin_width, n_vbins, vbin_window_width, \
-                valid_lines, vbins, vtau, vEW
+                vlos, resolution, vbin_width, n_vbins, window_width_in_bins, \
+                n_vbins_per_bin, valid_lines, vbins, vtau, vEW
 
         comm = _get_comm(())
         self.tau_field = comm.mpi_allreduce(self.tau_field, op="sum")
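
For readers following the algorithm rather than the diff, here is a minimal standalone sketch (illustrative names only, not the module's API) of the idea this changeset introduces: each coarse spectral bin is split into an integer number of virtual bins, the optical depth is evaluated on that fine grid, and the virtual equivalent widths are summed back into the coarse bins.

    import numpy as np

    def vbins_per_bin(thermal_width, bin_width, subgrid_resolution=10):
        # require at least `subgrid_resolution` virtual bins per thermal width,
        # rounded up to a power of ten so every coarse bin holds a whole
        # number of virtual bins
        resolution = thermal_width / bin_width
        return int(10 ** np.ceil(np.log10(subgrid_resolution / resolution)).clip(0, np.inf))

    def deposit_virtual_ew(vtau, vbin_width, n_vbins_per_bin, bin_width):
        vEW = vtau * vbin_width                   # equivalent width per virtual bin
        n_coarse = len(vtau) // n_vbins_per_bin
        EW = np.zeros(n_coarse)
        for k in range(n_coarse):
            # sum the virtual bins that land in coarse bin k
            EW[k] = vEW[n_vbins_per_bin * k:n_vbins_per_bin * (k + 1)].sum()
        return EW / bin_width                     # back to an optical depth per bin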


https://bitbucket.org/yt_analysis/yt/commits/30c18566e7f5/
Changeset:   30c18566e7f5
Branch:      yt
User:        chummels
Date:        2016-02-07 08:35:35+00:00
Summary:     Completing the fix to the absorption spectrum analysis module by using a full bin's worth of virtual bins instead of a partial bin's worth, which leads to round-off errors due to changes in numerical precision.
Affected #:  2 files

diff -r 1ba2cb92f51628671ee5f10f08b9ba286f6209d2 -r 30c18566e7f55295fc54aa481f8e23a319245423 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -52,6 +52,8 @@
     def __init__(self, lambda_min, lambda_max, n_lambda):
         self.n_lambda = n_lambda
         # lambda, flux, and tau are wavelength, flux, and optical depth
+        self.lambda_min = lambda_min
+        self.lambda_max = lambda_max
         self.lambda_field = YTArray(np.linspace(lambda_min, lambda_max, 
                                     n_lambda), "angstrom")
         self.tau_field = None
@@ -281,6 +283,7 @@
                 delta_lambda = line['wavelength'] * field_data['redshift']
             # lambda_obs is central wavelength of line after redshift
             lambda_obs = line['wavelength'] + delta_lambda
+
             # bin index in lambda_field of central wavelength of line after z
             center_index = np.digitize(lambda_obs, self.lambda_field)
 
@@ -295,7 +298,6 @@
 
             # Sanitize units for faster runtime of the tau_profile machinery.
             lambda_0 = line['wavelength'].d  # line's rest frame; angstroms
-            lambda_1 = lambda_obs.d # line's observed frame; angstroms
             cdens = column_density.in_units("cm**-2").d # cm**-2
             thermb = thermal_b.in_cgs().d  # thermal b coefficient; cm / s
             dlambda = delta_lambda.d  # lambda offset; angstroms
@@ -320,56 +322,64 @@
             n_vbins_per_bin = 10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
             vbin_width = self.bin_width.d / n_vbins_per_bin
 
-            # the virtual window into which the line is deposited initially 
-            # spans a region of 5 thermal_widths, but this may expand
-            #n_vbins = np.ceil(5*thermal_width.d/vbin_width)
+            # only locations in the wavelength range of the spectrum
+            # are valid for processing
+            valid_lines = np.where((lambda_obs > self.lambda_min) &
+                                   (lambda_obs < self.lambda_max))[0]
+            filter_lines = (lambda_obs > self.lambda_min) & \
+                           (lambda_obs < self.lambda_max)
 
-            # the virtual window into which the line is deposited initially
-            # spans at least five times the thermal width but can expand as 
-            # needed
-            #n_vbins = 5 * np.ceil(resolution.d) * (self.bin_width.d / vbin_width)
-            #vbin_window_width = n_vbins*vbin_width # in angstroms
-            
-            if (thermal_width < self.bin_width).any():
+            # a note to the user about which lines components are unresolved
+            if (thermal_width[filter_lines] < self.bin_width).any():
                 mylog.info(("%d out of %d line components will be " + \
                             "deposited as unresolved lines.") %
-                           ((thermal_width < self.bin_width).sum(), 
-                            thermal_width.size))
+                           ((thermal_width[filter_lines] < self.bin_width).sum(), 
+                            thermal_width[filter_lines].size))
 
-            valid_lines = np.arange(len(thermal_width))
-            pbar = get_pbar("Adding line - %s [%f A]: " % \
-                            (line['label'], line['wavelength']),
-                            thermal_width.size)
-
+            # provide a progress bar with information about lines processsed
+            if len(valid_lines) == 0:
+                pbar = get_pbar("No absorbers in wavelength range for line - %s [%f A]: " % \
+                                (line['label'], line['wavelength']),
+                                len(valid_lines))
+            else:
+                pbar = get_pbar("Adding line - %s [%f A]: " % \
+                                (line['label'], line['wavelength']),
+                                len(valid_lines))
 
             # for a given transition, step through each location in the 
             # observed spectrum where it occurs and deposit a voigt profile
-            for i in parallel_objects(valid_lines, njobs=-1):
+            for i, lixel in parallel_objects(enumerate(valid_lines), njobs=-1):
+
+                # the virtual window into which the line is deposited initially 
+                # spans a region of 2 coarse spectral bins 
+                # (one on each side of the center_index) but the window
+                # can expand as necessary to assure everything is accounted for
                 window_width_in_bins = 2
 
                 while True:
-                    left_index = (center_index[i] - \
+                    left_index = (center_index[lixel] - \
                             window_width_in_bins/2).clip(0, self.n_lambda-1)
-                    right_index = (center_index[i] + \
+                    right_index = (center_index[lixel] + \
                             window_width_in_bins/2).clip(0, self.n_lambda-1)
-                    n_vbins = (right_index - left_index) * n_vbins_per_bin[i]
+                    n_vbins = (right_index - left_index) * \
+                              n_vbins_per_bin[lixel]
                     
+                    # the array of virtual bins in lambda space
                     vbins = \
                         np.linspace(self.lambda_field[left_index].d,
                                     self.lambda_field[right_index].d,
                                     n_vbins, endpoint=False)
 
+                    # the virtual bins and their corresponding opacities
                     vbins, vtau = \
                         tau_profile(
-                            lambda_0, line['f_value'], line['gamma'], thermb[i],
-                            cdens[i], delta_lambda=dlambda[i],
-                            lambda_bins=vbins)
+                            lambda_0, line['f_value'], line['gamma'], 
+                            thermb[lixel], cdens[lixel], 
+                            delta_lambda=dlambda[lixel], lambda_bins=vbins)
 
                     # If tau has not dropped below min tau threshold by the
                     # edges (ie the wings), then widen the wavelength 
                     # window and repeat process. 
-                    try: vtau[0] < min_tau and vtau[-1] < min_tau
-                    except: import pdb; pdb.set_trace()
                     if (vtau[0] < min_tau and vtau[-1] < min_tau):
                         break
                     window_width_in_bins *= 2
@@ -377,10 +387,11 @@
                 # numerically integrate the virtual bins to calculate a
                 # virtual equivalent width; then sum the virtual equivalent
                 # widths and deposit into each spectral bin
-                vEW = vtau * vbin_width[i]
+                vEW = vtau * vbin_width[lixel]
                 EW = np.zeros(right_index - left_index)
                 for k in np.arange(right_index - left_index):
-                    EW[k] = vEW[n_vbins_per_bin[i]*k:n_vbins_per_bin[i]*(k+1)].sum()
+                    EW[k] = vEW[n_vbins_per_bin[lixel] * k: \
+                                n_vbins_per_bin[lixel] * (k + 1)].sum()
                 EW = EW/self.bin_width.d
                 self.tau_field[left_index:right_index] += EW
 
@@ -392,23 +403,22 @@
                    cdens[i] >= line['label_threshold']:
 
                     if use_peculiar_velocity:
-                        peculiar_velocity = vlos[i]
+                        peculiar_velocity = vlos[lixel]
                     else:
                         peculiar_velocity = 0.0
                     self.absorbers_list.append({'label': line['label'],
-                                                'wavelength': (lambda_0 + dlambda[i]),
-                                                'column_density': column_density[i],
-                                                'b_thermal': thermal_b[i],
-                                                'redshift': field_data['redshift'][i],
-                                                'redshift_eff': field_data['redshift_eff'][i],
+                                                'wavelength': (lambda_0 + dlambda[lixel]),
+                                                'column_density': column_density[lixel],
+                                                'b_thermal': thermal_b[lixel],
+                                                'redshift': field_data['redshift'][lixel],
+                                                'redshift_eff': field_data['redshift_eff'][lixel],
                                                 'v_pec': peculiar_velocity})
-                pbar.update(i)
+                pbar.update(lixel)
             pbar.finish()
 
             del column_density, delta_lambda, lambda_obs, center_index, \
-                thermal_b, thermal_width, lambda_1, cdens, thermb, dlambda, \
-                vlos, resolution, vbin_width, n_vbins, window_width_in_bins, \
-                n_vbins_per_bin, valid_lines, vbins, vtau, vEW
+                thermal_b, thermal_width, cdens, thermb, dlambda, \
+                vlos, resolution, vbin_width, n_vbins_per_bin, valid_lines 
 
         comm = _get_comm(())
         self.tau_field = comm.mpi_allreduce(self.tau_field, op="sum")

diff -r 1ba2cb92f51628671ee5f10f08b9ba286f6209d2 -r 30c18566e7f55295fc54aa481f8e23a319245423 yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py
@@ -185,9 +185,9 @@
         wavelength, flux = sp.make_spectrum('lightray.h5')
         total_tau.append((lambda_bin_width * sp.tau_field).sum())
         
-    # assure that the total tau values are all within 1e-5 of each other
+    # assure that the total tau values are all within 1e-3 of each other
     for tau in total_tau:
-        assert_almost_equal(tau, total_tau[0], 5)
+        assert_almost_equal(tau, total_tau[0], 3)
 
     # clean up
     os.chdir(curdir)
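
The wavelength filter introduced earlier in this changeset is easy to reproduce on its own; a small illustration with made-up numbers (not the module's code) of how the boolean mask and the index list relate:

    import numpy as np

    lambda_min, lambda_max = 900.0, 1800.0                   # spectrum bounds, angstrom
    lambda_obs = np.array([850.0, 1215.67, 1700.0, 1900.0])  # redshifted line centers

    # lines outside the spectral range are skipped entirely at this stage
    filter_lines = (lambda_obs > lambda_min) & (lambda_obs < lambda_max)
    valid_lines = np.where(filter_lines)[0]
    print(valid_lines)   # -> [1 2]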


https://bitbucket.org/yt_analysis/yt/commits/3619ec65e6e8/
Changeset:   3619ec65e6e8
Branch:      yt
User:        chummels
Date:        2016-02-07 09:44:17+00:00
Summary:     Adding a line to allow the subgrid deposition window to extend to the edges of the spectrum without failure.
Affected #:  1 file

diff -r 30c18566e7f55295fc54aa481f8e23a319245423 -r 3619ec65e6e85063d6df7162bc339401494bf145 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -378,9 +378,12 @@
                             delta_lambda=dlambda[lixel], lambda_bins=vbins)
 
                     # If tau has not dropped below min tau threshold by the
-                    # edges (ie the wings), then widen the wavelength 
-                    # window and repeat process. 
-                    if (vtau[0] < min_tau and vtau[-1] < min_tau):
+                    # edges (ie the wings), then widen the wavelength
+                    # window and repeat process. Alternatively, it is OK
+                    # for the wings to not be low if you're bumping up against
+                    # the edge of the spectrum
+                    if ((vtau[0] < min_tau or left_index == 0) and \
+                        (vtau[-1] < min_tau or right_index == self.n_lambda-1)):
                         break
                     window_width_in_bins *= 2
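
A hedged sketch of the window-growing loop as it stands after this changeset; profile_fn stands in for the module's tau_profile call and all names are illustrative, but the doubling and the edge-aware stopping condition are the same:

    import numpy as np

    def grow_window(center_index, n_vbins_per_bin, lambda_min, bin_width,
                    n_lambda, profile_fn, min_tau=1e-3):
        window_width_in_bins = 2
        while True:
            left_index = max(center_index - window_width_in_bins // 2, 0)
            right_index = min(center_index + window_width_in_bins // 2, n_lambda - 1)
            n_vbins = (right_index - left_index) * n_vbins_per_bin
            vbins = np.linspace(lambda_min + bin_width * left_index,
                                lambda_min + bin_width * right_index,
                                n_vbins, endpoint=False)
            vtau = profile_fn(vbins)
            # stop once the optical depth has fallen below min_tau in both
            # wings, or once a wing has been pinned to the spectrum edge
            if (vtau[0] < min_tau or left_index == 0) and \
               (vtau[-1] < min_tau or right_index == n_lambda - 1):
                return left_index, right_index, vbins, vtau
            window_width_in_bins *= 2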
 


https://bitbucket.org/yt_analysis/yt/commits/240678087227/
Changeset:   240678087227
Branch:      yt
User:        chummels
Date:        2016-02-07 16:59:11+00:00
Summary:     Adding more comments to the algorithm on absorption deposition.
Affected #:  1 file

diff -r 3619ec65e6e85063d6df7162bc339401494bf145 -r 2406780872276c4948b71470ccd8c112ab28a6a1 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -353,7 +353,10 @@
                 # the virtual window into which the line is deposited initially 
                 # spans a region of 2 coarse spectral bins 
                 # (one on each side of the center_index) but the window
-                # can expand as necessary to assure everything is accounted for
+                # can expand as necessary.
+                # it will continue to expand until the tau value in the far
+                # edge of the wings is less than the min_tau value or it 
+                # reaches the edge of the spectrum
                 window_width_in_bins = 2
 
                 while True:


https://bitbucket.org/yt_analysis/yt/commits/4faa6a53d4bb/
Changeset:   4faa6a53d4bb
Branch:      yt
User:        chummels
Date:        2016-02-17 03:47:52+00:00
Summary:     Updating to address comments in PR by Britton.
Affected #:  1 file

diff -r 2406780872276c4948b71470ccd8c112ab28a6a1 -r 4faa6a53d4bb10e7fb7eaa18e3f44322fc626e42 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -324,10 +324,9 @@
 
             # only locations in the wavelength range of the spectrum
             # are valid for processing
-            valid_lines = np.where((lambda_obs > self.lambda_min) &
-                                   (lambda_obs < self.lambda_max))[0]
             filter_lines = (lambda_obs > self.lambda_min) & \
                            (lambda_obs < self.lambda_max)
+            valid_lines = np.where(filter_lines)[0]
 
             # a note to the user about which lines components are unresolved
             if (thermal_width[filter_lines] < self.bin_width).any():
@@ -419,7 +418,7 @@
                                                 'redshift': field_data['redshift'][lixel],
                                                 'redshift_eff': field_data['redshift_eff'][lixel],
                                                 'v_pec': peculiar_velocity})
-                pbar.update(lixel)
+                pbar.update(i)
             pbar.finish()
 
             del column_density, delta_lambda, lambda_obs, center_index, \


https://bitbucket.org/yt_analysis/yt/commits/7ad05d74fe85/
Changeset:   7ad05d74fe85
Branch:      yt
User:        chummels
Date:        2016-03-05 00:28:26+00:00
Summary:     Adding alternative means of generating equivalent center_index instead of using np.digitize()
Affected #:  1 file

diff -r 4faa6a53d4bb10e7fb7eaa18e3f44322fc626e42 -r 7ad05d74fe8550307870aa391c098ee6e2c58b11 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -284,8 +284,21 @@
             # lambda_obs is central wavelength of line after redshift
             lambda_obs = line['wavelength'] + delta_lambda
 
-            # bin index in lambda_field of central wavelength of line after z
-            center_index = np.digitize(lambda_obs, self.lambda_field)
+            # we want to know the bin index in the lambda_field array
+            # where each line has its central wavelength after being
+            # redshifted.  however, because we don't know a priori how wide
+            # a line will be (ie DLAs), we have to include bin indices 
+            # *outside* the spectral range of the AbsorptionSpectrum 
+            # object.  Thus, we find the "equivalent" bin index, which
+            # may be <0 or >the size of the array.  In the end, we deposit
+            # the bins that actually overlap with the AbsorptionSpectrum's
+            # range in lambda.
+            
+            # this equation gives us the "equivalent" bin index for each line
+            # if it were placed into the self.lambda_field array
+            center_index = (lambda_obs.in_units('Angstrom').d - self.lambda_min) \
+                            / self.bin_width.d
+            center_index = np.ceil(center_index).astype('int')
 
             # thermal broadening b parameter
             thermal_b =  np.sqrt((2 * boltzmann_constant_cgs *
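
The difference between np.digitize and the "equivalent" bin index is easiest to see on a toy spectrum; a small, self-contained illustration (the numbers are made up, not taken from the module):

    import numpy as np

    lambda_min, bin_width, n_lambda = 900.0, 0.1, 9000
    lambda_field = lambda_min + bin_width * np.arange(n_lambda)

    # one line blueward of the spectrum, one inside it, one far redward
    lambda_obs = np.array([899.5, 1215.67, 1950.0])

    # np.digitize saturates at 0 and n_lambda, so out-of-range lines lose
    # their distance from the spectrum edge
    print(np.digitize(lambda_obs, lambda_field))       # -> [0 3157 9000]

    # the "equivalent" index keeps going negative / past the end of the
    # array, which is what the expanding deposition window needs
    center_index = np.ceil((lambda_obs - lambda_min) / bin_width).astype('int')
    print(center_index)                                # -> [-5 3157 10500]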


https://bitbucket.org/yt_analysis/yt/commits/2e8f66efa881/
Changeset:   2e8f66efa881
Branch:      yt
User:        chummels
Date:        2016-03-05 16:39:45+00:00
Summary:     Making the absorption spectrum parse all deposited lines, regardless of whether they fall in the desired spectral range.
Affected #:  1 file

diff -r 7ad05d74fe8550307870aa391c098ee6e2c58b11 -r 2e8f66efa881bdd57ce2b7c54f13ca064d0305ba yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -337,16 +337,17 @@
 
             # only locations in the wavelength range of the spectrum
             # are valid for processing
-            filter_lines = (lambda_obs > self.lambda_min) & \
-                           (lambda_obs < self.lambda_max)
-            valid_lines = np.where(filter_lines)[0]
+            #filter_lines = (lambda_obs > self.lambda_min) & \
+            #               (lambda_obs < self.lambda_max)
+            #valid_lines = np.where(filter_lines)[0]
+            valid_lines = np.arange(len(thermal_width))
 
             # a note to the user about which lines components are unresolved
-            if (thermal_width[filter_lines] < self.bin_width).any():
+            if (thermal_width < self.bin_width).any():
                 mylog.info(("%d out of %d line components will be " + \
                             "deposited as unresolved lines.") %
-                           ((thermal_width[filter_lines] < self.bin_width).sum(), 
-                            thermal_width[filter_lines].size))
+                           ((thermal_width < self.bin_width).sum(), 
+                            thermal_width.size))
 
             # provide a progress bar with information about lines processsed
             if len(valid_lines) == 0:
@@ -373,16 +374,16 @@
 
                 while True:
                     left_index = (center_index[lixel] - \
-                            window_width_in_bins/2).clip(0, self.n_lambda-1)
+                            window_width_in_bins/2)
                     right_index = (center_index[lixel] + \
-                            window_width_in_bins/2).clip(0, self.n_lambda-1)
+                            window_width_in_bins/2)
                     n_vbins = (right_index - left_index) * \
                               n_vbins_per_bin[lixel]
                     
                     # the array of virtual bins in lambda space
                     vbins = \
-                        np.linspace(self.lambda_field[left_index].d,
-                                    self.lambda_field[right_index].d,
+                        np.linspace(self.lambda_min + self.bin_width.d * left_index, 
+                                    self.lambda_min + self.bin_width.d * right_index, 
                                     n_vbins, endpoint=False)
 
                     # the virtual bins and their corresponding opacities
@@ -394,11 +395,8 @@
 
                     # If tau has not dropped below min tau threshold by the
                     # edges (ie the wings), then widen the wavelength
-                    # window and repeat process. Alternatively, it is OK
-                    # for the wings to not be low if you're bumping up against
-                    # the edge of the spectrum
-                    if ((vtau[0] < min_tau or left_index == 0) and \
-                        (vtau[-1] < min_tau or right_index == self.n_lambda-1)):
+                    # window and repeat process. 
+                    if ((vtau[0] < min_tau) and (vtau[-1] < min_tau)):
                         break
                     window_width_in_bins *= 2
 
@@ -407,11 +405,34 @@
                 # widths and deposit into each spectral bin
                 vEW = vtau * vbin_width[lixel]
                 EW = np.zeros(right_index - left_index)
-                for k in np.arange(right_index - left_index):
+                EW_indices = np.arange(left_index, right_index)
+                for k, val in enumerate(EW_indices):
                     EW[k] = vEW[n_vbins_per_bin[lixel] * k: \
                                 n_vbins_per_bin[lixel] * (k + 1)].sum()
                 EW = EW/self.bin_width.d
-                self.tau_field[left_index:right_index] += EW
+
+                # only deposit EW bins that actually intersect the original
+                # spectral wavelength range (i.e. lambda_field)
+
+                # if EW bins are fully in the original spectral range,
+                # just deposit into tau_field
+                if ((left_index < self.n_lambda) and (right_index >= 0)):
+                    self.tau_field[left_index:right_index] += EW
+
+                # if EW bins only catch the right edge of the original
+                # spectral range
+                elif (left_index < self.n_lambda):
+                    self.tau_field[left_index:self.n_lambda-1] += \
+                        EW[self.n_lambda-1-left_index:]
+
+                # if EW bins only catch the left edge of the original
+                # spectral range
+                elif (right_index >= 0):
+                    self.tau_field[:right_index] += EW[:right_index]
+
+                # if EW bins don't intersect the original spectral range at all
+                else:
+                    continue
 
                 # write out absorbers to file if the column density of
                 # an absorber is greater than the specified "label_threshold" 


https://bitbucket.org/yt_analysis/yt/commits/abc78533c84c/
Changeset:   abc78533c84c
Branch:      yt
User:        chummels
Date:        2016-03-11 00:04:21+00:00
Summary:     Protecting against the situation when velocity magnitude == 0.
Affected #:  1 file

diff -r 2e8f66efa881bdd57ce2b7c54f13ca064d0305ba -r abc78533c84c4e14d902b7634cfcc125f21b1dee yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
--- a/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
+++ b/yt/analysis_modules/cosmological_observation/light_ray/light_ray.py
@@ -480,6 +480,9 @@
 
                     sub_vel_mag = sub_ray['velocity_magnitude']
                     cos_theta = np.dot(line_of_sight, sub_vel) / sub_vel_mag
+                    # Protect against stituations where velocity mag is exactly
+                    # zero, in which case zero / zero = NaN.
+                    cos_theta = np.nan_to_num(cos_theta)
                     redshift_dopp = \
                         (1 + sub_vel_mag * cos_theta / speed_of_light_cgs) / \
                          np.sqrt(1 - sub_vel_mag**2 / speed_of_light_cgs**2) - 1
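
The failure mode being guarded against here is simply 0/0; a tiny, self-contained illustration (array shapes are made up):

    import numpy as np

    line_of_sight = np.array([1.0, 0.0, 0.0])
    sub_vel = np.zeros((3, 4))                        # four cells, all at rest
    sub_vel_mag = np.sqrt((sub_vel ** 2).sum(axis=0))

    # 0 / 0 yields NaN, which would poison the Doppler-redshift expression
    with np.errstate(invalid='ignore'):
        cos_theta = np.dot(line_of_sight, sub_vel) / sub_vel_mag

    # map NaN back to 0 so stationary cells get zero Doppler shift
    cos_theta = np.nan_to_num(cos_theta)
    print(cos_theta)                                  # -> all zeros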


https://bitbucket.org/yt_analysis/yt/commits/6644456b3976/
Changeset:   6644456b3976
Branch:      yt
User:        chummels
Date:        2016-03-11 04:00:21+00:00
Summary:     Finally fixed the boundaries for depositing the features back into the original spectrum.
Affected #:  1 file

diff -r abc78533c84c4e14d902b7634cfcc125f21b1dee -r 6644456b39767349c68894712f4ce8f1bf2815d2 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -414,25 +414,45 @@
                 # only deposit EW bins that actually intersect the original
                 # spectral wavelength range (i.e. lambda_field)
 
-                # if EW bins are fully in the original spectral range,
-                # just deposit into tau_field
-                if ((left_index < self.n_lambda) and (right_index >= 0)):
-                    self.tau_field[left_index:right_index] += EW
+                # if EW bins don't intersect the original spectral range at all
+                # then skip the deposition
+                if ((left_index >= self.n_lambda) or \
+                    (right_index < 0)):
+                    pbar.update(i)
+                    continue
+
+                #else:
+                #    intersect_left_index = np.max(left_index, 0)
+                #    intersect_right_index = np.min(right_index, self.n_lambda-1)
+                #    intersect_range = intersect_right_index - intersect_left_index
+                #    self.tau_field[intersect_left_index:intersect_right_index]
+                #        += EW[
+
+
+                # if EW bins only catch the left edge of the original
+                # spectral range
+                elif (left_index < 0) and (right_index < self.n_lambda):
+                    print "Left Edge: %s %s" % (left_index, right_index)
+                    self.tau_field[0:right_index] += EW[-left_index:]
 
                 # if EW bins only catch the right edge of the original
                 # spectral range
-                elif (left_index < self.n_lambda):
+                elif (left_index >= 0) and (right_index >= self.n_lambda):
+                    print "Right Edge: %s %s" % (left_index, right_index)
                     self.tau_field[left_index:self.n_lambda-1] += \
-                        EW[self.n_lambda-1-left_index:]
+                        EW[:self.n_lambda-1-left_index]
 
-                # if EW bins only catch the left edge of the original
-                # spectral range
-                elif (right_index >= 0):
-                    self.tau_field[:right_index] += EW[:right_index]
+                # if EW bins cover the whole original spectral range
+                # but extend beyond both the left and right edges
+                elif (left_index < 0) and (right_index >= self.n_lambda):
+                    print "Both Edges: %s %s" % (left_index, right_index)
+                    self.tau_field[:] += EW[-left_index:self.n_lambda-left_index]
 
-                # if EW bins don't intersect the original spectral range at all
+                # if EW bins are fully in the original spectral range,
+                # just deposit into tau_field
                 else:
-                    continue
+                    #print "Inside: %s %s" % (left_index, right_index)
+                    self.tau_field[left_index:right_index] += EW
 
                 # write out absorbers to file if the column density of
                 # an absorber is greater than the specified "label_threshold" 


https://bitbucket.org/yt_analysis/yt/commits/5b85f49ec16d/
Changeset:   5b85f49ec16d
Branch:      yt
User:        chummels
Date:        2016-03-11 23:49:20+00:00
Summary:     Removing all the conditionals and replacing them with a simple equation that works in all 4 cases.  Timing indicates this is ~2% slower but much easier to read.
Affected #:  1 file

diff -r 6644456b39767349c68894712f4ce8f1bf2815d2 -r 5b85f49ec16de3198fbcc0f135c1f99c79af84e7 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -421,38 +421,16 @@
                     pbar.update(i)
                     continue
 
-                #else:
-                #    intersect_left_index = np.max(left_index, 0)
-                #    intersect_right_index = np.min(right_index, self.n_lambda-1)
-                #    intersect_range = intersect_right_index - intersect_left_index
-                #    self.tau_field[intersect_left_index:intersect_right_index]
-                #        += EW[
-
-
-                # if EW bins only catch the left edge of the original
-                # spectral range
-                elif (left_index < 0) and (right_index < self.n_lambda):
-                    print "Left Edge: %s %s" % (left_index, right_index)
-                    self.tau_field[0:right_index] += EW[-left_index:]
-
-                # if EW bins only catch the right edge of the original
-                # spectral range
-                elif (left_index >= 0) and (right_index >= self.n_lambda):
-                    print "Right Edge: %s %s" % (left_index, right_index)
-                    self.tau_field[left_index:self.n_lambda-1] += \
-                        EW[:self.n_lambda-1-left_index]
-
-                # if EW bins cover the whole original spectral range
-                # but extend beyond both the left and right edges
-                elif (left_index < 0) and (right_index >= self.n_lambda):
-                    print "Both Edges: %s %s" % (left_index, right_index)
-                    self.tau_field[:] += EW[-left_index:self.n_lambda-left_index]
-
-                # if EW bins are fully in the original spectral range,
-                # just deposit into tau_field
+                # otherwise, determine how much of the original spectrum
+                # is intersected by the expanded line window to be deposited, 
+                # and deposit the Equivalent Width data into that intersecting
+                # window in the original spectrum's tau
                 else:
-                    #print "Inside: %s %s" % (left_index, right_index)
-                    self.tau_field[left_index:right_index] += EW
+                    intersect_left_index = max(left_index, 0)
+                    intersect_right_index = min(right_index, self.n_lambda-1)
+                    self.tau_field[intersect_left_index:intersect_right_index] \
+                        += EW[(intersect_left_index - left_index): \
+                              (intersect_right_index - left_index)]
 
                 # write out absorbers to file if the column density of
                 # an absorber is greater than the specified "label_threshold" 
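
The single intersection formula that replaces the four branches can be exercised in isolation; a minimal sketch with made-up sizes (same arithmetic as the changeset, illustrative names):

    import numpy as np

    def deposit(tau_field, EW, left_index, right_index):
        # EW covers the (possibly out-of-range) window [left_index, right_index)
        n_lambda = tau_field.size
        if left_index >= n_lambda or right_index < 0:
            return tau_field                          # window misses the spectrum
        intersect_left = max(left_index, 0)
        intersect_right = min(right_index, n_lambda - 1)
        tau_field[intersect_left:intersect_right] += \
            EW[intersect_left - left_index:intersect_right - left_index]
        return tau_field

    # a 6-bin window hanging off the left edge of a 10-bin spectrum
    tau = np.zeros(10)
    deposit(tau, np.ones(6), -2, 4)
    print(tau)                                        # only bins 0-3 are touched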


https://bitbucket.org/yt_analysis/yt/commits/828166b57860/
Changeset:   828166b57860
Branch:      yt
User:        chummels
Date:        2016-03-11 23:58:20+00:00
Summary:     Removing unnecessary references to lixels.
Affected #:  1 file

diff -r 5b85f49ec16de3198fbcc0f135c1f99c79af84e7 -r 828166b57860a21bdad01eb3384520db86f1f95f yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -361,7 +361,7 @@
 
             # for a given transition, step through each location in the 
             # observed spectrum where it occurs and deposit a voigt profile
-            for i, lixel in parallel_objects(enumerate(valid_lines), njobs=-1):
+            for i in parallel_objects(valid_lines, njobs=-1):
 
                 # the virtual window into which the line is deposited initially 
                 # spans a region of 2 coarse spectral bins 
@@ -373,12 +373,12 @@
                 window_width_in_bins = 2
 
                 while True:
-                    left_index = (center_index[lixel] - \
+                    left_index = (center_index[i] - \
                             window_width_in_bins/2)
-                    right_index = (center_index[lixel] + \
+                    right_index = (center_index[i] + \
                             window_width_in_bins/2)
                     n_vbins = (right_index - left_index) * \
-                              n_vbins_per_bin[lixel]
+                              n_vbins_per_bin[i]
                     
                     # the array of virtual bins in lambda space
                     vbins = \
@@ -390,8 +390,8 @@
                     vbins, vtau = \
                         tau_profile(
                             lambda_0, line['f_value'], line['gamma'], 
-                            thermb[lixel], cdens[lixel], 
-                            delta_lambda=dlambda[lixel], lambda_bins=vbins)
+                            thermb[i], cdens[i], 
+                            delta_lambda=dlambda[i], lambda_bins=vbins)
 
                     # If tau has not dropped below min tau threshold by the
                     # edges (ie the wings), then widen the wavelength
@@ -403,12 +403,12 @@
                 # numerically integrate the virtual bins to calculate a
                 # virtual equivalent width; then sum the virtual equivalent
                 # widths and deposit into each spectral bin
-                vEW = vtau * vbin_width[lixel]
+                vEW = vtau * vbin_width[i]
                 EW = np.zeros(right_index - left_index)
                 EW_indices = np.arange(left_index, right_index)
                 for k, val in enumerate(EW_indices):
-                    EW[k] = vEW[n_vbins_per_bin[lixel] * k: \
-                                n_vbins_per_bin[lixel] * (k + 1)].sum()
+                    EW[k] = vEW[n_vbins_per_bin[i] * k: \
+                                n_vbins_per_bin[i] * (k + 1)].sum()
                 EW = EW/self.bin_width.d
 
                 # only deposit EW bins that actually intersect the original
@@ -440,15 +440,15 @@
                    cdens[i] >= line['label_threshold']:
 
                     if use_peculiar_velocity:
-                        peculiar_velocity = vlos[lixel]
+                        peculiar_velocity = vlos[i]
                     else:
                         peculiar_velocity = 0.0
                     self.absorbers_list.append({'label': line['label'],
-                                                'wavelength': (lambda_0 + dlambda[lixel]),
-                                                'column_density': column_density[lixel],
-                                                'b_thermal': thermal_b[lixel],
-                                                'redshift': field_data['redshift'][lixel],
-                                                'redshift_eff': field_data['redshift_eff'][lixel],
+                                                'wavelength': (lambda_0 + dlambda[i]),
+                                                'column_density': column_density[i],
+                                                'b_thermal': thermal_b[i],
+                                                'redshift': field_data['redshift'][i],
+                                                'redshift_eff': field_data['redshift_eff'][i],
                                                 'v_pec': peculiar_velocity})
                 pbar.update(i)
             pbar.finish()


https://bitbucket.org/yt_analysis/yt/commits/d5fa4a18e9ed/
Changeset:   d5fa4a18e9ed
Branch:      yt
User:        chummels
Date:        2016-03-12 00:14:50+00:00
Summary:     Cleaning up old references to valid_lines, since we process all lines now.
Affected #:  1 file

diff -r 828166b57860a21bdad01eb3384520db86f1f95f -r d5fa4a18e9ed4e6183edf8e68e6aaf166506ff71 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -335,11 +335,8 @@
             n_vbins_per_bin = 10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
             vbin_width = self.bin_width.d / n_vbins_per_bin
 
-            # only locations in the wavelength range of the spectrum
-            # are valid for processing
-            #filter_lines = (lambda_obs > self.lambda_min) & \
-            #               (lambda_obs < self.lambda_max)
-            #valid_lines = np.where(filter_lines)[0]
+            # we process every line, and only after we know where it falls
+            # do we decide whether or not to deposit it in the original spectrum
             valid_lines = np.arange(len(thermal_width))
 
             # a note to the user about which lines components are unresolved
@@ -350,14 +347,9 @@
                             thermal_width.size))
 
             # provide a progress bar with information about lines processsed
-            if len(valid_lines) == 0:
-                pbar = get_pbar("No absorbers in wavelength range for line - %s [%f A]: " % \
-                                (line['label'], line['wavelength']),
-                                len(valid_lines))
-            else:
-                pbar = get_pbar("Adding line - %s [%f A]: " % \
-                                (line['label'], line['wavelength']),
-                                len(valid_lines))
+            pbar = get_pbar("Adding line - %s [%f A]: " % \
+                            (line['label'], line['wavelength']),
+                            len(valid_lines))
 
             # for a given transition, step through each location in the 
             # observed spectrum where it occurs and deposit a voigt profile


https://bitbucket.org/yt_analysis/yt/commits/a1385280dbe9/
Changeset:   a1385280dbe9
Branch:      yt
User:        chummels
Date:        2016-03-14 21:59:35+00:00
Summary:     Getting rid of valid_lines references in favor of n_absorbers.  Cleaner.
Affected #:  1 file

diff -r d5fa4a18e9ed4e6183edf8e68e6aaf166506ff71 -r a1385280dbe964d352623b28579dee0ea4834c26 yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
--- a/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
+++ b/yt/analysis_modules/absorption_spectrum/absorption_spectrum.py
@@ -283,6 +283,8 @@
                 delta_lambda = line['wavelength'] * field_data['redshift']
             # lambda_obs is central wavelength of line after redshift
             lambda_obs = line['wavelength'] + delta_lambda
+            # the total number of absorbers per transition
+            n_absorbers = len(lambda_obs)
 
             # we want to know the bin index in the lambda_field array
             # where each line has its central wavelength after being
@@ -335,25 +337,20 @@
             n_vbins_per_bin = 10**(np.ceil(np.log10(subgrid_resolution/resolution)).clip(0, np.inf))
             vbin_width = self.bin_width.d / n_vbins_per_bin
 
-            # we process every line, and only after we know where it falls
-            # do we decide whether or not to deposit it in the original spectrum
-            valid_lines = np.arange(len(thermal_width))
-
             # a note to the user about which lines components are unresolved
             if (thermal_width < self.bin_width).any():
                 mylog.info(("%d out of %d line components will be " + \
                             "deposited as unresolved lines.") %
                            ((thermal_width < self.bin_width).sum(), 
-                            thermal_width.size))
+                            n_absorbers))
 
             # provide a progress bar with information about lines processsed
             pbar = get_pbar("Adding line - %s [%f A]: " % \
-                            (line['label'], line['wavelength']),
-                            len(valid_lines))
+                            (line['label'], line['wavelength']), n_absorbers)
 
             # for a given transition, step through each location in the 
             # observed spectrum where it occurs and deposit a voigt profile
-            for i in parallel_objects(valid_lines, njobs=-1):
+            for i in parallel_objects(np.arange(n_absorbers), njobs=-1):
 
                 # the virtual window into which the line is deposited initially 
                 # spans a region of 2 coarse spectral bins 
@@ -447,7 +444,7 @@
 
             del column_density, delta_lambda, lambda_obs, center_index, \
                 thermal_b, thermal_width, cdens, thermb, dlambda, \
-                vlos, resolution, vbin_width, n_vbins_per_bin, valid_lines 
+                vlos, resolution, vbin_width, n_vbins_per_bin
 
         comm = _get_comm(())
         self.tau_field = comm.mpi_allreduce(self.tau_field, op="sum")


https://bitbucket.org/yt_analysis/yt/commits/3d188f2aedfc/
Changeset:   3d188f2aedfc
Branch:      yt
User:        chummels
Date:        2016-03-15 22:56:42+00:00
Summary:     Merging.
Affected #:  178 files

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -30,6 +30,7 @@
 yt/utilities/lib/amr_kdtools.c
 yt/utilities/lib/basic_octree.c
 yt/utilities/lib/bitarray.c
+yt/utilities/lib/bounding_volume_hierarchy.c
 yt/utilities/lib/contour_finding.c
 yt/utilities/lib/depth_first_octree.c
 yt/utilities/lib/element_mappings.c

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 CONTRIBUTING.rst
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -795,8 +795,8 @@
    rather than explicitly. Ex: ``super(SpecialGridSubclass, self).__init__()``
    rather than ``SpecialGrid.__init__()``.
  * Docstrings should describe input, output, behavior, and any state changes
-   that occur on an object.  See the file ``doc/docstring_example.txt`` for a
-   fiducial example of a docstring.
+   that occur on an object.  See :ref:`docstrings` below for a fiducial example
+   of a docstring.
  * Use only one top-level import per line. Unless there is a good reason not to,
    imports should happen at the top of the file, after the copyright blurb.
  * Never compare with ``True`` or ``False`` using ``==`` or ``!=``, always use
@@ -843,7 +843,7 @@
    be avoided, they must be explained, even if they are only to be passed on to
    a nested function.
 
-.. _docstrings
+.. _docstrings:
 
 Docstrings
 ----------

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 MANIFEST.in
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,4 +1,4 @@
-include README* CREDITS COPYING.txt CITATION requirements.txt optional-requirements.txt
+include README* CREDITS COPYING.txt CITATION requirements.txt optional-requirements.txt setupext.py
 include yt/visualization/mapserver/html/map_index.html
 include yt/visualization/mapserver/html/leaflet/*.css
 include yt/visualization/mapserver/html/leaflet/*.js

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 clean.sh
--- a/clean.sh
+++ b/clean.sh
@@ -1,4 +1,1 @@
-find . -name "*.so" -exec rm -v {} \;
-find . -name "*.pyc" -exec rm -v {} \;
-find . -name "__config__.py" -exec rm -v {} \;
-rm -rvf build dist
+hg --config extensions.purge= purge --all yt

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/absorption_spectrum.rst
--- a/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
+++ b/doc/source/analyzing/analysis_modules/absorption_spectrum.rst
@@ -204,7 +204,7 @@
 --------------------------
 
 After loading a spectrum and specifying the properties of the species
-used to generate the spectrum, an apporpriate fit can be generated. 
+used to generate the spectrum, an appropriate fit can be generated. 
 
 .. code-block:: python
 
@@ -232,7 +232,7 @@
 as all lines with the same group number as ``group#[i]``.
 
 The ``fitted_flux`` is an ndarray of the same size as ``flux`` and 
-``wavelength`` that contains the cummulative absorption spectrum generated 
+``wavelength`` that contains the cumulative absorption spectrum generated
 by the lines contained in ``fitted_lines``.
 
 Saving a Spectrum Fit

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/clump_finding.rst
--- a/doc/source/analyzing/analysis_modules/clump_finding.rst
+++ b/doc/source/analyzing/analysis_modules/clump_finding.rst
@@ -7,7 +7,7 @@
 disconnected structures within a dataset.  This works by first creating a 
 single contour over the full range of the contouring field, then continually 
 increasing the lower value of the contour until it reaches the maximum value 
-of the field.  As disconnected structures are identified as separate contoures, 
+of the field.  As disconnected structures are identified as separate contours, 
 the routine continues recursively through each object, creating a hierarchy of 
 clumps.  Individual clumps can be kept or removed from the hierarchy based on 
 the result of user-specified functions, such as checking for gravitational 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
--- a/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
+++ b/doc/source/analyzing/analysis_modules/ellipsoid_analysis.rst
@@ -93,7 +93,7 @@
 ellipsoid's semi-principle axes. "e0" is the largest semi-principle
 axis vector direction that would have magnitude A but normalized.  
 The "tilt" is an angle measured in radians.  It can be best described
-as after the rotation about the z-axis to allign e0 to x in the x-y
+as after the rotation about the z-axis to align e0 to x in the x-y
 plane, and then rotating about the y-axis to align e0 completely to
 the x-axis, the angle remaining to rotate about the x-axis to align
 both e1 to the y-axis and e2 to the z-axis.

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/halo_catalogs.rst
--- a/doc/source/analyzing/analysis_modules/halo_catalogs.rst
+++ b/doc/source/analyzing/analysis_modules/halo_catalogs.rst
@@ -236,7 +236,7 @@
 All callbacks, quantities, and filters are stored in an actions list, 
 meaning that they are executed in the same order in which they were added. 
 This enables the use of simple, reusable, single action callbacks that 
-depend on each other. This also prevents unecessary computation by allowing 
+depend on each other. This also prevents unnecessary computation by allowing 
 the user to add filters at multiple stages to skip remaining analysis if it 
 is not warranted.
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/halo_mass_function.rst
--- a/doc/source/analyzing/analysis_modules/halo_mass_function.rst
+++ b/doc/source/analyzing/analysis_modules/halo_mass_function.rst
@@ -13,7 +13,7 @@
 
 A halo mass function can be created for the halos identified in a cosmological 
 simulation, as well as analytic fits using any arbitrary set of cosmological
-paramters. In order to create a mass function for simulated halos, they must
+parameters. In order to create a mass function for simulated halos, they must
 first be identified (using HOP, FOF, or Rockstar, see 
 :ref:`halo_catalog`) and loaded as a halo dataset object. The distribution of
 halo masses will then be found, and can be compared to the analytic prediction
@@ -78,7 +78,7 @@
   my_halos = load("rockstar_halos/halos_0.0.bin")
   hmf = HaloMassFcn(halos_ds=my_halos)
 
-A simulation dataset can be passed along with additonal cosmological parameters 
+A simulation dataset can be passed along with additional cosmological parameters 
 to create an analytic mass function.
 
 .. code-block:: python
@@ -106,7 +106,7 @@
 -----------------
 
 * **simulation_ds** (*Simulation dataset object*)
-  The loaded simulation dataset, used to set cosmological paramters.
+  The loaded simulation dataset, used to set cosmological parameters.
   Default : None.
 
 * **halos_ds** (*Halo dataset object*)
@@ -130,7 +130,7 @@
 
 * **omega_baryon0**  (*float*)
   The fraction of the universe made up of baryonic matter. This is not 
-  always stored in the datset and should be checked by hand.
+  always stored in the dataset and should be checked by hand.
   Default : 0.0456.
 
 * **hubble0** (*float*)
@@ -140,14 +140,14 @@
 * **sigma8** (*float*)
   The amplitude of the linear power spectrum at z=0 as specified by 
   the rms amplitude of mass-fluctuations in a top-hat sphere of radius 
-  8 Mpc/h. This is not always stored in the datset and should be 
+  8 Mpc/h. This is not always stored in the dataset and should be 
   checked by hand.
   Default : 0.86.
 
 * **primoridal_index** (*float*)
   This is the index of the mass power spectrum before modification by 
   the transfer function. A value of 1 corresponds to the scale-free 
-  primordial spectrum. This is not always stored in the datset and 
+  primordial spectrum. This is not always stored in the dataset and 
   should be checked by hand.
   Default : 1.0.
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/halo_transition.rst
--- a/doc/source/analyzing/analysis_modules/halo_transition.rst
+++ b/doc/source/analyzing/analysis_modules/halo_transition.rst
@@ -40,7 +40,7 @@
 the full halo catalog documentation for further information about
 how to add these quantities and what quantities are available.
 
-You no longer have to iteratre over halos in the ``halo_list``.
+You no longer have to iterate over halos in the ``halo_list``.
 Now a halo dataset can be treated as a regular dataset and 
 all quantities are available by accessing ``all_data``.
 Specifically, all quantities can be accessed as shown:

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/light_cone_generator.rst
--- a/doc/source/analyzing/analysis_modules/light_cone_generator.rst
+++ b/doc/source/analyzing/analysis_modules/light_cone_generator.rst
@@ -50,7 +50,7 @@
   ``use_minimum_datasets`` set to False, this parameter specifies the 
   fraction of the total box size to be traversed before rerandomizing the 
   projection axis and center.  This was invented to allow light cones with 
-  thin slices to sample coherent large cale structure, but in practice does 
+  thin slices to sample coherent large scale structure, but in practice does 
   not work so well.  Try setting this parameter to 1 and see what happens.  
   Default: 0.0.
 
@@ -74,7 +74,7 @@
 
 A light cone solution consists of a list of datasets spanning a redshift 
 interval with a random orientation for each dataset.  A new solution 
-is calcuated with the 
+is calculated with the 
 :func:`~yt.analysis_modules.cosmological_observation.light_cone.light_cone.LightCone.calculate_light_cone_solution`
 function:
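A hedged sketch of that call, with a placeholder simulation parameter file,
redshift interval, and random seed:

.. code-block:: python

   from yt.analysis_modules.cosmological_observation.api import LightCone

   lc = LightCone("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo",
                  near_redshift=0.0, far_redshift=0.1)
   lc.calculate_light_cone_solution(seed=123456789)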
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/analysis_modules/photon_simulator.rst
--- a/doc/source/analyzing/analysis_modules/photon_simulator.rst
+++ b/doc/source/analyzing/analysis_modules/photon_simulator.rst
@@ -347,7 +347,7 @@
   be used to control what vector corresponds to the "up" direction in 
   the resulting event list. 
 * ``psf_sigma`` may be specified to provide a crude representation of 
-  a PSF, and corresponds to the standard deviation (in degress) of a 
+  a PSF, and corresponds to the standard deviation (in degrees) of a 
   Gaussian PSF model. 
 
 Let's just take a quick look at the raw events object:

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/fields.rst
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -94,16 +94,16 @@
 
 There is a third, borderline class of field in yt, as well.  This is the
 "alias" type, where a field on disk (for example, (frontend, ``Density``)) is 
-aliased into an internal yt-name (for example, (``gas``, ``density``)).  The 
+aliased into an internal yt-name (for example, (``gas``, ``density``)). The 
 aliasing process allows universally-defined derived fields to take advantage of 
 internal names, and it also provides an easy way to address what units something 
 should be returned in.  If an aliased field is requested (and aliased fields 
 will always be lowercase, with underscores separating words) it will be returned 
-in CGS units (future versions will enable global defaults to be set for MKS and 
-other unit systems), whereas if the frontend-specific field is requested, it 
-will not undergo any unit conversions from its natural units.  (This rule is 
-occasionally violated for fields which are mesh-dependent, specifically particle 
-masses in some cosmology codes.)
+in the units specified by the unit system of the dataset (see :ref:`unit_systems`
+for a guide to using the different unit systems in yt), whereas if the 
+frontend-specific field is requested, it will not undergo any unit conversions 
+from its natural units.  (This rule is occasionally violated for fields which 
+are mesh-dependent, specifically particle masses in some cosmology codes.)
 
 .. _known-field-types:
 
@@ -125,7 +125,8 @@
 * ``gas`` -- This is the usual default for simulation frontends for fluid
   types.  These fields are typically aliased to the frontend-specific mesh
   fields for grid-based codes or to the deposit fields for particle-based
-  codes.  Default units are in CGS.
+  codes.  Default units are in the unit system of the dataset (see 
+  :ref:`unit_systems` for more information).
 * particle type -- These are particle fields that exist on-disk as written 
   by individual frontends.  If the frontend designates names for these particles
   (i.e. particle type) those names are the field types. 
@@ -240,6 +241,37 @@
    print(ds.field_info["gas", "pressure"].get_units())
    print(ds.field_info["gas", "pressure"].get_source())
 
+.. _bfields:
+
+Magnetic Fields
+---------------
+
+Magnetic fields require special handling, because their dimensions are different in
+different systems of units, in particular between the CGS and MKS (SI) systems of units.
+Superficially, it would appear that they are in the same dimensions, since the units 
+of the magnetic field in the CGS and MKS system are gauss (:math:`\rm{G}`) and tesla 
+(:math:`\rm{T}`), respectively, and numerically :math:`1~\rm{G} = 10^{-4}~\rm{T}`. However, 
+if we examine the base units, we find that they do indeed have different dimensions:
+
+.. math::
+
+    \rm{1~G = 1~\frac{\sqrt{g}}{\sqrt{cm}\cdot{s}}} \\
+    \rm{1~T = 1~\frac{kg}{A\cdot{s^2}}}
+
+It is easier to see the difference between the dimensionality of the magnetic field in the two
+systems in terms of the definition of the magnetic pressure:
+
+.. math::
+
+    p_B = \frac{B^2}{8\pi}~\rm{(cgs)} \\
+    p_B = \frac{B^2}{2\mu_0}~\rm{(MKS)}
+
+where :math:`\mu_0 = 4\pi \times 10^{-7}~\rm{N/A^2}` is the vacuum permeability. yt automatically
+detects on a per-frontend basis what units the magnetic field should be in, and allows conversion between 
+different magnetic field units in the different :ref:`unit systems <unit_systems>` as well. To 
+determine how to set up special magnetic field handling when designing a new frontend, check out 
+:ref:`bfields-frontend`.
+
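As a quick numerical check of the two expressions above: the same field
strength yields the same magnetic pressure once both results are expressed in
a common unit (the values here are purely illustrative).

.. code-block:: python

   import numpy as np
   import yt

   # 1 gauss and its numerical MKS counterpart, 1.0e-4 tesla
   B_cgs = yt.YTQuantity(1.0, "gauss")
   p_cgs = B_cgs**2 / (8.0 * np.pi)          # CGS definition
   print(p_cgs.in_units("erg/cm**3"))        # ~0.0398 erg/cm**3

   B_mks = 1.0e-4                            # tesla, as a plain float
   mu_0 = 4.0e-7 * np.pi                     # vacuum permeability in N/A**2
   p_mks = B_mks**2 / (2.0 * mu_0)           # MKS definition, in Pa
   print(p_mks * 10.0)                       # 1 Pa = 10 erg/cm**3 -> ~0.0398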
 Particle Fields
 ---------------
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/objects.rst
--- a/doc/source/analyzing/objects.rst
+++ b/doc/source/analyzing/objects.rst
@@ -246,6 +246,8 @@
     | A plane normal to a specified vector and intersecting a particular 
       coordinate.
 
+.. _region-reference:
+
 3D Objects
 """"""""""
 
@@ -256,8 +258,6 @@
       creating a Region covering the entire dataset domain.  It is effectively 
       ``ds.region(ds.domain_center, ds.domain_left_edge, ds.domain_right_edge)``.
 
-.. _region-reference:
-
 **Box Region** 
     | Class :class:`~yt.data_objects.selection_data_containers.YTRegion`
     | Usage: ``region(center, left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
@@ -313,7 +313,7 @@
     | A ``cut_region`` is a filter which can be applied to any other data 
       object.  The filter is defined by the conditionals present, which 
       apply cuts to the data in the object.  A ``cut_region`` will work
-      for either particle fields or mesh fields, but not on both simulaneously.
+      for either particle fields or mesh fields, but not on both simultaneously.
       For more detailed information and examples, see :ref:`cut-regions`.
 
 **Collection of Data Objects** 
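The ``cut_region`` filter described above is a one-liner in practice; a
minimal sketch, with an illustrative dataset and threshold:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   ad = ds.all_data()

   # Keep only hot gas; the conditional string refers to the container as "obj".
   hot = ad.cut_region(['obj["temperature"] > 1e6'])
   print(hot["temperature"].min())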

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/parallel_computation.rst
--- a/doc/source/analyzing/parallel_computation.rst
+++ b/doc/source/analyzing/parallel_computation.rst
@@ -49,7 +49,7 @@
 
     $ conda install mpi4py
 
-This will install `MPICH2 <https://www.mpich.org/>`_ and will interefere with
+This will install `MPICH2 <https://www.mpich.org/>`_ and will interfere with
 other MPI libraries that are already installed. Therefore, it is preferable to
 use the ``pip`` installation method.
 
@@ -103,7 +103,7 @@
    p.save()
 
 If this script is run in parallel, two of the most expensive operations -
-finding of the maximum density and the projection will be calulcated in
+finding of the maximum density and the projection will be calculated in
 parallel.  If we save the script as ``my_script.py``, we would run it on 16 MPI
 processes using the following Bash command:
 
@@ -121,7 +121,7 @@
 
 You can set the ``communicator`` keyword in the 
 :func:`~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism` 
-call to a specific MPI communicator to specify a subset of availble MPI 
+call to a specific MPI communicator to specify a subset of available MPI 
 processes.  If none is specified, it defaults to ``COMM_WORLD``.
 
 Creating Parallel and Serial Sections in a Script
@@ -251,7 +251,7 @@
 You may define an empty dictionary and include it as the keyword argument 
 ``storage`` to ``piter()``.  Then, during the processing step, you can access
 this dictionary as the ``sto`` object.  After the 
-loop is finished, the dictionary is re-aggragated from all of the processors, 
+loop is finished, the dictionary is re-aggregated from all of the processors, 
 and you can access the contents:
 
 .. code-block:: python
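   # Illustrative sketch of the storage/piter pattern described above; the
   # dataset glob is a placeholder for your own simulation outputs.
   import yt
   yt.enable_parallelism()

   ts = yt.DatasetSeries("DD????/DD????")
   storage = {}
   for sto, ds in ts.piter(storage=storage):
       sto.result_id = str(ds)
       sto.result = ds.all_data().quantities.extrema("density")

   # After the loop the dictionary has been re-aggregated on all processors.
   if yt.is_root():
       for name, extrema in sorted(storage.items()):
           print(name, extrema)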

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/time_series_analysis.rst
--- a/doc/source/analyzing/time_series_analysis.rst
+++ b/doc/source/analyzing/time_series_analysis.rst
@@ -79,7 +79,7 @@
 Analyzing an Entire Simulation
 ------------------------------
 
-.. note:: Implemented for: Enzo, Gadget, OWLS.
+.. note:: Implemented for the Enzo, Gadget, OWLS, and Exodus II frontends.
 
 The parameter file used to run a simulation contains all the information 
 necessary to know what datasets should be available.  The ``simulation`` 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/2)_Fields_and_unit_conversion.ipynb
--- a/doc/source/analyzing/units/2)_Fields_and_unit_conversion.ipynb
+++ b/doc/source/analyzing/units/2)_Fields_and_unit_conversion.ipynb
@@ -24,9 +24,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "import yt\n",
@@ -41,9 +39,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (maxval)"
@@ -52,9 +48,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dens)"
@@ -63,9 +57,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "mass = dd['cell_mass']\n",
@@ -79,9 +71,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "dx = dd['dx']\n",
@@ -107,9 +97,11 @@
     "* `in_units`\n",
     "* `in_cgs`\n",
     "* `in_mks`\n",
+    "* `in_base`\n",
     "* `convert_to_units`\n",
     "* `convert_to_cgs`\n",
-    "* `convert_to_mks`"
+    "* `convert_to_mks`\n",
+    "* `convert_to_base`"
    ]
   },
   {
@@ -122,9 +114,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dd['density'].in_units('Msun/pc**3'))"
@@ -134,35 +124,73 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "`in_cgs` and `in_mks` return a copy of the array converted CGS and MKS units, respectively:"
+    "`in_cgs` and `in_mks` return a copy of the array converted to CGS and MKS units, respectively:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dd['pressure'])\n",
-    "print ((dd['pressure']).in_cgs())\n",
-    "print ((dd['pressure']).in_mks())"
+    "print (dd['pressure'].in_cgs())\n",
+    "print (dd['pressure'].in_mks())"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The next two methods do in-place conversions:"
+    "`in_cgs` and `in_mks` are just special cases of the more general `in_base`, which can convert a `YTArray` to a number of different unit systems:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print (dd['pressure'].in_base('imperial')) # Imperial/English base units\n",
+    "print (dd['pressure'].in_base('galactic')) # Base units of kpc, Msun, Myr\n",
+    "print (dd['pressure'].in_base('planck')) # Base units in the Planck system\n",
+    "print (dd['pressure'].in_base()) # defaults to cgs if no argument given"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`in_base` can even take a dataset as the argument to convert the `YTArray` into the base units of the dataset:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print (dd['pressure'].in_base(ds)) # The IsolatedGalaxy dataset from above"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "yt defines a number of unit systems, and new unit systems may be added by the user, which can also be passed to `in_base`. To learn more about the unit systems, how to use them with datasets and other objects, and how to add new ones, see [Unit Systems](unit_systems.html)."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The rest of the methods do in-place conversions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
     "dens = dd['density']\n",
@@ -182,9 +210,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dd['density'])\n",
@@ -206,9 +232,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dd['cell_mass'])\n",
@@ -234,9 +258,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "q1 = yt.YTArray(1.0,\"C\") # coulombs\n",
@@ -249,9 +271,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "B1 = yt.YTArray(1.0,\"T\") # tesla\n",
@@ -285,9 +305,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
@@ -317,9 +335,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (dd['cell_mass'].ndarray_view())\n",
@@ -338,9 +354,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "density_values = dd['density'].d\n",
@@ -374,23 +388,19 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "from astropy import units as u\n",
     "\n",
     "x = 42.0 * u.meter\n",
-    "y = yt.YTQuantity.from_astropy(x) "
+    "y = yt.YTQuantity.from_astropy(x)"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (x, type(x))\n",
@@ -400,9 +410,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "a = np.random.random(size=10) * u.km/u.s\n",
@@ -412,9 +420,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (a, type(a))\n",
@@ -431,9 +437,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "temp = dd[\"temperature\"]\n",
@@ -443,9 +447,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (temp, type(temp))\n",
@@ -462,9 +464,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "from yt.utilities.physical_constants import kboltz\n",
@@ -474,9 +474,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (kboltz, type(kboltz))\n",
@@ -493,9 +491,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "k1 = kboltz.to_astropy()\n",
@@ -506,9 +502,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "c = yt.YTArray.from_astropy(a)\n",
@@ -526,9 +520,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "from pint import UnitRegistry\n",
@@ -540,9 +532,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (v, type(v))\n",
@@ -552,9 +542,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "ptemp = temp.to_pint()"
@@ -563,9 +551,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (temp, type(temp))\n",
@@ -582,7 +568,7 @@
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 3
+    "version": 3.0
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
@@ -594,4 +580,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
\ No newline at end of file
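The ``in_base`` and ``convert_to_base`` methods introduced above also work on
arrays created by hand; a small sketch with arbitrary values:

.. code-block:: python

   import yt

   dens = yt.YTArray([1.0e-26, 3.0e-25], "g/cm**3")

   print(dens.in_base("galactic"))   # copy in Msun/kpc**3
   print(dens.in_base("mks"))        # copy in kg/m**3
   dens.convert_to_base("galactic")  # in-place conversion
   print(dens)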

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
--- a/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
+++ b/doc/source/analyzing/units/4)_Comparing_units_from_different_datasets.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Units that refer to the internal simulation coordinate system will have different CGS conversion factors in different datasets.  Depending on how a unit system is implemented, this could add an element of uncertainty when we compare dimensional arrays instances produced by different unit systems.  Fortunately, this is not a problem for `YTArray` since all `YTArray` unit systems are defined in terms of physical CGS units.\n",
+    "Units that refer to the internal simulation coordinate system will have different CGS conversion factors in different datasets.  Depending on how a unit system is implemented, this could add an element of uncertainty when we compare dimensional array instances produced by different unit systems.  Fortunately, this is not a problem for `YTArray` since all `YTArray` unit systems are defined in terms of physical CGS units.\n",
     "\n",
     "As an example, let's load up two enzo datasets from different redshifts in the same cosmology simulation."
    ]
@@ -12,9 +12,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "# A high redshift output from z ~ 8\n",
@@ -29,9 +27,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "# A low redshift output from z ~ 0\n",
@@ -51,9 +47,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (ds2.length_unit.in_cgs()/ds1.length_unit.in_cgs() == (1+ds1.current_redshift)/(1+ds2.current_redshift))"
@@ -69,9 +63,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "print (ds2.length_unit/ds1.length_unit)"
@@ -89,9 +81,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "collapsed": false
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
     "import yt\n",
@@ -120,7 +110,7 @@
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 3
+    "version": 3.0
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
@@ -132,4 +122,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
\ No newline at end of file

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/7)_Unit_Systems.ipynb
--- /dev/null
+++ b/doc/source/analyzing/units/7)_Unit_Systems.ipynb
@@ -0,0 +1,491 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "By default, the results of most calculations in yt are expressed in a \"centimeters-grams-seconds\" (CGS) set of units. This includes the values of derived fields and aliased fields.\n",
+    "\n",
+    "However, this system of units may not be the most natural for a given dataset or an entire class of datasets. For this reason, yt provides the ability to define new unit systems and use them in a way that is highly configurable by the end-user. "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Unit Systems Available in yt"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Several unit systems are already supplied for use within yt. They are:\n",
+    "\n",
+    "* `\"cgs\"`: Centimeters-grams-seconds unit system, with base of `(cm, g, s, K, radian)`. Uses the Gaussian normalization for electromagnetic units. \n",
+    "* `\"mks\"`: Meters-kilograms-seconds unit system, with base of `(m, kg, s, K, radian, A)`.\n",
+    "* `\"imperial\"`: Imperial unit system, with base of `(mile, lbm, s, R, radian)`.\n",
+    "* `\"galactic\"`: \"Galactic\" unit system, with base of `(kpc, Msun, Myr, K, radian)`.\n",
+    "* `\"solar\"`: \"Solar\" unit system, with base of `(AU, Mearth, yr, K, radian)`. \n",
+    "* `\"planck\"`: Planck natural units $(\\hbar = c = G = k_B = 1)$, with base of `(l_pl, m_pl, t_pl, T_pl, radian)`. \n",
+    "* `\"geometrized\"`: Geometrized natural units $(c = G = 1)$, with base of `(l_geom, m_geom, t_geom, K, radian)`. \n",
+    "\n",
+    "We can examine these unit systems by querying them from the `unit_system_registry`. For example, we can look at the default CGS system:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "import yt\n",
+    "yt.unit_system_registry[\"cgs\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can see that we have two sets of units that this system defines: \"base\" and \"other\" units. The \"base\" units are the set of units from which all other units in the system are composed of, such as centimeters, grams, and seconds. The \"other\" units are compound units which fields with specific dimensionalities are converted to, such as ergs, dynes, gauss, and electrostatic units (esu). \n",
+    "\n",
+    "We see a similar setup for the MKS system, except that in this case, there is a base unit of current, the Ampere:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.unit_system_registry[\"mks\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also look at the imperial system:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.unit_system_registry[\"imperial\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "and the \"galactic\" system as well:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.unit_system_registry[\"galactic\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Converting `YTArrays` to the Different Unit Systems"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Choosing a Unit System When Loading a Dataset"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "When a dataset is `load`ed, a unit system may be specified. When this happens, all aliased and derived fields will be converted to the units of the given system. The default is `\"cgs\"`.\n",
+    "\n",
+    "For example, we can specify that the fields from a FLASH dataset can be expressed in MKS units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_flash = yt.load(\"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100\", unit_system=\"mks\")\n",
+    "sp = ds_flash.sphere(\"c\", (100.,\"kpc\"))\n",
+    "print (sp[\"density\"]) # This is an alias for (\"flash\",\"dens\")\n",
+    "print (sp[\"pressure\"]) # This is an alias for (\"flash\",\"pres\")\n",
+    "print (sp[\"angular_momentum_x\"]) # This is a derived field\n",
+    "print (sp[\"kinetic_energy\"]) # This is also a derived field"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Aliased fields are converted to the requested unit system, but the on-disk fields that they correspond to remain in their original (code) units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "print (sp[\"flash\",\"dens\"]) # This is aliased to (\"gas\", \"density\")\n",
+    "print (sp[\"flash\",\"pres\"]) # This is aliased to (\"gas\", \"pressure\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can take an `Enzo` dataset and express it in `\"galactic\"` units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_enzo = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\", unit_system=\"galactic\")\n",
+    "sp = ds_enzo.sphere(\"c\", (20.,\"kpc\"))\n",
+    "print (sp[\"density\"])\n",
+    "print (sp[\"pressure\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also express all of the fields associated with a dataset in that dataset's system of \"code\" units. Though the on-disk fields are already in these units, this means that we can express even derived fields in code units as well:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "ds_chombo = yt.load(\"KelvinHelmholtz/data.0004.hdf5\", unit_system=\"code\")\n",
+    "dd = ds_chombo.all_data()\n",
+    "print (dd[\"density\"])\n",
+    "print (dd[\"kinetic_energy\"])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Defining Fields So That They Can Use the Different Unit Systems"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you define a new derived field for use in yt and wish to make the different unit systems available to it, you will need to specify this when calling `add_field`. Suppose I defined a new field called `\"momentum_x\"` and wanted it to have general units. I would have to set it up in this fashion, using the `unit_system` attribute of the dataset and querying it for the appropriate dimensions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "mom_units = ds_flash.unit_system[\"velocity\"]*ds_flash.unit_system[\"density\"]\n",
+    "def _momentum_x(field, data):\n",
+    "    return data[\"density\"]*data[\"velocity_x\"]\n",
+    "ds_flash.add_field((\"gas\",\"momentum_x\"), function=_momentum_x, units=mom_units)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, the field will automatically be expressed in whatever units the dataset was called with. In this case, it was MKS:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "slc = yt.SlicePlot(ds_flash, \"z\", [\"momentum_x\"], width=(300.,\"kpc\"))\n",
+    "slc.show()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that the momentum density has been plotted with the correct MKS units of $\\mathrm{kg/(m^2\\cdot{s})}$."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you don't create a derived field from a dataset but instead use `yt.add_field`, and still want to use the unit system of that dataset for the units, the only option at present is to set `units=\"auto\"` in the call to `yt.add_field` and the `dimensions` keyword to the correct dimensions for the field:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.units import clight\n",
+    "\n",
+    "def _rest_energy(field, data):\n",
+    "    return data[\"cell_mass\"]*clight*clight\n",
+    "yt.add_field((\"gas\",\"rest_energy\"), function=_rest_energy, units=\"auto\", dimensions=\"energy\")\n",
+    "\n",
+    "ds_flash2 = yt.load(\"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150\", unit_system=\"galactic\")\n",
+    "\n",
+    "sp = ds_flash2.sphere(\"c\", (100.,\"kpc\"))\n",
+    "sp[\"rest_energy\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Obtaining Physical Constants in a Specific Unit System"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each unit system provides the ability to obtain any physical constant in yt's physical constants database in the base units of that system via the `constants` attribute of the unit system. For example, to obtain the value of Newton's universal constant of gravitation in different base units:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "for name in [\"cgs\", \"mks\", \"imperial\", \"planck\", \"geometrized\"]:\n",
+    "    unit_system = yt.unit_system_registry[name]\n",
+    "    print (name, unit_system.constants.G)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Equivalently, one could import a physical constant from the main database and convert it using `in_base`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "from yt.utilities.physical_constants import G\n",
+    "print (G.in_base(\"mks\"))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Defining Your Own Unit System"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You are not limited to using the unit systems already defined by yt. A new unit system can be defined by creating a new `UnitSystem` instance. For example, to create a unit system where the default units are in millimeters, centigrams, and microseconds:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "small_unit_system = yt.UnitSystem(\"small\", \"mm\", \"cg\", \"us\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "where the required arguments are a `name` for the unit system, and the `length_unit`, `mass_unit`, and `time_unit` for the unit system, which serve as the \"base\" units to convert everything else to. Once a unit system instance is created, it is automatically added to the `unit_system_registry` so that it may be used throughout yt:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "yt.unit_system_registry[\"small\"]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that the base units for the dimensions of angle and temperature have been automatically set to radians and Kelvin, respectively. If desired, these can be specified using optional arguments when creating the `UnitSystem` object:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "wacky_unit_system = yt.UnitSystem(\"wacky\", \"mile\", \"kg\", \"day\", temperature_unit=\"R\", angle_unit=\"deg\")\n",
+    "wacky_unit_system"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Though it will rarely be necessary, an MKS-style system of units where a unit of current can be specified as a base unit can also be created using the `current_mks` optional argument:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "mksish_unit_system = yt.UnitSystem(\"mksish\", \"dm\", \"ug\", \"ks\", current_mks_unit=\"mA\")\n",
+    "mksish_unit_system"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Initializing a `UnitSystem` object only sets up the base units. In this case, all fields will be converted to combinations of these base units based on their dimensionality. However, you may want to specify that fields of a given dimensionality use a compound unit by default instead. For example, you might prefer that in the `\"small\"` unit system that pressures be represented in microdynes per millimeter squared. To do this, set these to be the units of the `\"pressure\"` dimension explicitly:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "small_unit_system[\"pressure\"] = \"udyne/mm**2\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can now look at the `small_unit_system` object and see that these units are now defined for pressure in the \"Other Units\" category:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "small_unit_system"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can do the same for a few other dimensionalities:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
+   "source": [
+    "small_unit_system[\"magnetic_field_cgs\"] = \"mG\"\n",
+    "small_unit_system[\"specific_energy\"] = \"cerg/ug\"\n",
+    "small_unit_system[\"velocity\"] = \"cm/s\"\n",
+    "small_unit_system"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.5.1"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
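Assuming a custom system registered this way can be selected by name when
loading a dataset, just as the built-in systems are, its use would look
roughly like this (the dataset path is a placeholder):

.. code-block:: python

   import yt

   small = yt.UnitSystem("small", "mm", "cg", "us")
   small["velocity"] = "cm/s"

   # Hypothetical usage: pass the registered name just like "cgs" or "mks".
   ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100", unit_system="small")
   print(ds.all_data()["velocity_magnitude"])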

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/comoving_units_and_code_units.rst
--- a/doc/source/analyzing/units/comoving_units_and_code_units.rst
+++ b/doc/source/analyzing/units/comoving_units_and_code_units.rst
@@ -12,7 +12,7 @@
 
 yt has additional capabilities to handle the comoving coordinate system used
 internally in cosmological simulations. Simulations that use comoving
-coordinates, all length units have three other counterparts correspoding to
+coordinates, all length units have three other counterparts corresponding to
 comoving units, scaled comoving units, and scaled proper units. In all cases
 'scaled' units refer to scaling by the reduced Hubble parameter - i.e. the length
 unit is what it would be in a universe where Hubble's parameter is 100 km/s/Mpc.

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/index.rst
--- a/doc/source/analyzing/units/index.rst
+++ b/doc/source/analyzing/units/index.rst
@@ -34,6 +34,7 @@
    comparing_units_from_different_datasets
    units_and_plotting
    unit_equivalencies
+   unit_systems
 
 .. note::
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/analyzing/units/unit_systems.rst
--- /dev/null
+++ b/doc/source/analyzing/units/unit_systems.rst
@@ -0,0 +1,7 @@
+.. _unit_systems:
+
+Unit Systems
+============
+
+.. notebook:: 7)_Unit_Systems.ipynb
+:skip_exceptions:

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -60,7 +60,7 @@
 
 # General information about the project.
 project = u'The yt Project'
-copyright = u'2013, the yt Project'
+copyright = u'2013-2016, the yt Project'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/cookbook/camera_movement.py
--- a/doc/source/cookbook/camera_movement.py
+++ b/doc/source/cookbook/camera_movement.py
@@ -1,31 +1,30 @@
 import yt
 import numpy as np
 
-# Follow the simple_volume_rendering cookbook for the first part of this.
-ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # load data
+ds = yt.load("MOOSE_sample_data/out.e-s010")
 sc = yt.create_scene(ds)
 cam = sc.camera
-cam.resolution = (512, 512)
-cam.set_width(ds.domain_width/20.0)
 
-# Find the maximum density location, store it in max_c
-v, max_c = ds.find_max('density')
+# save an image at the starting position
+frame = 0
+sc.save('camera_movement_%04i.png' % frame)
+frame += 1
 
-frame = 0
-# Move to the maximum density location over 5 frames
-for _ in cam.iter_move(max_c, 5):
+# Zoom out by a factor of 2 over 5 frames
+for _ in cam.iter_zoom(0.5, 5):
     sc.render()
-    sc.save('camera_movement_%04i.png' % frame, sigma_clip=8.0)
+    sc.save('camera_movement_%04i.png' % frame)
     frame += 1
 
-# Zoom in by a factor of 10 over 5 frames
-for _ in cam.iter_zoom(10.0, 5):
+# Move to the position [-10.0, 10.0, -10.0] over 5 frames
+pos = ds.arr([-10.0, 10.0, -10.0], 'code_length')
+for _ in cam.iter_move(pos, 5):
     sc.render()
-    sc.save('camera_movement_%04i.png' % frame, sigma_clip=8.0)
+    sc.save('camera_movement_%04i.png' % frame)
     frame += 1
 
-# Do a rotation over 5 frames
+# Rotate by 180 degrees over 5 frames
 for _ in cam.iter_rotate(np.pi, 5):
     sc.render()
-    sc.save('camera_movement_%04i.png' % frame, sigma_clip=8.0)
+    sc.save('camera_movement_%04i.png' % frame)
     frame += 1

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/cookbook/complex_plots.rst
--- a/doc/source/cookbook/complex_plots.rst
+++ b/doc/source/cookbook/complex_plots.rst
@@ -195,7 +195,11 @@
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In this recipe, we move a camera through a domain and take multiple volume
-rendering snapshots.
+rendering snapshots. This recipe uses an unstructured mesh dataset (see
+:ref:`unstructured_mesh_rendering`), which makes it easier to visualize what 
+the Camera is doing, but you can manipulate the Camera for other dataset types 
+in exactly the same manner.
+
 See :ref:`camera_movement` for more information.
 
 .. yt_cookbook:: camera_movement.py

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/cookbook/cosmological_analysis.rst
--- a/doc/source/cookbook/cosmological_analysis.rst
+++ b/doc/source/cookbook/cosmological_analysis.rst
@@ -65,7 +65,7 @@
 
 .. yt_cookbook:: light_ray.py 
 
-This script demontrates how to make a light ray from a single dataset.
+This script demonstrates how to make a light ray from a single dataset.
 
 .. _cookbook-single-dataset-light-ray:
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/cookbook/notebook_tutorial.rst
--- a/doc/source/cookbook/notebook_tutorial.rst
+++ b/doc/source/cookbook/notebook_tutorial.rst
@@ -17,7 +17,7 @@
    $ ipython notebook
 
 Depending on your default web browser and system setup this will open a web
-browser and direct you to the notebook dahboard.  If it does not,  you might
+browser and direct you to the notebook dashboard.  If it does not,  you might
 need to connect to the notebook manually.  See the `IPython documentation
 <http://ipython.org/ipython-doc/stable/notebook/notebook.html#starting-the-notebook-server>`_
 for more details.

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/cookbook/various_lens.py
--- a/doc/source/cookbook/various_lens.py
+++ b/doc/source/cookbook/various_lens.py
@@ -1,5 +1,5 @@
 import yt
-from yt.visualization.volume_rendering.api import Scene, Camera, VolumeSource
+from yt.visualization.volume_rendering.api import Scene, VolumeSource
 import numpy as np
 
 field = ("gas", "density")
@@ -19,7 +19,7 @@
 tf.grey_opacity = True
 
 # Plane-parallel lens
-cam = Camera(ds, lens_type='plane-parallel')
+cam = sc.add_camera(ds, lens_type='plane-parallel')
 # Set the resolution of the final projection.
 cam.resolution = [250, 250]
 # Set the location of the camera to be (x=0.2, y=0.5, z=0.5)
@@ -32,13 +32,12 @@
 # Set the width of the camera, where width[0] and width[1] specify the length and
 # height of final projection, while width[2] in plane-parallel lens is not used.
 cam.set_width(ds.domain_width * 0.5)
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_plane-parallel.png', sigma_clip=6.0)
 
 # Perspective lens
-cam = Camera(ds, lens_type='perspective')
+cam = sc.add_camera(ds, lens_type='perspective')
 cam.resolution = [250, 250]
 # Standing at (x=0.2, y=0.5, z=0.5), we look at the area of x>0.2 (with some open angle
 # specified by camera width) along the positive x direction.
@@ -49,13 +48,12 @@
 # height of the final projection, while width[2] specifies the distance between the
 # camera and the final image.
 cam.set_width(ds.domain_width * 0.5)
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_perspective.png', sigma_clip=6.0)
 
 # Stereo-perspective lens
-cam = Camera(ds, lens_type='stereo-perspective')
+cam = sc.add_camera(ds, lens_type='stereo-perspective')
 # Set the size ratio of the final projection to be 2:1, since stereo-perspective lens
 # will generate the final image with both left-eye and right-eye ones jointed together.
 cam.resolution = [500, 250]
@@ -65,14 +63,13 @@
 cam.set_width(ds.domain_width*0.5)
 # Set the distance between left-eye and right-eye.
 cam.lens.disparity = ds.domain_width[0] * 1.e-3
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_stereo-perspective.png', sigma_clip=6.0)
 
 # Fisheye lens
 dd = ds.sphere(ds.domain_center, ds.domain_width[0] / 10)
-cam = Camera(dd, lens_type='fisheye')
+cam = sc.add_camera(dd, lens_type='fisheye')
 cam.resolution = [250, 250]
 v, c = ds.find_max(field)
 cam.set_position(c - 0.0005 * ds.domain_width)
@@ -80,13 +77,12 @@
                        north_vector=north_vector)
 cam.set_width(ds.domain_width)
 cam.lens.fov = 360.0
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_fisheye.png', sigma_clip=6.0)
 
 # Spherical lens
-cam = Camera(ds, lens_type='spherical')
+cam = sc.add_camera(ds, lens_type='spherical')
 # Set the size ratio of the final projection to be 2:1, since spherical lens
 # will generate the final image with length of 2*pi and height of pi.
 cam.resolution = [500, 250]
@@ -97,13 +93,12 @@
                        north_vector=north_vector)
 # In (stereo)spherical camera, camera width is not used since the entire volume
 # will be rendered
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_spherical.png', sigma_clip=6.0)
 
 # Stereo-spherical lens
-cam = Camera(ds, lens_type='stereo-spherical')
+cam = sc.add_camera(ds, lens_type='stereo-spherical')
 # Set the size ratio of the final projection to be 4:1, since spherical-perspective lens
 # will generate the final image with both left-eye and right-eye ones jointed together.
 cam.resolution = [1000, 250]
@@ -114,7 +109,6 @@
 # will be rendered
 # Set the distance between left-eye and right-eye.
 cam.lens.disparity = ds.domain_width[0] * 1.e-3
-sc.camera = cam
 sc.add_source(vol)
 sc.render()
 sc.save('lens_stereo-spherical.png', sigma_clip=6.0)
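The pattern behind these edits, reduced to its core: ``add_camera`` both
creates the camera and attaches it to the scene, so the separate
``sc.camera = cam`` assignment is no longer needed. A sketch using the
IsolatedGalaxy sample dataset:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sc = yt.create_scene(ds)

   # add_camera returns the camera it attaches to the scene.
   cam = sc.add_camera(ds, lens_type="perspective")
   cam.resolution = [250, 250]

   sc.render()
   sc.save("lens_perspective_example.png", sigma_clip=6.0)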

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/developing/building_the_docs.rst
--- a/doc/source/developing/building_the_docs.rst
+++ b/doc/source/developing/building_the_docs.rst
@@ -158,7 +158,7 @@
 HTML. to simplify versioning of the notebook JSON format, we store notebooks in
 an unevaluated state.
 
-To build the full documentation, you will need yt, jupyter, and all depedencies 
+To build the full documentation, you will need yt, jupyter, and all dependencies 
 needed for yt's analysis modules installed. The following dependencies were 
 used to generate the yt documentation during the release of yt 3.2 in 2015.
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/developing/creating_derived_fields.rst
--- a/doc/source/developing/creating_derived_fields.rst
+++ b/doc/source/developing/creating_derived_fields.rst
@@ -33,7 +33,10 @@
 In this example, the ``density`` field will return data with units of
 ``g/cm**3`` and the ``thermal_energy`` field will return data units of
 ``erg/g``, so the result will automatically have units of pressure,
-``erg/cm**3``.
+``erg/cm**3``. This assumes the unit system is set to the default, which is
+CGS: if a different unit system is selected, the result will be in the same
+dimensions of pressure but different units. See :ref:`unit_systems` for more
+information.
 
 Once we've defined our function, we need to notify yt that the field is
 available.  The :func:`add_field` function is the means of doing this; it has a
@@ -47,7 +50,7 @@
 
 .. code-block:: python
 
-   yt.add_field("pressure", function=_pressure, units="dyne/cm**2")
+   yt.add_field(("gas", "pressure"), function=_pressure, units="dyne/cm**2")
 
 We feed it the name of the field, the name of the function, and the
 units.  Note that the units parameter is a "raw" string, in the format that yt 
@@ -59,20 +62,20 @@
 as in the ``_pressure`` example above.
 
 Field definitions return array data with units. If the field function returns
-data in a dimensionally equivalent unit (e.g. a ``dyne`` versus a ``N``), the
+data in a dimensionally equivalent unit (e.g. a ``"dyne"`` versus a ``"N"``), the
 field data will be converted to the units specified in ``add_field`` before
 being returned in a data object selection. If the field function returns data
-with dimensions that are incompatibible with units specified in ``add_field``,
+with dimensions that are incompatible with units specified in ``add_field``,
 you will see an error. To clear this error, you must ensure that your field
 function returns data in the correct units. Often, this means applying units to
 a dimensionless float or array.
 
-If your field definition influcdes physical constants rather than defining a
+If your field definition includes physical constants rather than defining a
 constant as a float, you can import it from ``yt.utilities.physical_constants``
 to get a predefined version of the constant with the correct units. If you know
 the units your data is supposed to have ahead of time, you can import unit
 symbols like ``g`` or ``cm`` from the ``yt.units`` namespace and multiply the
-return value of your field function by the appropriate compbination of unit
+return value of your field function by the appropriate combination of unit
 symbols for your field's units. You can also convert floats or NumPy arrays into
 :class:`~yt.units.yt_array.YTArray` or :class:`~yt.units.yt_array.YTQuantity`
 instances by making use of the
@@ -82,7 +85,29 @@
 Lastly, if you do not know the units of your field ahead of time, you can
 specify ``units='auto'`` in the call to ``add_field`` for your field.  This will
 automatically determine the appropriate units based on the units of the data
-returned by the field function.
+returned by the field function. This is also a good way to let your derived fields
+be automatically converted to the units of the :ref:`unit system <unit_systems>` in 
+your dataset. 
+
+If ``units='auto'`` is set, it is also required to set the ``dimensions`` keyword
+argument so that error-checking can be done on the derived field to make sure that
+the dimensionality of the returned array and the field are the same:
+
+.. code-block:: python
+
+    import yt
+    from yt.units import dimensions
+    
+    def _pressure(field, data):
+        return (data.ds.gamma - 1.0) * \
+              data["density"] * data["thermal_energy"]
+              
+    yt.add_field(("gas","pressure"), function=_pressure, units="auto",
+                 dimensions=dimensions.pressure)
+
+If ``dimensions`` is not set, an error will be thrown. The ``dimensions`` keyword
+can be a SymPy ``symbol`` object imported from ``yt.units.dimensions``, a compound
+dimension of these, or a string corresponding to one of these objects. 
 
 :func:`add_field` can be invoked in two other ways. The first is by the 
 function decorator :func:`derived_field`. The following code is equivalent to 
@@ -111,10 +136,27 @@
 .. code-block:: python
 
    ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
-   ds.add_field("pressure", function=_pressure, units="dyne/cm**2")
+   ds.add_field(("gas", "pressure"), function=_pressure, units="dyne/cm**2")
 
-If you find yourself using the same custom-defined fields over and over, you
-should put them in your plugins file as described in :ref:`plugin-file`.
+If you specify fields in this way, you can take advantage of the dataset's 
+:ref:`unit system <unit_systems>` to define the units for you, so that
+the units will be returned in the units of that system:
+
+.. code-block:: python
+
+    ds.add_field(("gas", "pressure"), function=_pressure, units=ds.unit_system["pressure"])
+
+Since the :class:`yt.units.unit_systems.UnitSystem` object returns a :class:`yt.units.unit_object.Unit` object when
+queried, you're not limited to specifying units in terms of those already available. You can specify units for fields
+using basic arithmetic if necessary:
+
+.. code-block:: python
+
+    ds.add_field(("gas", "my_acceleration"), function=_my_acceleration,
+                 units=ds.unit_system["length"]/ds.unit_system["time"]**2)
+
+If you find yourself using the same custom-defined fields over and over, you should put them in your plugins file as
+described in :ref:`plugin-file`.
 
 A More Complicated Example
 --------------------------
@@ -148,7 +190,7 @@
        y_hat /= r
        z_hat /= r
        return xv*x_hat + yv*y_hat + zv*z_hat
-   yt.add_field("my_radial_velocity",
+   yt.add_field(("gas","my_radial_velocity"),
                 function=_my_radial_velocity,
                 units="cm/s",
                 take_log=False,
@@ -195,8 +237,11 @@
 ``function``
      This is a function handle that defines the field
 ``units``
-     This is a string that describes the units. Powers must be in
-     Python syntax (``**`` instead of ``^``).
+     This is a string that describes the units, or a query to a :ref:`UnitSystem <unit_systems>` 
+     object, e.g. ``ds.unit_system["energy"]``. Powers must be in Python syntax (``**`` 
+     instead of ``^``). Alternatively, it may be set to ``"auto"`` to have the units 
+     determined automatically. In this case, the ``dimensions`` keyword must be set to the
+     correct dimensions of the field. 
 ``display_name``
      This is a name used in the plots, for instance ``"Divergence of
      Velocity"``.  If not supplied, the ``name`` value is used.
@@ -219,6 +264,9 @@
 ``force_override``
      (*Advanced*) Overrides the definition of an old field if a field with the
      same name has already been defined.
+``dimensions``
+     Set this if ``units="auto"``. Can be either a string or a dimension object from
+     ``yt.units.dimensions``.
 
 Debugging a Derived Field
 -------------------------
@@ -236,7 +284,7 @@
 
 .. code-block:: python
 
-   @yt.derived_field(name = "funthings")
+   @yt.derived_field(name = ("gas","funthings"))
    def funthings(field, data):
        return data["sillythings"] + data["humorousthings"]**2.0
 
@@ -244,7 +292,7 @@
 
 .. code-block:: python
 
-   @yt.derived_field(name = "funthings")
+   @yt.derived_field(name = ("gas","funthings"))
    def funthings(field, data):
        data._debug()
        return data["sillythings"] + data["humorousthings"]**2.0

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/developing/creating_frontend.rst
--- a/doc/source/developing/creating_frontend.rst
+++ b/doc/source/developing/creating_frontend.rst
@@ -104,6 +104,43 @@
 have a display name of ``r"\rho"``.  Omitting the ``"display_name"``
 will result in using a capitalized version of the ``"name"``.
 
+.. _bfields-frontend:
+
+Creating Aliases for Magnetic Fields
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setting up access to the magnetic fields in your dataset requires special
+handling, because in different unit systems magnetic fields have different
+dimensions (see :ref:`bfields` for an explanation). If your dataset includes 
+magnetic fields, you should include them in ``known_other_fields``, but do
+not set up aliases for them--instead use the special handling function 
+:meth:`~yt.fields.magnetic_fields.setup_magnetic_field_aliases`. It takes
+as arguments the ``FieldInfoContainer`` instance, the field type of the 
+frontend, and the list of magnetic fields from the frontend. Here is an
+example of how this is implemented in the FLASH frontend:
+
+.. code-block:: python
+
+    class FLASHFieldInfo(FieldInfoContainer):
+        known_other_fields = (
+            ...
+            ("magx", (b_units, [], "B_x")), # Note there is no alias here
+            ("magy", (b_units, [], "B_y")),
+            ("magz", (b_units, [], "B_z")),
+            ...
+        )
+
+        def setup_fluid_fields(self):
+            from yt.fields.magnetic_field import \
+                setup_magnetic_field_aliases
+            ...
+            setup_magnetic_field_aliases(self, "flash", ["mag%s" % ax for ax in "xyz"])    
+
+This function should always be imported and called from within the 
+``setup_fluid_fields`` method of the ``FieldInfoContainer``. If this 
+function is used, converting between magnetic fields in different 
+:ref:`unit systems <unit_systems>` will be handled automatically. 
+
 Data Localization Structures
 ----------------------------
 

diff -r a1385280dbe964d352623b28579dee0ea4834c26 -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 doc/source/developing/testing.rst
--- a/doc/source/developing/testing.rst
+++ b/doc/source/developing/testing.rst
@@ -453,18 +453,25 @@
    @requires_ds(my_ds)
    def test_my_ds():
        ds = data_dir_load(my_ds)
+
        def create_image(filename_prefix):
            plt.plot([1, 2], [1, 2])
            plt.savefig(filename_prefix)
        test = GenericImageTest(ds, create_image, 12)
+
+       # this ensures the test has a unique key in the
+       # answer test storage file
+       test.prefix = "my_unique_name"
+
        # this ensures a nice test name in nose's output
        test_my_ds.__description__ = test.description
+
        yield test_my_ds
 
 Another good example of an image comparison test is the
 ``PlotWindowAttributeTest`` defined in the answer testing framework and used in
 ``yt/visualization/tests/test_plotwindow.py``. This test shows how a new answer
-test subclass can be used to programitically test a variety of different methods
+test subclass can be used to programmatically test a variety of different methods
 of a complicated class using the same test class. This sort of image comparison
 test is more useful if you are finding yourself writing a ton of boilerplate
 code to get your image comparison test working.  The ``GenericImageTest`` is

This diff is so big that we needed to truncate the remainder.

https://bitbucket.org/yt_analysis/yt/commits/14ef9ac4f402/
Changeset:   14ef9ac4f402
Branch:      yt
User:        chummels
Date:        2016-03-15 22:58:38+00:00
Summary:     Updating answer tests for absorption spectrum.
Affected #:  1 file

diff -r 3d188f2aedfc1ed4a25cb292ba5c680bd4e11c64 -r 14ef9ac4f402fe9d0ce249c001683cd6b87a5643 tests/tests_2.7.yaml
--- a/tests/tests_2.7.yaml
+++ b/tests/tests_2.7.yaml
@@ -55,7 +55,7 @@
   local_ytdata_270:
     - yt/frontends/ytdata
 
-  local_absorption_spectrum_270:
+  local_absorption_spectrum_271:
     - yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py:test_absorption_spectrum_non_cosmo
     - yt/analysis_modules/absorption_spectrum/tests/test_absorption_spectrum.py:test_absorption_spectrum_cosmo

Repository URL: https://bitbucket.org/yt_analysis/yt/

--

This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.

