AnalysisMixin

class Stoner.Analysis.AnalysisMixin[source]

Bases: object

A mixin class designed to work with Stoner.Core.DataFile to provide additional analysis methods.

Methods Summary

apply(func[, col, replace, header])

Apply the given function to each row in the data set and add the result to the data set.

clip(clipper[, column])

Clips the data based on the column and the clipper value.

decompose([xcol, ycol, sym, asym, replace])

Given (x,y) data, decomposes the y part into symmetric and antisymmetric contributions in x.

integrate([xcol, ycol, result, header, ...])

Integrate a column of data, optionally returning the cumulative integral.

normalise([target, base, replace, header, ...])

Normalise data columns by dividing through by a base column value.

Methods Documentation

apply(func, col=None, replace=True, header=None, **kargs)[source]

Apply the given function to each row in the data set and add the result to the data set.

Parameters
  • func (callable) – The function to apply to each row of the data.

  • col (index) – The column in which to place the result of the function.

Keyword Arguments
  • replace (bool) – Either replace the existing column/complete data or create a new column or data file.

  • header (string or None) – The new column header(s) (defaults to the name of the function func).

Note

If any extra keyword arguments are supplied then these are passed to the function directly. If you need to pass any arguments that overlap with the keyword arguments to AnalysisMixin.apply, then these can be supplied in a dictionary argument _extra.

The callable func should have a signature:

def func(row, **kargs):

and should return either a single float, in which case it will be used to replace the specified column, or an array, in which case it is used to completely replace the row of data.

If the function returns a complete row of data, then the replace parameter will cause the return value to be a new data file, leaving the original unchanged. The header parameter can give the complete column headers for the new data file.

Returns

(Stoner.Data) – The newly modified Data object.
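As a brief sketch of typical usage (the row function, column names and the factor keyword below are illustrative, not part of the library): a function returning a single float replaces the nominated column, and extra keyword arguments are forwarded to the function.

from numpy import linspace

from Stoner import Data

x = linspace(0, 10, 11)
y = x ** 2
d = Data(x, y, setas="xy", column_headers=["X", "Y"])


def scale_y(row, factor=2.0):
    """Return a single float - the Y value of this row multiplied by factor."""
    return row[1] * factor


# Replace the existing "Y" column with the scaled values; factor is passed through to scale_y.
d.apply(scale_y, col="Y", replace=True, header="Y scaled", factor=3.0)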

clip(clipper, column=None)[source]

Clips the data based on the column and the clipper value.

Parameters
  • column (index) – Column whose values are checked against the clip limits

  • clipper (tuple or array) – Either a tuple of (min,max) or a numpy.ndarray - in which case the max and min values in that array will be used as the clip limits

Returns

(Stoner.Data) – The newly modified Data object.

Note

If column is not defined (or is None) the DataFile.setas column assignments are used.
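A minimal sketch of clip (the column name and limits here are illustrative): rows whose value in the nominated column falls outside the (min, max) limits are discarded.

from numpy import linspace

from Stoner import Data

x = linspace(-5, 5, 101)
y = x ** 2
d = Data(x, y, setas="xy", column_headers=["X", "Y"])

# Keep only the rows where the "Y" value lies between 1 and 16.
d.clip((1.0, 16.0), column="Y")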

decompose(xcol=None, ycol=None, sym=None, asym=None, replace=True, **kwords)[source]

Given (x,y) data, decomposes the y part into symmetric and antisymmetric contributions in x.

Keyword Arguments
  • xcol (index) – Index of column with x data - defaults to first x column in self.setas

  • ycol (index or list of indices) – indices of y column(s) data

  • sym (index) – Index of column to place the symmetric data in, defaults to appending to the end of the data

  • asym (index) – Index of column for the asymmetric part of the data. Defaults to appending to the end of the data

  • replace (bool) – Overwrite data with output (default True)

Returns

self – The newly modified AnalysisMixin.

Example

"""Decompose Into symmetric and antisymmetric parts example."""
from numpy import linspace, reshape, array

from Stoner import Data
from Stoner.tools import format_val

x = linspace(-10, 10, 201)
y = 0.3 * x ** 3 - 6 * x ** 2 + 11 * x - 20
d = Data(x, y, setas="xy", column_headers=["X", "Y"])
d.decompose()
d.setas = "xyyy"
coeffs = d.polyfit(polynomial_order=3)
str_coeffs = [format_val(c, mode="eng", places=1) for c in coeffs.ravel()]
str_coeffs = reshape(array(str_coeffs), coeffs.shape)
d.plot()
d.text(
    -6,
    -800,
    "Coefficients\n{}".format(str_coeffs),
    fontdict={"size": "x-small"},
)
d.ylabel = "Data"
d.title = "Decompose Example"
d.tight_layout()

(Figure: output of the decompose example plot)

integrate(xcol=None, ycol=None, result=None, header=None, result_name=None, output='data', bounds=<function AnalysisMixin.<lambda>>, **kargs)[source]

Integrate a column of data, optionally returning the cumulative integral.

Parameters
  • xcol (index) – The X data column index (or header)

  • ycol (index) – The Y data column index (or header)

Keyword Arguments
  • result (index or None) – Either a column index (or header) to overwrite with the cumulative data, or True to add a new column or None to not store the cumulative result.

  • result_name (str) – The metadata name for the final result

  • header (str) – The name of the header for the results column.

  • output (str) – What to return - ‘data’ (default): this object, ‘result’: the final result

  • bounds (callable) – A function that evaluates for each row to determine if the data should be integrated over.

  • **kargs – Other keyword arguments are passed directly to the scipy.integrate.cumtrapz method

Returns

(Stoner.Data) – The newly modified Data object.

Note

This is a pass-through to the scipy.integrate.cumtrapz routine, which uses simple trapezoidal integration. A better alternative would be to offer a variety of methods, including Simpson’s rule and interpolation of the data. If xcol or ycol are not specified then the current values from the Stoner.Core.DataFile.setas attribute are used.
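A short sketch of integrate on simple (x, y) data (the column names and metadata key are illustrative): the cumulative integral is stored in a new column and, because output=’result’, the final value is returned directly.

from numpy import linspace, pi, sin

from Stoner import Data

x = linspace(0, pi, 101)
y = sin(x)
d = Data(x, y, setas="xy", column_headers=["X", "Y"])

# Store the cumulative integral in a new column and return the final value (close to 2 here).
area = d.integrate(result=True, header="Integral of Y", result_name="Area", output="result")
print(area)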

normalise(target=None, base=None, replace=True, header=None, scale=None, limits=(0.0, 1.0))[source]

Normalise data columns by dividing through by a base column value.

Parameters

target (index) – One or more target columns to normalise; can be a string, integer, or a list of strings or integers. If None then the default ‘y’ column is used.

Keyword Arguments
  • base (index) – The column to normalise to; can be an integer or string. Deprecated: can also be a tuple (low, high) giving the output range

  • replace (bool) – Set True (default) to overwrite the target data columns

  • header (string or None) – The new column header - defaults to the target column name with (norm) appended

  • scale (None or tuple of float, float) – Output range after normalising - (low, high), or None to map to (-1, 1)

  • limits (float, float) – (low, high) - Take the input range from the low and high fractions of the input when sorted.

Returns

(Stoner.Data) – The newly modified Data object.

Notes

The limits parameter is used to set the input scale being normalised from - if the data has a few outliers then this setting can be used to clip the input range before normalising. The parameters in the limit are the values at the low and high fractions of the cumulative distribution function of the data.
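A minimal sketch of normalise (the target column and output range are illustrative): the "Y" column is rescaled in place to run over the range (0, 1).

from numpy import linspace

from Stoner import Data

x = linspace(0, 10, 101)
y = 3 * x + 2
d = Data(x, y, setas="xy", column_headers=["X", "Y"])

# Rescale the "Y" column in place so that its values run from 0 to 1.
d.normalise(target="Y", scale=(0.0, 1.0), replace=True, header="Y (norm)")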