Iris 0.9

iris.analysis

A package providing various analysis facilities.

Primarily, this module provides definitions of statistical operations, such as MEAN and STD_DEV, that can be applied to Cubes via methods such as iris.cube.Cube.collapsed() and iris.cube.Cube.aggregated_by().

Note

These statistical operations define how to transform both the metadata and the data.

In this module:


iris.analysis.COUNT = <iris.analysis.Aggregator object at 0x18bb410>

The number of data values that match the given function.

Args:

  • function:

    A function which returns True or False given a value in the data array.

For example, the number of ensembles with precipitation exceeding 10 (in cube data units) could be calculated with:

result = precip_cube.collapsed('ensemble_member', iris.analysis.COUNT,
                               function=lambda data_value: data_value > 10)
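The counting that COUNT performs can be sketched with plain numpy masked arrays (the precipitation values below are made-up illustration data, not from the example cube):

```python
import numpy as np

# Hypothetical ensemble of precipitation values; one member is masked out.
precip = np.ma.masked_array([3.0, 12.5, 18.0, 7.0],
                            mask=[False, False, False, True])

# COUNT applies the predicate element-wise and counts the True results;
# masked elements do not contribute to the total.
count = (precip > 10).sum()
```

Here two of the three unmasked values exceed 10, so the count is 2.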

iris.analysis.GMEAN = <iris.analysis.Aggregator object at 0x18c81d0>

The geometric mean, as computed by scipy.stats.mstats.gmean().

For example, to compute zonal geometric means:

result = cube.collapsed('longitude', iris.analysis.GMEAN)
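The underlying computation is the exponential of the arithmetic mean of the logarithms, which is what scipy.stats.mstats.gmean evaluates; a minimal numpy sketch:

```python
import numpy as np

# Geometric mean = exp(mean(log(x))); all values must be positive.
values = np.array([2.0, 8.0])
gmean = np.exp(np.log(values).mean())  # exp((ln 2 + ln 8) / 2) = 4.0
```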

iris.analysis.HMEAN = <iris.analysis.Aggregator object at 0x18c84d0>

The harmonic mean, as computed by scipy.stats.mstats.hmean().

For example, to compute zonal harmonic means:

result = cube.collapsed('longitude', iris.analysis.HMEAN)

Note

The harmonic mean is only valid if all data values are greater than zero.
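The harmonic mean is n divided by the sum of reciprocals, which is the quantity scipy.stats.mstats.hmean computes; a minimal numpy sketch:

```python
import numpy as np

# Harmonic mean = n / sum(1/x); only valid for strictly positive values.
values = np.array([1.0, 4.0, 4.0])
hmean = len(values) / np.sum(1.0 / values)  # 3 / 1.5 = 2.0
```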


iris.analysis.MAX = <iris.analysis.Aggregator object at 0x18c8510>

The maximum, as computed by numpy.ma.max().

For example, to compute zonal maximums:

result = cube.collapsed('longitude', iris.analysis.MAX)

iris.analysis.MEAN = <iris.analysis.WeightedAggregator object at 0x18c8590>

The mean, as computed by numpy.ma.average().

For example, to compute zonal means:

result = cube.collapsed('longitude', iris.analysis.MEAN)

Additional kwargs available:

  • weights

Optional array of floats. If supplied, its shape must match that of the cube.

    LatLon area weights can be calculated using iris.analysis.cartography.area_weights().

  • returned

    Set this to True to indicate the collapsed weights are to be returned along with the collapsed data. Defaults to False.

For example:

cube_out, weights_out = cube_in.collapsed(coord_names, iris.analysis.MEAN, weights=weights_in, returned=True)
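The weights and returned keywords map directly onto numpy.ma.average, which MEAN uses; this sketch (with made-up data) shows the same two-value return:

```python
import numpy as np

# numpy.ma.average accepts per-element weights; with returned=True it
# also hands back the sum of the weights alongside the weighted mean.
data = np.ma.masked_array([1.0, 2.0, 3.0])
weights = np.array([0.5, 0.25, 0.25])
mean, weight_sum = np.ma.average(data, weights=weights, returned=True)
# mean = (1*0.5 + 2*0.25 + 3*0.25) / 1.0 = 1.75
```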

iris.analysis.MEDIAN = <iris.analysis.Aggregator object at 0x1aee590>

The median, as computed by numpy.ma.median().

For example, to compute zonal medians:

result = cube.collapsed('longitude', iris.analysis.MEDIAN)

iris.analysis.MIN = <iris.analysis.Aggregator object at 0x1aee5d0>

The minimum, as computed by numpy.ma.min().

For example, to compute zonal minimums:

result = cube.collapsed('longitude', iris.analysis.MIN)

iris.analysis.PERCENTILE = <iris.analysis.Aggregator object at 0x1aee610>

The percentile, as computed by scipy.stats.mstats.scoreatpercentile().

Required kwargs:

  • percent:

Percentile rank at which to extract the value. No default.

For example, to compute the 90th percentile over time:

result = cube.collapsed('time', iris.analysis.PERCENTILE, percent=90)

Note

The default values of alphap and betap are both 1. For detailed meanings on these values see scipy.stats.mstats.mquantiles().
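With alphap = betap = 1, the score-at-percentile calculation coincides with the default linear interpolation used by numpy.percentile, so the behaviour can be sketched without scipy:

```python
import numpy as np

# For alphap = betap = 1 the percentile matches numpy's default
# linear-interpolation method.
data = np.arange(1.0, 11.0)     # 1, 2, ..., 10
p90 = np.percentile(data, 90)   # interpolates between 9 and 10 -> 9.1
```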


iris.analysis.PROPORTION = <iris.analysis.Aggregator object at 0x1aee650>

The proportion, as a decimal, of data that match the given function.

The proportion calculation takes masked values into account; if the number of non-masked values is zero, the result will itself be a masked array.

Args:

  • function:

    A function which returns True or False given a value in the data array.

For example, the probability of precipitation exceeding 10 (in cube data units) across ensemble members could be calculated with:

result = precip_cube.collapsed('ensemble_member', iris.analysis.PROPORTION,
                               function=lambda data_value: data_value > 10)

Similarly, the proportion of times precipitation exceeded 10 (in cube data units) could be calculated with:

result = precip_cube.collapsed('time', iris.analysis.PROPORTION,
                               function=lambda data_value: data_value > 10)
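The proportion is the matching count divided by the number of unmasked values, which a plain numpy masked-array sketch (with made-up precipitation values) makes explicit:

```python
import numpy as np

# One of the four values is masked, so the denominator is 3, not 4.
precip = np.ma.masked_array([3.0, 12.5, 18.0, 7.0],
                            mask=[False, False, False, True])
matches = precip > 10
proportion = matches.sum() / matches.count()  # 2 matches / 3 unmasked
```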

iris.analysis.STD_DEV = <iris.analysis.Aggregator object at 0x1aee690>

The standard deviation, as computed by numpy.ma.std().

For example, to compute zonal standard deviations:

result = cube.collapsed('longitude', iris.analysis.STD_DEV)

Additional kwargs available:

  • ddof:

    Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is one.

For example, to obtain the biased standard deviation:

result = cube.collapsed(coord_to_collapse, iris.analysis.STD_DEV, ddof=0)
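The effect of ddof on the divisor N - ddof can be seen directly with numpy.ma.std, which STD_DEV wraps:

```python
import numpy as np

# ddof=1 (the Iris default) gives the sample standard deviation;
# ddof=0 gives the biased (population) estimate.
data = np.ma.masked_array([1.0, 2.0, 3.0, 4.0])
sample_std = np.ma.std(data, ddof=1)  # divisor N - 1 -> sqrt(5/3)
biased_std = np.ma.std(data, ddof=0)  # divisor N     -> sqrt(5/4)
```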

iris.analysis.SUM = <iris.analysis.Aggregator object at 0x1aee6d0>

The sum of a dataset, as computed by numpy.ma.sum().

For example, to compute an accumulation over time:

result = cube.collapsed('time', iris.analysis.SUM)

iris.analysis.VARIANCE = <iris.analysis.Aggregator object at 0x1aee710>

The variance, as computed by numpy.ma.var().

For example, to compute zonal variance:

result = cube.collapsed('longitude', iris.analysis.VARIANCE)

Additional kwargs available:

  • ddof:

    Delta degrees of freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is one.

For example, to obtain the biased variance:

result = cube.collapsed(coord_to_collapse, iris.analysis.VARIANCE, ddof=0)

iris.analysis.coord_comparison(*cubes)

Convenience function to help compare coordinates on one or more cubes by their metadata.

Return a dictionary where the key represents the statement, “Given these cubes list the coordinates which, when grouped by metadata, are/have...”

Keys:

  • grouped_coords

    A list of coordinate groups of all the coordinates grouped together by their coordinate definition

  • ungroupable

    A list of coordinate groups which contain at least one None, meaning not all Cubes provide an equivalent coordinate

  • not_equal

    A list of coordinate groups of which not all are equal (superset of ungroupable)

  • no_data_dimension

    A list of coordinate groups of which all have no data dimensions on their respective cubes

  • scalar

    A list of coordinate groups of which all have shape (1, )

  • non_equal_data_dimension

    A list of coordinate groups of which not all have the same data dimension on their respective cubes

  • non_equal_shape

    A list of coordinate groups of which not all have the same shape

  • equal_data_dimension

    A list of coordinate groups of which all have the same data dimension on their respective cubes

  • equal

    A list of coordinate groups of which all are equal

  • ungroupable_and_dimensioned

A list of coordinate groups for which not all cubes had an equivalent (in metadata) coordinate that also describes a data dimension

  • dimensioned

    A list of coordinate groups of which all describe a data dimension on their respective cubes

  • ignorable

    A list of scalar, ungroupable non_equal coordinate groups

  • resamplable

    A list of equal, different data dimensioned coordinate groups

  • transposable

    A list of non equal, same data dimensioned, non scalar coordinate groups

Example usage:

result = coord_comparison(cube1, cube2)
print('All equal coordinates: ', result['equal'])

Convenience class that supports common aggregation functionality.

class iris.analysis.Aggregator(history, cell_method, call_func, **kwargs)

Create an aggregator for the given call_func.

Args:

  • history (string):

    History string that supports string format substitution.

  • cell_method (string):

    Cell method string that supports string format substitution.

  • call_func (callable):

    Data aggregation function.

Kwargs:

  • kwargs:

    Passed through to call_func.

aggregate(data, axis, **kwargs)

Perform the aggregation function given the data.

Keyword arguments are passed through to the data aggregation function (for example, the “percent” keyword for a percentile aggregator). This function is usually used in conjunction with update_metadata(), which should be passed the same keyword arguments.

Returns:
The aggregated data.
post_process(collapsed_cube, data_result, **kwargs)

Process the result from iris.analysis.Aggregator.aggregate().

Ensures the data is an array, even when collapsed to a single value.

Args:

  • collapsed_cube:

    The cube resulting from the collapse operation.

  • data_result:

    The result of iris.analysis.Aggregator.aggregate().

update_metadata(cube, coords, **kwargs)

Update cube history and cell method metadata w.r.t the aggregation function.

Args:

  • cube:

    The cube whose metadata is to be updated.

  • coords:

    The coordinate(s) over which the aggregation was performed.

Kwargs:

  • This function is intended to be used in conjunction with aggregate() and should be passed the same keywords (for example, the “percent” keyword for a percentile aggregator).
call_func = None

Data aggregation function.

cell_method = None

Cube cell method string.

history = None

Cube history string that supports string format substitution.
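The relationship between the constructor arguments and aggregate() can be sketched as a plain-Python analogue; SimpleAggregator below is hypothetical illustration code, not part of Iris:

```python
import numpy as np

# Schematic analogue of iris.analysis.Aggregator: it stores a data
# aggregation function plus default keyword arguments, and applies
# them along a requested axis.
class SimpleAggregator:
    def __init__(self, cell_method, call_func, **kwargs):
        self.cell_method = cell_method  # e.g. "mean"
        self.call_func = call_func      # the data aggregation function
        self.kwargs = kwargs            # defaults passed through to call_func

    def aggregate(self, data, axis, **kwargs):
        # Per-call keywords override the defaults given at construction.
        merged = dict(self.kwargs, **kwargs)
        return self.call_func(data, axis=axis, **merged)

MEAN_LIKE = SimpleAggregator("mean", np.ma.average)
data = np.arange(12.0).reshape(3, 4)
result = MEAN_LIKE.aggregate(data, axis=0)  # column means
```

The real Aggregator additionally updates cube metadata (history and cell methods) via update_metadata(), which this sketch omits.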


iris.analysis.clear_phenomenon_identity(cube)

Helper function to clear the standard_name, attributes, and cell_methods of a cube.