
Class Variable

object --+
         |
        Variable


A netCDF `netCDF4.Variable` is used to read and write netCDF data. It is
analogous to a numpy array object. See `netCDF4.Variable.__init__` for more
details.

A list of attribute names corresponding to netCDF attributes defined for
the variable can be obtained with the `netCDF4.Variable.ncattrs` method. These
attributes can be created by assigning to an attribute of the
`netCDF4.Variable` instance. A dictionary containing all the netCDF attribute
name/value pairs is provided by the `__dict__` attribute of a
`netCDF4.Variable` instance.

The following class variables are read-only:

**`dimensions`**: A tuple containing the names of the
dimensions associated with this variable.

**`dtype`**: A numpy dtype object describing the
variable's data type.

**`ndim`**: The number of variable dimensions.

**`shape`**: A tuple with the current shape (length of all dimensions).

**`scale`**: If True, `scale_factor` and `add_offset` are
applied. Default is `True`; it can be changed using the `netCDF4.Variable.set_auto_scale` and
`netCDF4.Variable.set_auto_maskandscale` methods.

**`mask`**: If True, data is automatically converted to/from masked
arrays when missing values or fill values are present. Default is `True`; it can be
changed using the `netCDF4.Variable.set_auto_mask` and `netCDF4.Variable.set_auto_maskandscale`
methods.

**`least_significant_digit`**: Describes the power of ten of the
smallest decimal place in the data that contains a reliable value.  Data is
truncated to this decimal place when it is assigned to the `netCDF4.Variable`
instance. If `None`, the data is not truncated.

**`__orthogonal_indexing__`**: Always `True`.  Indicates to client code
that the object supports 'orthogonal indexing', which means that slices
that are 1d arrays or lists slice along each dimension independently.  This
behavior is similar to Fortran or Matlab, but different from numpy.

**`datatype`**: numpy data type (for primitive data types) or VLType/CompoundType
 instance (for compound or vlen data types).

**`name`**: String name.

**`size`**: The number of stored elements.
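The orthogonal indexing behavior described under `__orthogonal_indexing__` can be illustrated with plain numpy (a conceptual sketch, no netCDF file involved): `np.ix_` reproduces the independent per-dimension slicing that a `netCDF4.Variable` performs, while numpy's own fancy indexing pairs the index lists element-wise.

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
rows = [0, 2]
cols = [1, 3]

# numpy fancy indexing pairs the index lists element-wise:
# it picks the elements at (0, 1) and (2, 3)
pointwise = a[rows, cols]

# orthogonal indexing (the netCDF4.Variable behavior) slices each
# dimension independently; np.ix_ emulates it in plain numpy
orthogonal = a[np.ix_(rows, cols)]

print(pointwise.tolist())   # [1, 11]
print(orthogonal.tolist())  # [[1, 3], [9, 11]]
```

With a `netCDF4.Variable`, `v[rows, cols]` would directly return the orthogonal (2, 2) result.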
    

Instance Methods
 
__array__(...)
 
__delattr__(...)
x.__delattr__('name') <==> del x.name
 
__delitem__(x, y)
del x[y]
 
__getattr__(...)
 
__getattribute__(...)
x.__getattribute__('name') <==> x.name
 
__getitem__(x, y)
x[y]
 
__init__(...)
**`__init__(self, group, name, datatype, dimensions=(), zlib=False, complevel=4, shuffle=True, fletcher32=False, contiguous=False, chunksizes=None, endian='native', least_significant_digit=None,fill_value=None)`**
 
__len__(x)
len(x)
 
__new__(T, S, ...)
a new object with type S, a subtype of T
 
__repr__(x)
repr(x)
 
__setattr__(...)
x.__setattr__('name', value) <==> x.name = value
 
__setitem__(x, i, y)
x[i]=y
 
__unicode__(...)
 
_assign_vlen(...)
private method to assign data to a single item in a VLEN variable
 
_get(...)
Private method to retrieve data from a netCDF variable
 
_getdims(...)
 
_getname(...)
 
_put(...)
Private method to put data into a netCDF variable
 
_toma(...)
 
assignValue(...)
**`assignValue(self, val)`**
 
chunking(...)
**`chunking(self)`**
 
delncattr(...)
**`delncattr(self,name)`**
 
endian(...)
**`endian(self)`**
 
filters(...)
**`filters(self)`**
 
getValue(...)
**`getValue(self)`**
 
get_var_chunk_cache(...)
**`get_var_chunk_cache(self)`**
 
getncattr(...)
**`getncattr(self,name)`**
 
group(...)
**`group(self)`**
 
ncattrs(...)
**`ncattrs(self)`**
 
renameAttribute(...)
**`renameAttribute(self, oldname, newname)`**
 
set_auto_mask(...)
**`set_auto_mask(self,mask)`**
 
set_auto_maskandscale(...)
**`set_auto_maskandscale(self,maskandscale)`**
 
set_auto_scale(...)
**`set_auto_scale(self,scale)`**
 
set_var_chunk_cache(...)
**`set_var_chunk_cache(self,size=None,nelems=None,preemption=None)`**
 
setncattr(...)
**`setncattr(self,name,value)`**
 
setncatts(...)
**`setncatts(self,attdict)`**

Inherited from object: __format__, __hash__, __reduce__, __reduce_ex__, __sizeof__, __str__, __subclasshook__

Properties
  __orthogonal_indexing__
  _cmptype
  _grp
  _grpid
  _iscompound
  _isprimitive
  _isvlen
  _name
  _nunlimdim
  _varid
  _vltype
  datatype
numpy data type (for primitive data types) or VLType/CompoundType instance (for compound or vlen data types)
  dimensions
get variable's dimension names
  dtype
  mask
  name
string name of Variable instance
  ndim
  scale
  shape
find current sizes of all variable dimensions
  size
Return the number of stored elements.

Inherited from object: __class__

Method Details

__delattr__(...)

 

x.__delattr__('name') <==> del x.name

Overrides: object.__delattr__

__getattribute__(...)

 

x.__getattribute__('name') <==> x.name

Overrides: object.__getattribute__

__init__(...)
(Constructor)

 

**`__init__(self, group, name, datatype, dimensions=(), zlib=False, complevel=4, shuffle=True, fletcher32=False, contiguous=False, chunksizes=None, endian='native', least_significant_digit=None,fill_value=None)`**

`netCDF4.Variable` constructor.

**`group`**: `netCDF4.Group` or `netCDF4.Dataset` instance to associate with variable.

**`name`**: Name of the variable.

**`datatype`**: `netCDF4.Variable` data type. Can be specified by providing a numpy dtype object, or a string that describes a numpy dtype object. Supported values, corresponding to the `str` attribute of numpy dtype objects, include `'f4'` (32-bit floating point), `'f8'` (64-bit floating point), `'i1'` (8-bit signed integer), `'i2'` (16-bit signed integer), `'i4'` (32-bit signed integer), `'i8'` (64-bit signed integer), `'u1'` (8-bit unsigned integer), `'u2'` (16-bit unsigned integer), `'u4'` (32-bit unsigned integer), `'u8'` (64-bit unsigned integer), or `'S1'` (single-character string). For compatibility with Scientific.IO.NetCDF, the old Numeric single-character typecodes can also be used (`'f'` instead of `'f4'`, `'d'` instead of `'f8'`, `'h'` or `'s'` instead of `'i2'`, `'b'` or `'B'` instead of `'i1'`, `'c'` instead of `'S1'`, and `'i'` or `'l'` instead of `'i4'`). `datatype` can also be a `netCDF4.CompoundType` instance (for a structured, or compound array), a `netCDF4.VLType` instance (for a variable-length array), or the python `str` builtin (for a variable-length string array). Numpy string and unicode datatypes with length greater than one are aliases for `str`.

**`dimensions`**: a tuple containing the variable's dimension names (defined previously with `createDimension`). Default is an empty tuple which means the variable is a scalar (and therefore has no dimensions).

**`zlib`**: if `True`, data assigned to the `netCDF4.Variable` instance is compressed on disk. Default `False`.

**`complevel`**: the level of zlib compression to use (1 is the fastest, but poorest compression, 9 is the slowest but best compression). Default 4. Ignored if `zlib=False`.

**`shuffle`**: if `True`, the HDF5 shuffle filter is applied to improve compression. Default `True`. Ignored if `zlib=False`.

**`fletcher32`**: if `True` (default `False`), the Fletcher32 checksum algorithm is used for error detection.

**`contiguous`**: if `True` (default `False`), the variable data is stored contiguously on disk. Setting to `True` for a variable with an unlimited dimension will trigger an error.

**`chunksizes`**: Can be used to specify the HDF5 chunksizes for each dimension of the variable. A detailed discussion of HDF chunking and I/O performance is available [here](http://www.hdfgroup.org/HDF5/doc/H5.user/Chunking.html). Basically, you want the chunk size for each dimension to match as closely as possible the size of the data block that users will read from the file. `chunksizes` cannot be set if `contiguous=True`.

**`endian`**: Can be used to control whether the data is stored in little or big endian format on disk. Possible values are `little`, `big` or `native` (default). The library will automatically handle endian conversions when the data is read, but if the data is always going to be read on a computer with the opposite byte order from the one used to create the file, there may be some performance advantage to be gained by setting the endian-ness. For netCDF 3 files (which don't use HDF5), only `endian='native'` is allowed.

The `zlib, complevel, shuffle, fletcher32, contiguous` and `chunksizes` keywords are silently ignored for netCDF 3 files that do not use HDF5.

**`least_significant_digit`**: If specified, variable data will be truncated (quantized). In conjunction with `zlib=True` this produces 'lossy', but significantly more efficient compression. For example, if `least_significant_digit=1`, data will be quantized using `around(scale*data)/scale`, where `scale = 2**bits`, and `bits` is determined so that a precision of 0.1 is retained (in this case `bits=4`). Default is `None`, meaning no quantization.

**`fill_value`**: If specified, the default netCDF `_FillValue` (the value that the variable gets filled with before any data is written to it) is replaced with this value. If fill_value is set to `False`, then the variable is not pre-filled. The default netCDF fill values can be found in `netCDF4.default_fillvals`.

***Note***: `netCDF4.Variable` instances should be created using the `netCDF4.Dataset.createVariable` method of a `netCDF4.Dataset` or `netCDF4.Group` instance, not using this class directly.
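The `least_significant_digit` quantization can be sketched in plain Python from the `around(scale*data)/scale` formula above (an illustration of the scheme, not the library's internal code):

```python
import math

def quantize(data, least_significant_digit):
    # scale = 2**bits, with bits chosen so that a precision of
    # 10**-least_significant_digit is retained (bits=4 for digit 1)
    bits = math.ceil(math.log2(10.0 ** least_significant_digit))
    scale = 2.0 ** bits
    return [round(scale * x) / scale for x in data]

print(quantize([3.14159, 2.71828], 1))  # [3.125, 2.6875]
```

The quantized values differ from the originals by less than 0.1, and their trailing zero bits make the data far more compressible by zlib.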

Overrides: object.__init__

__new__(T, S, ...)

 
Returns: a new object with type S, a subtype of T
Overrides: object.__new__

__repr__(x)
(Representation operator)

 

repr(x)

Overrides: object.__repr__

__setattr__(...)

 

x.__setattr__('name', value) <==> x.name = value

Overrides: object.__setattr__

assignValue(...)

 

**`assignValue(self, val)`**

assign a value to a scalar variable. Provided for compatibility with Scientific.IO.NetCDF, can also be done by assigning to an Ellipsis slice ([...]).

chunking(...)

 

**`chunking(self)`**

return variable chunking information. If the dataset is defined to be contiguous (and hence there is no chunking) the word 'contiguous' is returned. Otherwise, a sequence with the chunksize for each dimension is returned.

delncattr(...)

 

**`delncattr(self,name)`**

delete a netCDF variable attribute. Use if you need to delete a netCDF attribute with the same name as one of the reserved python attributes.

endian(...)

 

**`endian(self)`**

return endian-ness (`little,big,native`) of variable (as stored in HDF5 file).

filters(...)

 

**`filters(self)`**

return dictionary containing HDF5 filter parameters.

getValue(...)

 

**`getValue(self)`**

get the value of a scalar variable. Provided for compatibility with Scientific.IO.NetCDF, can also be done by slicing with an Ellipsis ([...]).

get_var_chunk_cache(...)

 

**`get_var_chunk_cache(self)`**

return variable chunk cache information in a tuple (size,nelems,preemption). See netcdf C library documentation for `nc_get_var_chunk_cache` for details.

getncattr(...)

 

**`getncattr(self,name)`**

retrieve a netCDF variable attribute. Use if you need to get a netCDF attribute with the same name as one of the reserved python attributes.

group(...)

 

**`group(self)`**

return the group that this `netCDF4.Variable` is a member of.

ncattrs(...)

 

**`ncattrs(self)`**

return netCDF attribute names for this `netCDF4.Variable` in a list.

renameAttribute(...)

 

**`renameAttribute(self, oldname, newname)`**

rename a `netCDF4.Variable` attribute named `oldname` to `newname`.

set_auto_mask(...)

 

**`set_auto_mask(self,mask)`**

turn on or off automatic conversion of variable data to and from masked arrays.

If `mask` is set to `True`, when data is read from a variable it is converted to a masked array if any of the values are exactly equal to either the netCDF _FillValue or the value specified by the missing_value variable attribute. The fill_value of the masked array is set to the missing_value attribute (if it exists), otherwise to the netCDF _FillValue attribute (which has a default value for each data type). When data is written to a variable, the masked array is converted back to a regular numpy array by replacing all the masked values with the fill_value of the masked array.

The default value of `mask` is `True` (automatic conversions are performed).
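A minimal `numpy.ma` sketch of the round trip just described (using a hypothetical fill value; a real variable would take it from its `missing_value` or `_FillValue` attribute):

```python
import numpy as np
import numpy.ma as ma

# hypothetical fill value standing in for missing_value / _FillValue
fill_value = -9999.0
raw = np.array([1.0, fill_value, 3.0])

# reading: values equal to the fill value come back masked
masked = ma.masked_equal(raw, fill_value)

# writing: masked entries are replaced by the array's fill_value
written = masked.filled(fill_value)

print(masked.mask.tolist())  # [False, True, False]
print(written.tolist())      # [1.0, -9999.0, 3.0]
```

With `set_auto_mask(False)`, reading would instead return the raw array including the `-9999.0` sentinel.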

set_auto_maskandscale(...)

 

**`set_auto_maskandscale(self,maskandscale)`**

turn on or off automatic conversion of variable data to and from masked arrays and automatic packing/unpacking of variable data using `scale_factor` and `add_offset` attributes.

If `maskandscale` is set to `True`, when data is read from a variable it is converted to a masked array if any of the values are exactly equal to either the netCDF _FillValue or the value specified by the missing_value variable attribute. The fill_value of the masked array is set to the missing_value attribute (if it exists), otherwise to the netCDF _FillValue attribute (which has a default value for each data type). When data is written to a variable, the masked array is converted back to a regular numpy array by replacing all the masked values with the fill_value of the masked array.

If `maskandscale` is set to `True`, and the variable has a `scale_factor` or an `add_offset` attribute, then data read from that variable is unpacked using:

   data = self.scale_factor*data + self.add_offset

When data is written to a variable it is packed using:

   data = (data - self.add_offset)/self.scale_factor

If scale_factor is present but add_offset is missing, add_offset is assumed to be zero. If add_offset is present but scale_factor is missing, scale_factor is assumed to be one. For more information on how `scale_factor` and `add_offset` can be used to provide simple compression, see the [PSD metadata conventions](http://www.esrl.noaa.gov/psd/data/gridded/conventions/cdc_netcdf_standard.shtml).

The default value of `maskandscale` is `True` (automatic conversions are performed).

set_auto_scale(...)

 

**`set_auto_scale(self,scale)`**

turn on or off automatic packing/unpacking of variable data using `scale_factor` and `add_offset` attributes.

If `scale` is set to `True`, and the variable has a `scale_factor` or an `add_offset` attribute, then data read from that variable is unpacked using:

   data = self.scale_factor*data + self.add_offset

When data is written to a variable it is packed using:

   data = (data - self.add_offset)/self.scale_factor

If scale_factor is present but add_offset is missing, add_offset is assumed to be zero. If add_offset is present but scale_factor is missing, scale_factor is assumed to be one. For more information on how `scale_factor` and `add_offset` can be used to provide simple compression, see the [PSD metadata conventions](http://www.esrl.noaa.gov/psd/data/gridded/conventions/cdc_netcdf_standard.shtml).

The default value of `scale` is `True` (automatic conversions are performed).
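The packing arithmetic above can be sketched in plain Python (the `scale_factor` and `add_offset` values here are hypothetical; a real variable reads them from its attributes):

```python
# hypothetical packing parameters, as a variable's attributes might define
scale_factor = 0.01
add_offset = 20.0

def unpack(packed):
    # on read: data = scale_factor*data + add_offset
    return [scale_factor * p + add_offset for p in packed]

def pack(data):
    # on write: data = (data - add_offset)/scale_factor,
    # rounded to the packed (integer) type
    return [round((x - add_offset) / scale_factor) for x in data]

temps = [19.5, 20.25]
packed = pack(temps)        # [-50, 25]
restored = unpack(packed)
print(packed, restored)
```

The small packed integers can be stored in a narrow type like `'i2'`, which is what makes this a simple compression scheme.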

set_var_chunk_cache(...)

 

**`set_var_chunk_cache(self,size=None,nelems=None,preemption=None)`**

change variable chunk cache settings. See netcdf C library documentation for `nc_set_var_chunk_cache` for details.

setncattr(...)

 

**`setncattr(self,name,value)`**

set a netCDF variable attribute using name,value pair. Use if you need to set a netCDF attribute with the same name as one of the reserved python attributes.

setncatts(...)

 

**`setncatts(self,attdict)`**

set several netCDF variable attributes at once using a python dictionary. This may be faster when setting many attributes for a `NETCDF3` formatted file, since nc_redef/nc_enddef is not called in between setting each attribute.