ndv.models #

Models for ndv.

Classes:

  • ArrayDisplayModel

    Model of how to display an array.

  • ArrayViewerModel

    Options and state for the ArrayViewer.

  • ChannelMode

    Channel display mode.

  • ClimPolicy

    ABC for contrast limit policies.

  • ClimsManual

    Manually specified contrast limits.

  • ClimsMinMax

    Autoscale contrast limits based on the minimum and maximum values in the data.

  • ClimsPercentile

    Autoscale contrast limits based on percentiles of the data.

  • ClimsStdDev

    Automatically set contrast limits based on standard deviations from the mean.

  • DataWrapper

    Interface for wrapping different array-like data types.

  • LUTModel

    Representation of how to display a channel of an array.

  • NDVModel

    Base evented model for NDV models.

  • RectangularROIModel

    Representation of an axis-aligned rectangular Region of Interest (ROI).

  • ResolvedDisplayState

    Frozen snapshot of resolved display state (ArrayDisplayModel + DataWrapper).

  • RingBuffer

    Ring buffer structure with a given capacity and element type.

  • RingBufferWrapper

    Wrapper for ring buffer objects.

ArrayDisplayModel #

Bases: NDVModel

Model of how to display an array.

An ArrayDisplayModel is used to specify how to display a multi-dimensional array. It specifies which axes are visible, how to reduce along axes that are not visible, how to display channels, and how to apply lookup tables to channels. It is typically paired with an ndv.DataWrapper in order to resolve axis keys and slice data.

Info

In the following types, Hashable is used to refer to a type that will typically be either an integer index or a string label for an axis.

Attributes:

  • visible_axes (tuple[Hashable, ...]) –

    Ordered list of axes to visualize, from slowest to fastest. e.g. ('Z', -2, -1)

  • current_index (Mapping[Hashable, int | slice]) –

    The currently displayed position/slice along each dimension. e.g. {0: 0, 'time': slice(10, 20)} Not all axes need be present, and axes not present are assumed to be slice(None), meaning it is up to the controller of this model to restrict indices to an efficient range for retrieval. If the number of non-singleton axes is greater than n_visible_axes, then reducers are used to reduce the data along the remaining axes.

  • reducers (Mapping[Hashable | None, ufunc]) –

    Function used to reduce data along axes remaining after slicing. Ideally, the ufunc should accept an axis argument. (TODO: what happens if not?)

  • default_reducer (ufunc) –

    Default reducer to use when no reducer is specified for an axis. By default, this is numpy.max.

  • channel_mode (ChannelMode) –

    How to display channel information:

    • GRAYSCALE: ignore channel axis, use default_lut
    • COMPOSITE: display all channels as a composite image, using luts
    • COLOR: display a single channel at a time, using luts
    • RGBA: display as an RGB image, using default_lut (except for cmap)

    If channel_mode is set to anything other than GRAYSCALE, then channel_axis must be set to a valid axis; if no channel_axis is set, at the time of display, the DataWrapper MAY guess the channel_axis, and set it on the model.

  • channel_axis (Hashable | None) –

    The dimension index or name of the channel dimension. The implication of setting channel_axis is that all elements along the channel dimension are shown, with different LUTs applied to each channel. If None, then a single lookup table is used for all channels (luts[None]). NOTE: it is an error for channel_axis to be in visible_axes (or ignore it?)

  • luts (Mapping[int | None, LUTModel]) –

    Instructions for how to display each channel of the array. Keys represent position along the dimension specified by channel_axis. Values are LUT objects that specify how to display the channel. The special key None represents a fallback LUT for all channels, and is used when channel_axis is None; it should always be present.

  • default_lut (LUTModel) –

    Default lookup table to use when no LUTModel is specified for a channel in luts.

n_visible_axes property #

n_visible_axes: Literal[2, 3]

Number of dims is derived from the length of visible_axes.
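
The interplay between current_index, visible_axes, and reducers can be sketched with plain numpy. This is an illustrative stand-in, not the model's actual slicing code; the 4D stack, its axis order, and the chosen index are made up:

```python
import numpy as np

# a hypothetical 4D stack: (time, z, y, x)
data = np.random.rand(10, 5, 64, 64)

# current_index selects time point 3 and a z-range; y/x are the visible axes
current_index = {0: 3, 1: slice(1, 4)}
idx = tuple(current_index.get(i, slice(None)) for i in range(data.ndim))
sliced = data[idx]  # shape (3, 64, 64): a non-singleton z axis is left over

# that leftover axis is neither visible nor reduced to a point by the index,
# so a reducer (default: numpy.max) collapses it down to the 2 visible axes
frame = np.max(sliced, axis=0)  # shape (64, 64)
```

Note how the reducer is called with an explicit axis argument, which is why the docs above recommend that custom reducers accept one.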

ArrayViewerModel #

Bases: NDVModel

Options and state for the ArrayViewer.

Attributes:

  • interaction_mode (InteractionMode) –

    Describes the current interaction mode of the Viewer.

  • show_controls ((bool, optional)) –

    Control visibility of all controls at once. By default True.

  • show_3d_button ((bool, optional)) –

    Whether to show the 3D button, by default True.

  • show_histogram_button ((bool, optional)) –

    Whether to show the histogram button, by default True.

  • use_shared_histogram ((bool, optional)) –

    Whether to use a shared histogram overlay for all channels, by default True. When True, per-channel histogram buttons are hidden.

  • show_reset_zoom_button ((bool, optional)) –

    Whether to show the reset zoom button, by default True.

  • show_roi_button ((bool, optional)) –

    Whether to show the ROI button, by default False.

  • show_channel_mode_selector ((bool, optional)) –

    Whether to show the channel mode selector, by default True.

  • show_play_button ((bool, optional)) –

    Whether to show the play button, by default True.

  • show_progress_spinner ((bool, optional)) –

    Whether to show the progress spinner, by default False.

  • show_data_info ((bool, optional)) –

    Whether to show information about the array (shape, dtype, size, etc.).

  • default_luts ((list[Colormap], optional)) –

    List of colormaps to use when populating the LUT dropdown menu in the viewer. Only editable upon initialization. Values may be any cmap ColormapLike object (most commonly, just a string name of the colormap, like "gray" or "viridis").

Classes:

ArrayViewerModelEvents #

Bases: SignalGroup

Signal group for ArrayViewerModel.

ChannelMode #

Bases: str, Enum

Channel display mode.

Attributes:

  • GRAYSCALE (str) –

    The array is displayed as a single channel, with a single lookup table applied. In this mode, there is effectively no channel axis: all non-visible dimensions have sliders, and there is a single LUT control (the default_lut).

  • COMPOSITE (str) –

    Display all (or a subset of) channels as a composite image, with a different lookup table applied to each channel. In this mode, the slider for the channel axis is hidden by default, and LUT controls for each channel are shown.

  • COLOR (str) –

    Display a single channel at a time as a color image, with a channel-specific lookup table applied. In this mode, the slider for the channel axis is shown, and the user can select which channel to display. LUT controls are shown for all channels.

  • RGBA (str) –

    The array is displayed as an RGB image, with a single lookup table applied. In this mode, the slider for the channel axis is hidden, and a single LUT control is shown. Only valid when channel axis has length <= 4.

  • RGB (str) –

    Alias for RGBA.

Methods:

is_multichannel #

is_multichannel() -> bool

Return whether this mode displays multiple channels.

If is_multichannel is True, then the channel_axis slider should be hidden.

Source code in src/ndv/models/_array_display_model.py
def is_multichannel(self) -> bool:
    """Return whether this mode displays multiple channels.

    If `is_multichannel` is True, then the `channel_axis` slider should be hidden.
    """
    return self in (ChannelMode.COMPOSITE, ChannelMode.RGBA)

ClimPolicy #

Bases: BaseModel, ABC

ABC for contrast limit policies.

Methods:

  • get_limits

    Return the contrast limits for the given image.

get_limits abstractmethod #

get_limits(image: NDArray) -> tuple[float, float]

Return the contrast limits for the given image.

Source code in src/ndv/models/_lut_model.py
@abstractmethod
def get_limits(self, image: npt.NDArray) -> tuple[float, float]:
    """Return the contrast limits for the given image."""

ClimsManual #

Bases: ClimPolicy

Manually specified contrast limits.

Attributes:

  • min (float) –

    The minimum contrast limit.

  • max (float) –

    The maximum contrast limit.

ClimsMinMax #

Bases: ClimPolicy

Autoscale contrast limits based on the minimum and maximum values in the data.

ClimsPercentile #

Bases: ClimPolicy

Autoscale contrast limits based on percentiles of the data.

Attributes:

  • min_percentile (float) –

    The lower percentile for the contrast limits.

  • max_percentile (float) –

    The upper percentile for the contrast limits.
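
A percentile policy of this kind can be sketched in a few lines of numpy. This is a simplified stand-in for illustration, not the library's implementation:

```python
import numpy as np

def percentile_limits(image, min_percentile=1.0, max_percentile=99.0):
    # clip the contrast window to the requested percentiles of the data
    lo, hi = np.percentile(image, [min_percentile, max_percentile])
    return float(lo), float(hi)
```

Compared to a plain min/max autoscale, percentile limits are robust to a handful of outlier pixels (hot pixels, cosmic rays) that would otherwise wash out the contrast.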

ClimsStdDev #

Bases: ClimPolicy

Automatically set contrast limits based on standard deviations from the mean.

Attributes:

  • n_stdev (float) –

    Number of standard deviations to use.

  • center (Optional[float]) –

    Center value for the standard deviation calculation. If None, the mean is used.
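
A standard-deviation policy of this shape can be sketched with numpy (again, an illustrative stand-in rather than the library's code):

```python
import numpy as np

def stddev_limits(image, n_stdev=2.0, center=None):
    # center defaults to the mean; the window spans n_stdev standard
    # deviations on either side of it
    c = float(np.mean(image)) if center is None else float(center)
    s = float(np.std(image))
    return c - n_stdev * s, c + n_stdev * s
```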

DataWrapper #

DataWrapper(data: ArrayT)

Bases: Generic[ArrayT], ABC

Interface for wrapping different array-like data types.

DataWrapper.create() is a factory method that returns a DataWrapper instance for the given data type. If your datastore type is not supported, you may implement a new DataWrapper subclass to handle your data type. To do this, import and subclass DataWrapper, and (minimally) implement the supports and isel methods. Ensure that your class is imported before the DataWrapper.create method is called, and it will be automatically detected and used to wrap your data.

This base class provides basic support for numpy-like array types. If the data supports __getitem__ and has a shape attribute, it will work. If the data does not support __getitem__, the subclass MUST implement the isel method. If the data does not have a shape attribute, the subclass MUST implement the dims and coords properties.
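
The minimal subclass surface (supports, plus isel where needed) looks roughly like the sketch below. The container type MyArray and its wrapper are hypothetical, and the wrapper is written as a free-standing class rather than a true ndv.DataWrapper subclass so the example runs without ndv installed:

```python
import numpy as np

class MyArray:
    """Hypothetical custom container that wraps a nested list."""

    def __init__(self, nested):
        self._arr = np.asarray(nested)

    @property
    def shape(self):
        return self._arr.shape

    def __getitem__(self, idx):
        return self._arr[idx]

class MyArrayWrapper:  # in real code: class MyArrayWrapper(ndv.DataWrapper)
    PRIORITY = 50  # lower values are checked first by DataWrapper.create()

    def __init__(self, data):
        self._data = data

    @classmethod
    def supports(cls, obj):
        # exceptions raised here are suppressed by create()
        return isinstance(obj, MyArray)

    def isel(self, index):
        # {axis: int | slice} -> numpy array, mirroring the base behavior
        idx = tuple(index.get(k, slice(None)) for k in range(len(self._data.shape)))
        return np.asarray(self._data[idx])
```

In real usage, subclassing DataWrapper and importing the subclass before calling DataWrapper.create() is all that is needed for automatic detection.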

Methods:

  • axis_scales

    Return per-axis scale factors from coordinate spacing.

  • channel_names

    Return channel display names from data metadata.

  • clear_cache

    Clear any cached properties.

  • create

    Create a DataWrapper instance for the given data.

  • guess_channel_axis

    Return the (best guess) axis name for the channel dimension.

  • guess_z_axis

    Return the (best guess) axis name for the z (3rd spatial) dimension.

  • isel

    Return a slice of the data as a numpy array.

  • normalize_axis_key

    Return positive index for axis (which can be +/- int or str label).

  • sizes

    Return the sizes of the dimensions.

  • summary_info

    Return info label with information about the data.

  • supports

    Return True if this wrapper can handle the given object.

Attributes:

Source code in src/ndv/models/_data_wrapper.py
def __init__(self, data: ArrayT) -> None:
    self._data = data
    if not hasattr(self._data, "__getitem__") and "isel" not in type(self).__dict__:
        raise NotImplementedError(
            "DataWrapper subclass MUST implement `isel` method if data does not "
            "support __getitem__."
        )

    has_shape = hasattr(self._data, "shape") and isinstance(self._data.shape, tuple)
    has_methods = "dims" in type(self).__dict__ and "coords" in type(self).__dict__
    if not has_shape and not has_methods:
        raise NotImplementedError(
            "DataWrapper subclass MUST implement `dims` and `coords` properties"
            " if data does not have a `shape` attribute or if the shape is not "
            "a tuple."
        )
    self.dims_changed.connect(self.clear_cache)

axis_map cached property #

axis_map: Mapping[Hashable, int]

Mapping of ALL valid axis keys to normalized, positive integer keys.

coords property #

Return the coordinates for the data.

data property #

data: ArrayT

Return the data being wrapped.

data_changed class-attribute instance-attribute #

data_changed = Signal()

Signal emitted when the data changes.

NOTE: It is up to data wrappers, or even end-users to emit this signal when the data object changes. We do not currently use object proxies to spy on mutation of the underlying data.

dims property #

dims: tuple[Hashable, ...]

Return the dimension labels for the data.

dims_changed class-attribute instance-attribute #

dims_changed = Signal()

Signal to emit when the dimensions of the data change.

NOTE: It is up to data wrappers, or even end-users to emit this signal when the dimensions/shape of the wrapped _data object changes.

dtype property #

dtype: dtype

Return the dtype for the data.

significant_bits property #

significant_bits: int | None

Number of significant bits per sample, if known from metadata.

axis_scales #

axis_scales() -> dict[Hashable, float]

Return per-axis scale factors from coordinate spacing.

Source code in src/ndv/models/_data_wrapper.py
def axis_scales(self) -> dict[Hashable, float]:
    """Return per-axis scale factors from coordinate spacing."""
    scales: dict[Hashable, float] = {}
    for dim in self.dims:
        coord = self.coords.get(dim)
        if coord is None or len(coord) < 2:
            continue
        try:
            values = [float(v) for v in coord]
        except (TypeError, ValueError):
            continue
        if any(not np.isfinite(v) for v in values):
            continue
        diffs = [values[i + 1] - values[i] for i in range(len(values) - 1)]
        if all(abs(d - diffs[0]) < 1e-10 for d in diffs):
            scales[dim] = diffs[0]
    return scales

channel_names #

channel_names(channel_axis: int | None) -> dict[int, str]

Return channel display names from data metadata.

Source code in src/ndv/models/_data_wrapper.py
def channel_names(self, channel_axis: int | None) -> dict[int, str]:
    """Return channel display names from data metadata."""
    if channel_axis is None:
        return {}
    dim = self.dims[channel_axis]
    coords = self.coords.get(dim, None)
    # range coords are not informative, so return empty dict in that case
    # and let resolution logic fallback to generic channel naming
    if coords is None or isinstance(coords, range):
        return {}
    return {
        i: str(v.item() if hasattr(v, "item") else v) for i, v in enumerate(coords)
    }

clear_cache #

clear_cache() -> None

Clear any cached properties.

Source code in src/ndv/models/_data_wrapper.py
def clear_cache(self) -> None:
    """Clear any cached properties."""
    if hasattr(self, "axis_map"):
        del self.axis_map

create classmethod #

create(data: ArrayT) -> DataWrapper[ArrayT]

Create a DataWrapper instance for the given data.

This method will detect all subclasses of DataWrapper and check them in order of their PRIORITY class variable. The first subclass that supports the given data will be used to wrap it.

Tip

This means that you can subclass DataWrapper to handle new data types. Just make sure that your subclass is imported before calling create.

If no subclasses support the data, a NotImplementedError is raised.

If an instance of DataWrapper is passed in, it will be returned as-is.

Source code in src/ndv/models/_data_wrapper.py
@classmethod
def create(cls, data: ArrayT) -> DataWrapper[ArrayT]:
    """Create a DataWrapper instance for the given data.

    This method will detect all subclasses of DataWrapper and check them in order of
    their `PRIORITY` class variable. The first subclass that
    [`supports`][ndv.DataWrapper.supports] the given data will be used to wrap it.

    !!! tip

        This means that you can subclass DataWrapper to handle new data types.
        Just make sure that your subclass is imported before calling `create`.

    If no subclasses support the data, a `NotImplementedError` is raised.

    If an instance of `DataWrapper` is passed in, it will be returned as-is.
    """
    if isinstance(data, DataWrapper):
        return data

    # check subclasses for support
    # This allows users to define their own DataWrapper subclasses which will
    # be automatically detected (assuming they have been imported by this point)
    for subclass in sorted(_recurse_subclasses(cls), key=lambda x: x.PRIORITY):
        try:
            if subclass.supports(data):
                logger.debug("Using %s to wrap %s", subclass.__name__, type(data))
                return subclass(data)
        except Exception as e:
            warnings.warn(
                f"Error checking DataWrapper subclass {subclass.__name__}: {e}",
                RuntimeWarning,
                stacklevel=2,
            )
    raise NotImplementedError(f"Don't know how to wrap type {type(data)}")

guess_channel_axis #

guess_channel_axis() -> Hashable | None

Return the (best guess) axis name for the channel dimension.

Source code in src/ndv/models/_data_wrapper.py
def guess_channel_axis(self) -> Hashable | None:
    """Return the (best guess) axis name for the channel dimension."""
    # for arrays with labeled dimensions,
    # see if any of the dimensions are named "channel"
    sizes = self.sizes()
    if len(sizes) < 3 or min(sizes.values()) > self.MAX_CHANNELS:
        return None

    for dimkey, val in sizes.items():
        if str(dimkey).lower() in self.COMMON_CHANNEL_NAMES:
            if val <= self.MAX_CHANNELS:
                return self.normalize_axis_key(dimkey)

    # otherwise use the smallest dimension as the channel axis
    return min(sizes, key=sizes.get)  # type: ignore [arg-type]

guess_z_axis #

guess_z_axis() -> Hashable | None

Return the (best guess) axis name for the z (3rd spatial) dimension.

Source code in src/ndv/models/_data_wrapper.py
def guess_z_axis(self) -> Hashable | None:
    """Return the (best guess) axis name for the z (3rd spatial) dimension."""
    sizes = self.sizes()
    ch = self.guess_channel_axis()
    for dimkey in sizes:
        if str(dimkey).lower() in self.COMMON_Z_AXIS_NAMES:
            if (normed := self.normalize_axis_key(dimkey)) != ch:
                return normed

    # otherwise return the LAST axis that is neither in the last two dimensions
    # or the channel axis guess
    return next(
        (self.normalize_axis_key(x) for x in reversed(self.dims[:-2]) if x != ch),
        None,
    )

isel #

isel(index: Mapping[int, int | slice]) -> ndarray

Return a slice of the data as a numpy array.

index will look like (e.g.) {0: slice(0, 10), 1: 5}. The default implementation converts the index to a tuple of the same length as the self.dims, populating missing keys with slice(None), and then slices the data array using getitem.

Source code in src/ndv/models/_data_wrapper.py
def isel(self, index: Mapping[int, int | slice]) -> np.ndarray:
    """Return a slice of the data as a numpy array.

    `index` will look like (e.g.) `{0: slice(0, 10), 1: 5}`.
    The default implementation converts the index to a tuple of the same length as
    the self.dims, populating missing keys with `slice(None)`, and then slices the
    data array using __getitem__.
    """
    idx = tuple(index.get(k, slice(None)) for k in range(len(self.dims)))
    # this type ignore is asserted in the __init__ method
    # if the data does not support __getitem__, then the DataWrapper subclass will
    # fail to initialize
    return self._asarray(self._data[idx])  # type: ignore [index]

normalize_axis_key #

normalize_axis_key(axis: Hashable) -> int

Return positive index for axis (which can be +/- int or str label).

Source code in src/ndv/models/_data_wrapper.py
def normalize_axis_key(self, axis: Hashable) -> int:
    """Return positive index for `axis` (which can be +/- int or str label)."""
    try:
        return self.axis_map[axis]
    except KeyError as e:
        ndims = len(self.dims)
        if isinstance(axis, int):
            raise IndexError(
                f"Axis index {axis} out of bounds for data with {ndims} dimensions"
            ) from e
        raise IndexError(f"Axis label {axis} not found in data dimensions") from e

sizes #

sizes() -> Mapping[Hashable, int]

Return the sizes of the dimensions.

Source code in src/ndv/models/_data_wrapper.py
def sizes(self) -> Mapping[Hashable, int]:
    """Return the sizes of the dimensions."""
    return {dim: len(self.coords[dim]) for dim in self.dims}

summary_info #

summary_info() -> str

Return info label with information about the data.

Source code in src/ndv/models/_data_wrapper.py
def summary_info(self) -> str:
    """Return info label with information about the data."""
    package = getattr(self._data, "__module__", "").split(".")[0]
    info = f"{package}.{getattr(type(self._data), '__qualname__', '')}"

    if sizes := self.sizes():
        # if all of the dimension keys are just integers, omit them from size_str
        if all(isinstance(x, int) for x in sizes):
            size_str = repr(tuple(sizes.values()))
        # otherwise, include the keys in the size_str
        else:
            size_str = ", ".join(f"{k}:{v}" for k, v in sizes.items())
            size_str = f"({size_str})"
        info += f" {size_str}"
    if dtype := getattr(self._data, "dtype", ""):
        info += f", {dtype}"
    if nbytes := getattr(self._data, "nbytes", 0):
        info += f", {_human_readable_size(nbytes)}"
    return info

supports abstractmethod classmethod #

supports(obj: Any) -> TypeGuard[Any]

Return True if this wrapper can handle the given object.

Any exceptions raised by this method will be suppressed, so it is safe to directly import necessary dependencies without a try/except block.

Source code in src/ndv/models/_data_wrapper.py
@classmethod
@abstractmethod
def supports(cls, obj: Any) -> TypeGuard[Any]:
    """Return True if this wrapper can handle the given object.

    Any exceptions raised by this method will be suppressed, so it is safe to
    directly import necessary dependencies without a try/except block.
    """

LUTModel #

Bases: NDVModel

Representation of how to display a channel of an array.

Attributes:

  • name (str) –

    Display name for this channel. Empty string means no explicit name.

  • visible (bool) –

    Whether to display this channel. NOTE: This has implications for data retrieval, as we may not want to request channels that are not visible. See ArrayDisplayModel.current_index.

  • cmap (Colormap) –

    cmap.Colormap to use for this channel. This can be expressed as any "colormap-like" object.

  • clims (Union[ClimsManual, ClimsPercentile, ClimsStdDev, ClimsMinMax]) –

    Method for determining the contrast limits for this channel. If a 2-element tuple or list is provided, it is interpreted as a manual contrast limit.

  • bounds (tuple[float | None, float | None]) –

    Optional extrema limiting the range of the contrast limits.

  • gamma (float) –

    Gamma applied to the data before applying the colormap. By default, 1.0.
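
The effect of gamma can be sketched independently of any colormap. This is a simplified illustration of the transform described above, not the library's rendering code:

```python
import numpy as np

def apply_gamma(data, gamma=1.0):
    # normalize to [0, 1] using the data range, then raise to gamma before
    # the colormap lookup; gamma < 1 brightens midtones, gamma > 1 darkens
    lo, hi = float(data.min()), float(data.max())
    if hi <= lo:
        return np.zeros_like(data, dtype=float)
    norm = (data - lo) / (hi - lo)
    return norm ** gamma
```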

NDVModel #

Bases: BaseModel

Base evented model for NDV models.

Uses pydantic.BaseModel and psygnal.SignalGroupDescriptor.

RectangularROIModel #

Bases: NDVModel

Representation of an axis-aligned rectangular Region of Interest (ROI).

Attributes:

  • visible (bool) –

    Whether to display this ROI.

  • bounding_box (tuple[tuple[float, float], tuple[float, float]]) –

    The minimum and maximum (x, y) points of the region in data space (i.e. array indices, not scaled world coordinates). These two points define an axis-aligned bounding box.

ResolvedDisplayState dataclass #

ResolvedDisplayState(
    visible_axes: tuple[int, ...],
    channel_axis: int | None,
    channel_mode: ChannelMode,
    current_index: dict[int, int | slice],
    data_coords: dict[int, tuple],
    hidden_sliders: frozenset[Hashable],
    rgba_channel_count: int,
    summary_info: str,
    visible_scales: tuple[float, ...],
)

Frozen snapshot of resolved display state (ArrayDisplayModel + DataWrapper).

At any given time, the viewer's display state is influenced by both the current ArrayDisplayModel (e.g. which axes are visible, channel mode, current_index), which is mutable and can be updated by the user, and the DataWrapper, which has awareness of the underlying data (e.g. dimension names, coordinates, metadata).

The term "resolution" here refers to the process of taking the user's "intentions", as expressed in the ArrayDisplayModel, and resolving them against the reality of the data, as represented by the DataWrapper. This includes normalizing axis keys to positive integers, injecting values derived from the data (e.g. guessing a channel axis or inferring coordinates), and computing derived state.

The output of that resolution process is this ResolvedDisplayState, which is a deterministic, immutable, hashable snapshot of the display state that can be used for diffing and driving the actual data slicing and visualization logic.

Produced by resolve(). Used for diffing in ArrayViewer._apply_changes().
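
The value of a frozen, hashable snapshot for diffing can be sketched with a simplified stand-in dataclass (the fields below are abbreviated and this is not the real ResolvedDisplayState):

```python
from dataclasses import dataclass, fields, replace
from typing import Optional

@dataclass(frozen=True)
class Snapshot:  # simplified stand-in for ResolvedDisplayState
    visible_axes: tuple
    channel_axis: Optional[int]
    current_index: tuple  # hashable form of the {axis: index} mapping

old = Snapshot(visible_axes=(-2, -1), channel_axis=0, current_index=((1, 5),))
new = replace(old, current_index=((1, 6),))

# frozen + eq make field-by-field diffing trivial, so a controller can
# re-run only the work affected by what actually changed
changed = {f.name for f in fields(Snapshot)
           if getattr(old, f.name) != getattr(new, f.name)}
```

Because instances are immutable and comparable, two resolutions that produce equal snapshots can be treated as "nothing to do", which is the deterministic diffing behavior described above.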

RingBuffer #

Bases: Sequence

Ring buffer structure with a given capacity and element type.

Parameters:

  • max_capacity #

    (int) –

    The maximum capacity of the ring buffer.

  • dtype #

    (DTypeLike, default: float ) –

    Desired type (and shape) of individual buffer elements. This is passed to np.empty, so it can be any dtype-like object. Common scenarios:

    • a fixed dtype (e.g. int, np.uint8, 'u2', np.dtype('f4'))
    • a (fixed_dtype, shape) tuple (e.g. ('uint16', (512, 512)))

  • allow_overwrite #

    (bool, default: True ) –

    If false, throw an IndexError when trying to append to an already full buffer.

  • create_buffer #

    (Callable[[int, DTypeLike], NDArray], default: empty ) –

    A callable that creates the underlying array. May be used to customize the initialization of the array. Defaults to np.empty.

Notes

Vendored from numpy-ringbuffer by Eric Wieser (MIT License), and updated with typing and signals.

Methods:

  • append

    Append a value to the right end of the buffer.

  • appendleft

    Append a value to the left end of the buffer.

  • extend

    Extend the buffer with the given values.

  • extendleft

    Prepend the buffer with the given values.

  • pop

    Pop a value from the right end of the buffer.

  • popleft

    Pop a value from the left end of the buffer.

Attributes:

  • dtype (dtype) –

    Return the dtype of the buffer.

  • is_full (bool) –

    True if there is no more space in the buffer.

  • maxlen (int) –

    Return the maximum capacity of the buffer.

  • shape (tuple[int, ...]) –

    Return the shape of the valid buffer (excluding unused space).
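
The core bookkeeping is two monotonically advancing indices wrapped by modulo arithmetic. A minimal sketch of the overwrite-on-full append behavior, illustrative only and without the signals or dtype handling:

```python
import numpy as np

capacity = 4
arr = np.empty(capacity, dtype=int)
left = right = 0  # left..right delimit the valid elements (mod capacity)

for value in [10, 20, 30, 40, 50, 60]:
    if right - left == capacity:  # buffer full: drop the oldest element
        left += 1
    arr[right % capacity] = value
    right += 1

# valid contents, oldest to newest: the last `capacity` values appended
contents = [int(arr[i % capacity]) for i in range(left, right)]  # [30, 40, 50, 60]
```

With allow_overwrite=False, the real class raises IndexError at the "buffer full" branch instead of advancing the left index.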

Source code in src/ndv/models/_ring_buffer.py
def __init__(
    self,
    max_capacity: int,
    dtype: npt.DTypeLike = float,
    *,
    allow_overwrite: bool = True,
    create_buffer: Callable[[int, npt.DTypeLike], npt.NDArray] = np.empty,
) -> None:
    self._arr = create_buffer(max_capacity, dtype)
    self._left_index = 0
    self._right_index = 0
    self._capacity = max_capacity
    self._allow_overwrite = allow_overwrite

dtype property #

dtype: dtype

Return the dtype of the buffer.

is_full property #

is_full: bool

True if there is no more space in the buffer.

maxlen property #

maxlen: int

Return the maximum capacity of the buffer.

shape property #

shape: tuple[int, ...]

Return the shape of the valid buffer (excluding unused space).

append #

append(value: ArrayLike) -> None

Append a value to the right end of the buffer.

Source code in src/ndv/models/_ring_buffer.py
def append(self, value: npt.ArrayLike) -> None:
    """Append a value to the right end of the buffer."""
    if was_full := self.is_full:
        if not self._allow_overwrite:
            raise IndexError("append to a full RingBuffer with overwrite disabled")
        elif not len(self):
            return
        else:
            self._left_index += 1

    self._arr[self._right_index % self._capacity] = value
    self._right_index += 1
    self._fix_indices()
    if not was_full:
        self.resized.emit(len(self))

appendleft #

appendleft(value: ArrayLike) -> None

Append a value to the left end of the buffer.

Source code in src/ndv/models/_ring_buffer.py
def appendleft(self, value: npt.ArrayLike) -> None:
    """Append a value to the left end of the buffer."""
    if was_full := self.is_full:
        if not self._allow_overwrite:
            raise IndexError("append to a full RingBuffer with overwrite disabled")
        elif not len(self):
            return
        else:
            self._right_index -= 1

    self._left_index -= 1
    self._fix_indices()
    self._arr[self._left_index] = value
    if not was_full:
        self.resized.emit(len(self))

extend #

extend(values: ArrayLike) -> None

Extend the buffer with the given values.

Source code in src/ndv/models/_ring_buffer.py
def extend(self, values: npt.ArrayLike) -> None:
    """Extend the buffer with the given values."""
    values = np.asarray(values)
    lv = len(values)
    if len(self) + lv > self._capacity:
        if not self._allow_overwrite:
            raise IndexError(
                "Extending a RingBuffer such that it would overflow, "
                "with overwrite disabled."
            )
        elif not len(self):
            return
    if lv >= self._capacity:
        # wipe the entire array! - this may not be threadsafe
        self._arr[...] = values[-self._capacity :]
        self._right_index = self._capacity
        self._left_index = 0
        return

    was_full = self.is_full
    ri = self._right_index % self._capacity
    sl1 = np.s_[ri : min(ri + lv, self._capacity)]
    sl2 = np.s_[: max(ri + lv - self._capacity, 0)]
    self._arr[sl1] = values[: sl1.stop - sl1.start]
    self._arr[sl2] = values[sl1.stop - sl1.start :]
    self._right_index += lv

    self._left_index = max(self._left_index, self._right_index - self._capacity)
    self._fix_indices()
    if not was_full:
        self.resized.emit(len(self))

extendleft #

extendleft(values: ArrayLike) -> None

Prepend the buffer with the given values.

Source code in src/ndv/models/_ring_buffer.py
def extendleft(self, values: npt.ArrayLike) -> None:
    """Prepend the buffer with the given values."""
    values = np.asarray(values)
    lv = len(values)
    if len(self) + lv > self._capacity:
        if not self._allow_overwrite:
            raise IndexError(
                "Extending a RingBuffer such that it would overflow, "
                "with overwrite disabled"
            )
        elif not len(self):
            return
    if lv >= self._capacity:
        # wipe the entire array! - this may not be threadsafe
        self._arr[...] = values[: self._capacity]
        self._right_index = self._capacity
        self._left_index = 0
        return

    was_full = self.is_full
    self._left_index -= lv
    self._fix_indices()
    li = self._left_index
    sl1 = np.s_[li : min(li + lv, self._capacity)]
    sl2 = np.s_[: max(li + lv - self._capacity, 0)]
    self._arr[sl1] = values[: sl1.stop - sl1.start]
    self._arr[sl2] = values[sl1.stop - sl1.start :]

    self._right_index = min(self._right_index, self._left_index + self._capacity)
    if not was_full:
        self.resized.emit(len(self))
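
Note the asymmetry between the two methods when the incoming values alone exceed capacity: `extend` keeps the last `capacity` values (`values[-self._capacity:]`), while `extendleft` keeps the first (`values[: self._capacity]`). A plain-Python sketch of that truncation rule:

```python
def truncate_right(values, capacity):
    # extend(): the newest (rightmost) values survive
    return values[-capacity:]

def truncate_left(values, capacity):
    # extendleft(): the leftmost values survive
    return values[:capacity]

truncate_right([1, 2, 3, 4, 5], 3)  # [3, 4, 5]
truncate_left([1, 2, 3, 4, 5], 3)   # [1, 2, 3]
```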

pop #

pop() -> ndarray

Pop a value from the right end of the buffer.

Source code in src/ndv/models/_ring_buffer.py
def pop(self) -> np.ndarray:
    """Pop a value from the right end of the buffer."""
    if len(self) == 0:
        raise IndexError("pop from an empty RingBuffer")
    self._right_index -= 1
    self._fix_indices()
    res = cast("np.ndarray", self._arr[self._right_index % self._capacity])
    self.resized.emit(len(self))
    return res

popleft #

popleft() -> ndarray

Pop a value from the left end of the buffer.

Source code in src/ndv/models/_ring_buffer.py
def popleft(self) -> np.ndarray:
    """Pop a value from the left end of the buffer."""
    if len(self) == 0:
        raise IndexError("pop from an empty RingBuffer")
    res = cast("np.ndarray", self._arr[self._left_index])
    self._left_index += 1
    self._fix_indices()
    self.resized.emit(len(self))
    return res
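
Together, `pop` and `popleft` give the buffer deque-like semantics: `pop` steps the right index back and then reads, while `popleft` reads and then advances the left index. A minimal double-ended buffer mirroring that index bookkeeping (a sketch only; overwrite handling and the index normalization done by `_fix_indices` are omitted):

```python
class MiniRing:
    """Illustrative fixed-capacity buffer; not the ndv implementation."""

    def __init__(self, capacity):
        self._arr = [None] * capacity
        self._cap = capacity
        self._left = 0
        self._right = 0  # one past the last element

    def __len__(self):
        return self._right - self._left

    def append(self, v):
        self._arr[self._right % self._cap] = v
        self._right += 1

    def pop(self):
        if len(self) == 0:
            raise IndexError("pop from an empty buffer")
        self._right -= 1                            # step back first...
        return self._arr[self._right % self._cap]   # ...then read

    def popleft(self):
        if len(self) == 0:
            raise IndexError("pop from an empty buffer")
        v = self._arr[self._left % self._cap]  # read first...
        self._left += 1                        # ...then advance
        return v

rb = MiniRing(4)
for x in (1, 2, 3):
    rb.append(x)
rb.pop()      # 3 (right end)
rb.popleft()  # 1 (left end)
```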

RingBufferWrapper #

RingBufferWrapper(
    max_capacity: int | RingBuffer,
    dtype: DTypeLike | None = None,
    *,
    allow_overwrite: bool = True,
)

Bases: DataWrapper[RingBuffer]

Wrapper for ring buffer objects.

Methods:

  • append

    Append a value to the right end of the buffer.

  • appendleft

    Append a value to the left end of the buffer.

  • axis_scales

    Return per-axis scale factors from coordinate spacing.

  • channel_names

    Return channel display names from data metadata.

  • clear_cache

    Clear any cached properties.

  • create

    Create a DataWrapper instance for the given data.

  • guess_channel_axis

    Return the (best guess) axis name for the channel dimension.

  • guess_z_axis

    Return the (best guess) axis name for the z (3rd spatial) dimension.

  • isel

    Return a slice of the data as a numpy array.

  • normalize_axis_key

    Return positive index for axis (which can be +/- int or str label).

  • pop

    Pop a value from the right end of the buffer.

  • popleft

    Pop a value from the left end of the buffer.

  • sizes

    Return the sizes of the dimensions.

  • summary_info

    Return info label with information about the data.

Attributes:

  • axis_map (Mapping[Hashable, int]) –

    Mapping of ALL valid axis keys to normalized, positive integer keys.

  • coords (Mapping) –

    Return the coordinates for the data.

  • data (ArrayT) –

    Return the data being wrapped.

  • data_changed

    Signal emitted when the data changes.

  • dims (tuple[int, ...]) –

    Return the dimensions of the data.

  • dims_changed

    Signal to emit when the dimensions of the data change.

  • dtype (dtype) –

    Return the dtype for the data.

  • significant_bits (int | None) –

    Number of significant bits per sample, if known from metadata.

Source code in src/ndv/models/_data_wrapper.py
def __init__(
    self,
    max_capacity: int | RingBuffer,
    dtype: npt.DTypeLike | None = None,
    *,
    allow_overwrite: bool = True,
):
    if isinstance(max_capacity, RingBuffer):
        if dtype is not None:  # pragma: no cover
            raise ValueError(
                "Cannot specify dtype when passing an existing RingBuffer."
            )
        self._ring = max_capacity
    else:
        if dtype is None:
            dtype = float
        self._ring = RingBuffer(
            max_capacity=max_capacity, dtype=dtype, allow_overwrite=allow_overwrite
        )
    self._ring.resized.connect(self.dims_changed)
    super().__init__(self._ring)

axis_map cached property #

axis_map: Mapping[Hashable, int]

Mapping of ALL valid axis keys to normalized, positive integer keys.

coords property #

coords: Mapping

Return the coordinates for the data.

data property #

data: ArrayT

Return the data being wrapped.

data_changed class-attribute instance-attribute #

data_changed = Signal()

Signal emitted when the data changes.

NOTE: It is up to data wrappers, or even end-users, to emit this signal when the data object changes. We do not currently use object proxies to spy on mutation of the underlying data.

dims property #

dims: tuple[int, ...]

Return the dimensions of the data.

dims_changed class-attribute instance-attribute #

dims_changed = Signal()

Signal to emit when the dimensions of the data change.

NOTE: It is up to data wrappers, or even end-users, to emit this signal when the dimensions/shape of the wrapped _data object changes.

dtype property #

dtype: dtype

Return the dtype for the data.

significant_bits property #

significant_bits: int | None

Number of significant bits per sample, if known from metadata.

append #

append(value: ArrayLike) -> None

Append a value to the right end of the buffer.

Source code in src/ndv/models/_data_wrapper.py
def append(self, value: npt.ArrayLike) -> None:
    """Append a value to the right end of the buffer."""
    self._ring.append(value)

appendleft #

appendleft(value: ArrayLike) -> None

Append a value to the left end of the buffer.

Source code in src/ndv/models/_data_wrapper.py
def appendleft(self, value: npt.ArrayLike) -> None:
    """Append a value to the left end of the buffer."""
    self._ring.appendleft(value)

axis_scales #

axis_scales() -> dict[Hashable, float]

Return per-axis scale factors from coordinate spacing.

Source code in src/ndv/models/_data_wrapper.py
def axis_scales(self) -> dict[Hashable, float]:
    """Return per-axis scale factors from coordinate spacing."""
    scales: dict[Hashable, float] = {}
    for dim in self.dims:
        coord = self.coords.get(dim)
        if coord is None or len(coord) < 2:
            continue
        try:
            values = [float(v) for v in coord]
        except (TypeError, ValueError):
            continue
        if any(not np.isfinite(v) for v in values):
            continue
        diffs = [values[i + 1] - values[i] for i in range(len(values) - 1)]
        if all(abs(d - diffs[0]) < 1e-10 for d in diffs):
            scales[dim] = diffs[0]
    return scales
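
The core of `axis_scales` is a uniform-spacing test: an axis contributes a scale factor only when its coordinate values are evenly spaced, in which case the constant step is the scale. The check can be sketched on its own (illustrative, not the ndv implementation; the `tol` parameter is an assumed name for the hard-coded `1e-10` tolerance above):

```python
def uniform_spacing(coord, tol=1e-10):
    """Return the constant step of `coord` if evenly spaced, else None."""
    values = [float(v) for v in coord]
    if len(values) < 2:
        return None
    diffs = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    if all(abs(d - diffs[0]) < tol for d in diffs):
        return diffs[0]
    return None

uniform_spacing([0.0, 0.5, 1.0, 1.5])  # 0.5
uniform_spacing([0.0, 0.5, 1.2])       # None (uneven spacing)
```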

channel_names #

channel_names(channel_axis: int | None) -> dict[int, str]

Return channel display names from data metadata.

Source code in src/ndv/models/_data_wrapper.py
def channel_names(self, channel_axis: int | None) -> dict[int, str]:
    """Return channel display names from data metadata."""
    if channel_axis is None:
        return {}
    dim = self.dims[channel_axis]
    coords = self.coords.get(dim, None)
    # range coords are not informative, so return empty dict in that case
    # and let resolution logic fallback to generic channel naming
    if coords is None or isinstance(coords, range):
        return {}
    return {
        i: str(v.item() if hasattr(v, "item") else v) for i, v in enumerate(coords)
    }

clear_cache #

clear_cache() -> None

Clear any cached properties.

Source code in src/ndv/models/_data_wrapper.py
def clear_cache(self) -> None:
    """Clear any cached properties."""
    if hasattr(self, "axis_map"):
        del self.axis_map

create classmethod #

create(data: ArrayT) -> DataWrapper[ArrayT]

Create a DataWrapper instance for the given data.

This method will detect all subclasses of DataWrapper and check them in order of their PRIORITY class variable. The first subclass that supports the given data will be used to wrap it.

Tip

This means that you can subclass DataWrapper to handle new data types. Just make sure that your subclass is imported before calling create.

If no subclasses support the data, a NotImplementedError is raised.

If an instance of DataWrapper is passed in, it will be returned as-is.

Source code in src/ndv/models/_data_wrapper.py
@classmethod
def create(cls, data: ArrayT) -> DataWrapper[ArrayT]:
    """Create a DataWrapper instance for the given data.

    This method will detect all subclasses of DataWrapper and check them in order of
    their `PRIORITY` class variable. The first subclass that
    [`supports`][ndv.DataWrapper.supports] the given data will be used to wrap it.

    !!! tip

        This means that you can subclass DataWrapper to handle new data types.
        Just make sure that your subclass is imported before calling `create`.

    If no subclasses support the data, a `NotImplementedError` is raised.

    If an instance of `DataWrapper` is passed in, it will be returned as-is.
    """
    if isinstance(data, DataWrapper):
        return data

    # check subclasses for support
    # This allows users to define their own DataWrapper subclasses which will
    # be automatically detected (assuming they have been imported by this point)
    for subclass in sorted(_recurse_subclasses(cls), key=lambda x: x.PRIORITY):
        try:
            if subclass.supports(data):
                logger.debug("Using %s to wrap %s", subclass.__name__, type(data))
                return subclass(data)
        except Exception as e:
            warnings.warn(
                f"Error checking DataWrapper subclass {subclass.__name__}: {e}",
                RuntimeWarning,
                stacklevel=2,
            )
    raise NotImplementedError(f"Don't know how to wrap type {type(data)}")
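
The dispatch pattern used here — collect all subclasses, sort by a `PRIORITY` class variable (lower wins), and take the first whose `supports` check passes — can be sketched in plain Python. This is an illustration of the pattern only (the class names are invented, the sketch returns the matching class rather than an instance, and it checks only direct subclasses rather than recursing as ndv's `_recurse_subclasses` does):

```python
class Wrapper:
    PRIORITY = 100

    @classmethod
    def supports(cls, data):
        return False

class ListWrapper(Wrapper):
    PRIORITY = 50  # more specific: checked first

    @classmethod
    def supports(cls, data):
        return isinstance(data, list)

class AnyWrapper(Wrapper):
    PRIORITY = 99  # generic fallback: checked after specific wrappers

    @classmethod
    def supports(cls, data):
        return True

def create(data):
    # Lower PRIORITY wins: first subclass that supports the data is used.
    for sub in sorted(Wrapper.__subclasses__(), key=lambda c: c.PRIORITY):
        if sub.supports(data):
            return sub
    raise NotImplementedError(f"Don't know how to wrap type {type(data)}")

create([1, 2, 3])  # ListWrapper
create("text")     # AnyWrapper
```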

guess_channel_axis #

guess_channel_axis() -> Hashable | None

Return the (best guess) axis name for the channel dimension.

Source code in src/ndv/models/_data_wrapper.py
def guess_channel_axis(self) -> Hashable | None:
    """Return the (best guess) axis name for the channel dimension."""
    # for arrays with labeled dimensions,
    # see if any of the dimensions are named "channel"
    sizes = self.sizes()
    if len(sizes) < 3 or min(sizes.values()) > self.MAX_CHANNELS:
        return None

    for dimkey, val in sizes.items():
        if str(dimkey).lower() in self.COMMON_CHANNEL_NAMES:
            if val <= self.MAX_CHANNELS:
                return self.normalize_axis_key(dimkey)

    # otherwise use the smallest dimension as the channel axis
    return min(sizes, key=sizes.get)  # type: ignore [arg-type]
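
When no dimension name matches a known channel label, the method falls back to the smallest dimension, on the assumption that channel axes are short. The heuristic's shape, sketched standalone (illustrative only; `max_channels` is an assumed stand-in for the `MAX_CHANNELS` class variable):

```python
def smallest_axis(sizes, max_channels=8):
    """Fallback from guess_channel_axis: fewer than 3 dims, or no dim
    small enough to be channels, means no guess; otherwise pick the
    smallest dimension."""
    if len(sizes) < 3 or min(sizes.values()) > max_channels:
        return None
    return min(sizes, key=sizes.get)

smallest_axis({"T": 10, "C": 3, "Y": 512, "X": 512})  # 'C'
smallest_axis({"Y": 512, "X": 512})                   # None (fewer than 3 dims)
```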

guess_z_axis #

guess_z_axis() -> Hashable | None

Return the (best guess) axis name for the z (3rd spatial) dimension.

Source code in src/ndv/models/_data_wrapper.py
def guess_z_axis(self) -> Hashable | None:
    """Return the (best guess) axis name for the z (3rd spatial) dimension."""
    sizes = self.sizes()
    ch = self.guess_channel_axis()
    for dimkey in sizes:
        if str(dimkey).lower() in self.COMMON_Z_AXIS_NAMES:
            if (normed := self.normalize_axis_key(dimkey)) != ch:
                return normed

    # otherwise return the LAST axis that is neither in the last two dimensions
    # or the channel axis guess
    return next(
        (self.normalize_axis_key(x) for x in reversed(self.dims[:-2]) if x != ch),
        None,
    )

isel #

isel(index: Mapping[int, int | slice]) -> ndarray

Return a slice of the data as a numpy array.

index will look like (e.g.) {0: slice(0, 10), 1: 5}. The default implementation converts the index to a tuple of the same length as self.dims, populating missing keys with slice(None), and then slices the data array using __getitem__.

Source code in src/ndv/models/_data_wrapper.py
def isel(self, index: Mapping[int, int | slice]) -> np.ndarray:
    """Return a slice of the data as a numpy array.

    `index` will look like (e.g.) `{0: slice(0, 10), 1: 5}`.
    The default implementation converts the index to a tuple of the same length as
    the self.dims, populating missing keys with `slice(None)`, and then slices the
    data array using __getitem__.
    """
    idx = tuple(index.get(k, slice(None)) for k in range(len(self.dims)))
    # this type ignore is asserted in the __init__ method
    # if the data does not support __getitem__, then the DataWrapper subclass will
    # fail to initialize
    return self._asarray(self._data[idx])  # type: ignore [index]
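
The key step is expanding the sparse mapping into a full indexing tuple, one entry per dimension. That conversion in isolation (a sketch of the line above, not the ndv implementation):

```python
def build_index(index, ndim):
    """Expand a sparse {axis: int | slice} mapping into a full indexing
    tuple, filling missing axes with slice(None), as isel() does."""
    return tuple(index.get(k, slice(None)) for k in range(ndim))

build_index({0: slice(0, 10), 1: 5}, ndim=4)
# (slice(0, 10), 5, slice(None), slice(None))
```

Applying the resulting tuple via `data[idx]` is ordinary numpy-style indexing, which is why any array type supporting `__getitem__` works with the default `isel`.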

normalize_axis_key #

normalize_axis_key(axis: Hashable) -> int

Return positive index for axis (which can be +/- int or str label).

Source code in src/ndv/models/_data_wrapper.py
def normalize_axis_key(self, axis: Hashable) -> int:
    """Return positive index for `axis` (which can be +/- int or str label)."""
    try:
        return self.axis_map[axis]
    except KeyError as e:
        ndims = len(self.dims)
        if isinstance(axis, int):
            raise IndexError(
                f"Axis index {axis} out of bounds for data with {ndims} dimensions"
            ) from e
        raise IndexError(f"Axis label {axis} not found in data dimensions") from e

pop #

pop() -> ndarray

Pop a value from the right end of the buffer.

Source code in src/ndv/models/_data_wrapper.py
def pop(self) -> np.ndarray:
    """Pop a value from the right end of the buffer."""
    return self._ring.pop()

popleft #

popleft() -> ndarray

Pop a value from the left end of the buffer.

Source code in src/ndv/models/_data_wrapper.py
def popleft(self) -> np.ndarray:
    """Pop a value from the left end of the buffer."""
    return self._ring.popleft()

sizes #

sizes() -> Mapping[Hashable, int]

Return the sizes of the dimensions.

Source code in src/ndv/models/_data_wrapper.py
def sizes(self) -> Mapping[Hashable, int]:
    """Return the sizes of the dimensions."""
    return {dim: len(self.coords[dim]) for dim in self.dims}

summary_info #

summary_info() -> str

Return info label with information about the data.

Source code in src/ndv/models/_data_wrapper.py
def summary_info(self) -> str:
    """Return info label with information about the data."""
    package = getattr(self._data, "__module__", "").split(".")[0]
    info = f"{package}.{getattr(type(self._data), '__qualname__', '')}"

    if sizes := self.sizes():
        # if all of the dimension keys are just integers, omit them from size_str
        if all(isinstance(x, int) for x in sizes):
            size_str = repr(tuple(sizes.values()))
        # otherwise, include the keys in the size_str
        else:
            size_str = ", ".join(f"{k}:{v}" for k, v in sizes.items())
            size_str = f"({size_str})"
        info += f" {size_str}"
    if dtype := getattr(self._data, "dtype", ""):
        info += f", {dtype}"
    if nbytes := getattr(self._data, "nbytes", 0):
        info += f", {_human_readable_size(nbytes)}"
    return info