Utilities

The Minnt framework provides utilities for startup initialization, parameter-initialization overrides, logdir formatting, tensor manipulation, logging, and versioning.

Startup

minnt.startup

startup(
    seed: int | None = None,
    threads: int | None = None,
    *,
    forkserver_instead_of_fork: bool = True,
    allow_tf32: bool = True,
    expandable_segments: bool | None = True
) -> None

Initialize the environment.

  • Set the random seed if given.
  • Set the number of threads if given.
  • Use forkserver instead of fork as the multiprocessing start method unless disallowed.
  • Allow using TF32 for matrix multiplication unless disallowed.
  • Enable expandable segments in the CUDA memory allocator unless disallowed.

Parameters:

  • seed (int | None, default: None ) –

    If not None, set the Python, NumPy, and PyTorch random seeds to this value.

  • threads (int | None, default: None ) –

    If not None or 0, set the number of threads to this value. Otherwise, use as many threads as there are cores.

  • forkserver_instead_of_fork (bool, default: True ) –

    If True, use forkserver instead of fork as the default multiprocessing start method; forkserver becomes the default in Python 3.14.

  • allow_tf32 (bool, default: True ) –

    If False, disable TF32 for matrix multiplication even when available.

  • expandable_segments (bool | None, default: True ) –

    If True, enable expandable segments in the CUDA memory allocator; if False, disable them; if None, do not change the current setting.

Environment variables: The following environment variables can be used to override the function parameters:

  • MINNT_START_METHOD: If set to fork or forkserver, uses the specified method as the multiprocessing start method.
  • MINNT_ALLOW_TF32: If set to 0 or 1, overrides the allow_tf32 parameter.
  • MINNT_EXPANDABLE_SEGMENTS: If set to 0 or 1, overrides the expandable_segments parameter.
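
For example, a training script would typically call startup once, before creating any models or data loaders. The seed and thread count below are illustrative, not defaults:

import minnt

# Seed Python, NumPy, and PyTorch, and limit PyTorch to 8 threads.
minnt.startup(seed=42, threads=8)

The same run can later be switched back to the fork start method without code changes by setting MINNT_START_METHOD=fork in the environment.
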
Source code in minnt/startup_impl.py
def startup(
    seed: int | None = None,
    threads: int | None = None,
    *,
    forkserver_instead_of_fork: bool = True,
    allow_tf32: bool = True,
    expandable_segments: bool | None = True,
) -> None:
    """Initialize the environment.

    - Set the random seed if given.
    - Set the number of threads if given.
    - Use `forkserver` instead of `fork` as the multiprocessing start method unless disallowed.
    - Allow using TF32 for matrix multiplication unless disallowed.
    - Enable expandable segments in the CUDA memory allocator unless disallowed.

    Parameters:
      seed: If not `None`, set the Python, NumPy, and PyTorch random seeds to this value.
      threads: If not `None` or 0, set the number of threads to this value.
        Otherwise, use as many threads as there are cores.
      forkserver_instead_of_fork: If `True`, use `forkserver` instead of `fork` as the
        default multiprocessing start method; `forkserver` becomes the default in Python 3.14.
      allow_tf32: If `False`, disable TF32 for matrix multiplication even when available.
      expandable_segments: If `True`, enable expandable segments in the CUDA memory allocator;
        if `False`, disable them; if `None`, do not change the current setting.

    **Environment variables:** The following environment variables can be used
    to override the function parameters:

    - `MINNT_START_METHOD`: If set to `fork` or `forkserver`, uses the specified method as
      the multiprocessing start method.
    - `MINNT_ALLOW_TF32`: If set to `0` or `1`, overrides the `allow_tf32` parameter.
    - `MINNT_EXPANDABLE_SEGMENTS`: If set to `0` or `1`, overrides the `expandable_segments` parameter.
    """

    # Set random seed if not None.
    if seed is not None:
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    # Set number of threads if > 0; otherwise, use as many threads as cores.
    if threads is not None and threads > 0:
        if torch.get_num_threads() != threads:
            torch.set_num_threads(threads)
        if torch.get_num_interop_threads() != threads:
            torch.set_num_interop_threads(threads)

    # If instructed, use `forkserver` instead of `fork` (which will be the default in Python 3.14).
    if "fork" in torch.multiprocessing.get_all_start_methods():
        if os.environ.get("MINNT_START_METHOD") == "fork":
            if torch.multiprocessing.get_start_method(allow_none=True) != "fork":
                torch.multiprocessing.set_start_method("fork")
        elif forkserver_instead_of_fork or os.environ.get("MINNT_START_METHOD") == "forkserver":
            if torch.multiprocessing.get_start_method(allow_none=True) != "forkserver":
                torch.multiprocessing.set_start_method("forkserver")

    # Allow TF32 for matrix multiplication if available, unless instructed otherwise.
    if os.environ.get("MINNT_ALLOW_TF32") in ["0", "1"]:
        allow_tf32 = os.environ.get("MINNT_ALLOW_TF32") == "1"
    torch.backends.cuda.matmul.allow_tf32 = allow_tf32

    # On NVIDIA GPUs, allow or disallow expandable segments in the CUDA memory allocator if requested.
    if os.environ.get("MINNT_EXPANDABLE_SEGMENTS") in ["0", "1"]:
        expandable_segments = os.environ.get("MINNT_EXPANDABLE_SEGMENTS") == "1"
    if expandable_segments is not None:
        expandable_segments = bool(expandable_segments)
        if f"expandable_segments:{str(not expandable_segments)}" not in os.environ.get("PYTORCH_CUDA_ALLOC_CONF", ""):
            if torch.cuda.is_available() and torch.version.cuda:
                torch.cuda.memory._set_allocator_settings(f"expandable_segments:{str(expandable_segments)}")

minnt.global_keras_initializers

global_keras_initializers(
    parameter_initialization: bool = True,
    batchnorm_momentum_override: float | None = 0.01,
    norm_layer_epsilon_override: float | None = 0.001,
) -> None

Change default PyTorch initializers to Keras defaults.

The following initializers are used:

  • Linear, Conv1d, Conv2d, Conv3d, ConvTranspose1d, ConvTranspose2d, ConvTranspose3d, Bilinear: Xavier uniform for weights, zeros for biases.
  • Embedding, EmbeddingBag: Uniform [-0.05, 0.05] for weights.
  • RNN, RNNCell, LSTM, LSTMCell, GRU, GRUCell: Xavier uniform for input weights, orthogonal for recurrent weights, zeros for biases (with LSTM forget gate bias set to 1).

Furthermore, for batch normalization layers, the default momentum value is changed from 0.1 to the Keras default of 0.01 (or any other value specified).

Finally, for batch normalization, layer normalization, and group normalization layers, the default epsilon value is changed from 1e-5 to the Keras default of 1e-3 (or any other value specified).

Parameters:

  • parameter_initialization (bool, default: True ) –

    If True, override the default PyTorch initializers with Keras defaults.

  • batchnorm_momentum_override (float | None, default: 0.01 ) –

    If not None, override the default value of batch normalization momentum from 0.1 to this value.

  • norm_layer_epsilon_override (float | None, default: 0.001 ) –

    If not None, override the default value of epsilon for batch normalization, layer normalization, and group normalization layers from 1e-5 to this value.
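
The overrides are global, so the function is typically called once at startup, before any modules are constructed. A minimal sketch (the model itself is arbitrary):

import minnt
import torch

minnt.global_keras_initializers()  # newly constructed layers now use Keras-style defaults

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),  # weights: Xavier uniform, biases: zeros
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)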

Source code in minnt/initializers_override.py
def global_keras_initializers(
    parameter_initialization: bool = True,
    batchnorm_momentum_override: float | None = 0.01,
    norm_layer_epsilon_override: float | None = 0.001,
) -> None:
    """Change default PyTorch initializers to Keras defaults.

    The following initializers are used:

    - `Linear`, `Conv1d`, `Conv2d`, `Conv3d`, `ConvTranspose1d`, `ConvTranspose2d`, `ConvTranspose3d`, `Bilinear`:
      Xavier uniform for weights, zeros for biases.
    - `Embedding`, `EmbeddingBag`: Uniform [-0.05, 0.05] for weights.
    - `RNN`, `RNNCell`, `LSTM`, `LSTMCell`, `GRU`, `GRUCell`: Xavier uniform for input weights,
      orthogonal for recurrent weights, zeros for biases (with LSTM forget gate bias set to 1).

    Furthermore, for batch normalization layers, the default momentum value is changed
    from 0.1 to the Keras default of 0.01 (or any other value specified).

    Finally, for batch normalization, layer normalization, and group normalization layers,
    the default epsilon value is changed from 1e-5 to the Keras default of 1e-3
    (or any other value specified).

    Parameters:
      parameter_initialization: If `True`, override the default PyTorch initializers with Keras defaults.
      batchnorm_momentum_override: If not `None`, override the default value of batch normalization
        momentum from 0.1 to this value.
      norm_layer_epsilon_override: If not `None`, override the default value of epsilon
        for batch normalization, layer normalization, and group normalization layers from
        1e-5 to this value.
    """
    if parameter_initialization:
        for class_, reset_parameters_method in KerasParameterInitialization.overrides.items():
            class_.reset_parameters = reset_parameters_method

    if batchnorm_momentum_override is not None:
        for batch_norm_super in KerasNormalizationLayers.batch_norms:
            for batch_norm in [batch_norm_super] + batch_norm_super.__subclasses__():
                KerasNormalizationLayers.override_default_argument_value(
                    batch_norm.__init__, "momentum", batchnorm_momentum_override
                )

    if norm_layer_epsilon_override is not None:
        for norm_layer_super in KerasNormalizationLayers.all_norms:
            for norm_layer in [norm_layer_super] + norm_layer_super.__subclasses__():
                KerasNormalizationLayers.override_default_argument_value(
                    norm_layer.__init__, "eps", norm_layer_epsilon_override
                )

Formatting

minnt.format_logdir

format_logdir(logdir_template: str, **kwargs: Any) -> str

Format the log directory path by filling in placeholders.

The logdir_template is formatted using str.format, where the {key} placeholders are replaced by the corresponding values from kwargs. Importantly, several placeholders are always provided automatically:

  • {config}: A comma-separated list of key=value pairs for sorted key-value items in kwargs. The keys are abbreviated to their first character per segment (with segments separated by hyphens or underscores). The maximum length of the placeholder is limited to 200 characters; if exceeded, the longest entries are truncated with ellipses (...) to fit within the limit.
  • {file}: The base name of the script file (without extension) that called this function; empty string if called from an interactive environment (e.g., Jupyter notebook).
  • {timestamp}: The current date and time in the format YYYYMMDD_HHMMSS.

Path-unsafe characters in the placeholder values are replaced with underscores, and for convenience, several additional variants of each placeholder are supported:

  • {key-}, {key_}: same as {key}, but with a trailing hyphen/underscore if the value is non-empty,
  • {-key}, {_key}: same as {key}, but with a leading hyphen/underscore if the value is non-empty.

Finally, both slashes and backslashes are replaced with the current OS path separator.

Parameters:

  • logdir_template (str) –

    The log directory template with placeholders.

  • **kwargs (Any, default: {} ) –

    The keyword arguments to fill the template.

Returns:

  • str

    The formatted log directory path.

Example
parser = argparse.ArgumentParser()
...
args = parser.parse_args()

logdir = minnt.format_logdir("logs/{file-}{timestamp}{-config}", **vars(args))
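
With hypothetical arguments batch_size=32 and learning_rate=0.001 parsed in a script named train.py, the template above would expand to something like the following (the timestamp is of course illustrative):

logs/train-20240101_123456-bs=32,lr=0.001
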
Source code in minnt/format_logdir_impl.py
def format_logdir(logdir_template: str, **kwargs: Any) -> str:
    """Format the log directory path by filling in placeholders.

    The `logdir_template` is formatted using `str.format`, where the `{key}` placeholders
    are replaced by the corresponding values from `kwargs`. Importantly, several
    placeholders are always provided automatically:

    - `{config}`: A comma-separated list of `key=value` pairs for sorted key-value items in `kwargs`.
        The keys are abbreviated to their first character per segment (with segments separated by
        hyphens or underscores). The maximum length of the placeholder is limited to 200 characters;
        if exceeded, the longest entries are truncated with ellipses (`...`) to fit within the limit.
    - `{file}`: The base name of the script file (without extension) that called this function;
        empty string if called from an interactive environment (e.g., Jupyter notebook).
    - `{timestamp}`: The current date and time in the format `YYYYMMDD_HHMMSS`.

    Path-unsafe characters in the placeholder values are replaced with underscores, and for convenience,
    several additional variants of each placeholder are supported:

    - `{key-}`, `{key_}`: same as `{key}`, but with a trailing hyphen/underscore if the value is non-empty,
    - `{-key}`, `{_key}`: same as `{key}`, but with a leading hyphen/underscore if the value is non-empty.

    Finally, both slashes and backslashes are replaced with the current OS path separator.

    Parameters:
      logdir_template: The log directory template with placeholders.
      **kwargs: The keyword arguments to fill the template.

    Returns:
      The formatted log directory path.

    Example:
      ```python
      parser = argparse.ArgumentParser()
      ...
      args = parser.parse_args()

      logdir = minnt.format_logdir("logs/{file-}{timestamp}{-config}", **vars(args))
      ```
    """
    # Create {config} placeholder.
    items = [(re.sub("(.)[^-_]*[-_]?", r"\1", str(k)), str(v)) for k, v in sorted(kwargs.items())]
    if sum(len(k) + 1 + min(len(v), 5) + 1 for k, v in items) - 1 > 200:
        raise ValueError("Signature is too long to fit even with maximum truncation.")

    limit = max(len(v) for k, v in items)
    while sum(len(k) + 1 + min(len(v), limit) + 1 for k, v in items) - 1 > 200:  # guaranteed False when limit == 5
        limit -= 1
    items = [(k, v if len(v) <= limit else v[:limit // 2 - 1] + "..." + v[-limit // 2 + 2:]) for k, v in items]
    kwargs["config"] = ",".join(f"{k}={v}" for k, v in items)

    # Create {file} placeholder.
    current_frame = inspect.currentframe()
    caller_file = current_frame.f_back.f_globals.get("__file__") if current_frame and current_frame.f_back else None
    # In interactive environments (e.g., Jupyter), `__file__` is absent, so fall back to an empty string.
    kwargs["file"] = os.path.splitext(os.path.basename(caller_file))[0] if caller_file else ""

    # Create {timestamp} placeholder.
    kwargs["timestamp"] = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

    # Sanitize placeholder values and create variants.
    for key in list(kwargs.keys()):
        value = sanitize_path(str(kwargs[key]))
        kwargs[key] = value
        for separator in ["-", "_"]:
            kwargs[f"{key}{separator}"] = value and f"{value}{separator}"
            kwargs[f"{separator}{key}"] = value and f"{separator}{value}"

    return fill_and_standardize_path(logdir_template, **kwargs)

Tensor Manipulation

minnt.tensors_to_device

tensors_to_device(x: TensorOrTensors, device: device) -> TensorOrTensors

Asynchronously move the input tensor or the input tensor structure to the given device.

Parameters:

  • x (TensorOrTensors) –

    The input tensor or tensor structure to move to the device. Tensor structures can be tuples, lists, or dictionaries containing other tensor structures and non-tensor values, or completely custom data structures; all tensors in tuples, lists, and dictionary values are moved.

  • device (device) –

    The device to move the tensors to.

Returns:

  • TensorOrTensors

    The input tensor or tensor structure with all tensors moved to the given device.
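
A minimal sketch of the typical use, moving one nested batch to the GPU (the batch structure is arbitrary):

import torch
import minnt

batch = {"images": torch.rand(32, 3, 224, 224), "labels": torch.arange(32), "ids": ["a", "b"]}
batch = minnt.tensors_to_device(batch, torch.device("cuda"))
# Both tensors were moved asynchronously; the non-tensor "ids" value is returned unchanged.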

Source code in minnt/trainable_module.py
def tensors_to_device(x: TensorOrTensors, device: torch.device) -> TensorOrTensors:
    """Asynchronously move the input tensor or the input tensor structure to the given device.

    Parameters:
      x: The input tensor or tensor structure to move to the device. Tensor structures
        can be tuples, lists, or dictionaries containing other tensor structures and non-tensor
        values, or completely custom data structures; all tensors in tuples, lists, and
        dictionary values are moved.
      device: The device to move the tensors to.

    Returns:
      The input tensor or tensor structure with all tensors moved to the given device.
    """
    if isinstance(x, (torch.Tensor, torch.nn.utils.rnn.PackedSequence)):
        return x.to(device, non_blocking=True)
    elif isinstance(x, tuple):
        return tuple(tensors_to_device(a, device) for a in x)
    elif isinstance(x, list):
        return [tensors_to_device(a, device) for a in x]
    elif isinstance(x, dict):
        return {k: tensors_to_device(v, device) for k, v in x.items()}
    return x

Logging

minnt.ProgressLogger

Bases: tqdm

A slim wrapper around tqdm.tqdm for showing a progress bar, optionally with logs.
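
A minimal sketch of wrapping a training loop (the data loader and metric function are placeholders):

import minnt

for batch in minnt.ProgressLogger(train_loader, "Epoch 1/10", logs_fn=lambda: metrics.compute()):
    ...  # training step; the progress bar refreshes with the metrics returned by logs_fn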

Source code in minnt/progress_logger.py
class ProgressLogger(tqdm.tqdm):
    """A slim wrapper around `tqdm.tqdm` for showing a progress bar, optionally with logs."""

    monitor_interval = 0  # Disable internal monitoring thread.

    _report_only_first = int(os.environ.get("MINNT_PROGRESS_FIRST", -1))  # Optional global limit to first N reports.

    @staticmethod
    def get_console_verbosity(console: int | None) -> int:
        if console is None and "MINNT_PROGRESS" in os.environ:
            console = int(os.environ["MINNT_PROGRESS"])
        elif console is None and ("MINNT_PROGRESS_FIRST" in os.environ or "MINNT_PROGRESS_EACH" in os.environ):
            console = 3 if ProgressLogger._report_only_first != 0 else 1
        elif console is None:
            console = 2
        return console

    def __init__(
        self,
        data: Iterable,
        description: str,
        console: int | None = None,
        logs_fn: Callable[[], Logs] | None = None,
    ) -> None:
        """Create a ProgressLogger instance.

        Parameters:
          data: Any iterable data to wrap, usually a [torch.utils.data.DataLoader][].
          description: A description string to show in front of the progress bar.
          console: Controls the console verbosity: 0 and 1 show no progress bar, 2 shows the
            progress bar only when writing to a console (TTY), and 3 shows a persistent progress bar.
            The default is 2, but can be overridden by the `MINNT_PROGRESS` environment variable.
          logs_fn: An optional function returning the current logs to show alongside the progress bar.
            If given, the logs are fully computed and shown on each refresh.
        """
        console = self.get_console_verbosity(console)

        kwargs = {}
        if "MINNT_PROGRESS_EACH" in os.environ:
            kwargs["miniters"] = int(os.environ["MINNT_PROGRESS_EACH"])
            kwargs["mininterval"] = None

        self._console = console
        self._description = description
        self._logs_fn = logs_fn
        super().__init__(data, unit="batch", leave=False, disable=None if console == 2 else console < 2, **kwargs)

    def refresh(self, nolock=False, lock_args=None):
        if ProgressLogger._report_only_first > 0:
            ProgressLogger._report_only_first -= 1
        elif ProgressLogger._report_only_first == 0:
            return

        description = self._description
        if self._logs_fn is not None:
            description += (description and " ") + BaseLogger.format_metrics(compute_logs(self._logs_fn()))
        self.set_description(description, refresh=False)

        super().refresh(nolock=nolock, lock_args=lock_args)

    @staticmethod
    def log_console(message: str, end: str = "\n", progress_only: bool = False, console: int | None = None) -> None:
        """Write the given message to the console, correctly even if a progress bar is being used.

        Parameters:
          message: The message to write.
          end: The string appended after the message.
          progress_only: If `False` (the default), the message is written to standard output when the current
            console verbosity is at least 1; if `True`, the message is written to standard error only when the
            progress bar is being shown (console verbosity 2 and writing to the console, or console verbosity 3).
          console: Controls the current console verbosity. The default is 2, but can be overridden by the
            `MINNT_PROGRESS` environment variable.
        """
        console = ProgressLogger.get_console_verbosity(console)
        if progress_only and ((console == 2 and sys.stderr.isatty()) or console >= 3):
            tqdm.tqdm.write(message, end=end, file=sys.stderr)
        elif (not progress_only) and console >= 1:
            tqdm.tqdm.write(message, end=end, file=sys.stdout)

__init__

__init__(
    data: Iterable,
    description: str,
    console: int | None = None,
    logs_fn: Callable[[], Logs] | None = None,
) -> None

Create a ProgressLogger instance.

Parameters:

  • data (Iterable) –

    Any iterable data to wrap, usually a torch.utils.data.DataLoader.

  • description (str) –

    A description string to show in front of the progress bar.

  • console (int | None, default: None ) –

    Controls the console verbosity: 0 and 1 show no progress bar, 2 shows the progress bar only when writing to a console (TTY), and 3 shows a persistent progress bar. The default is 2, but can be overridden by the MINNT_PROGRESS environment variable.

  • logs_fn (Callable[[], Logs] | None, default: None ) –

    An optional function returning the current logs to show alongside the progress bar. If given, the logs are fully computed and shown on each refresh.

Source code in minnt/progress_logger.py
def __init__(
    self,
    data: Iterable,
    description: str,
    console: int | None = None,
    logs_fn: Callable[[], Logs] | None = None,
) -> None:
    """Create a ProgressLogger instance.

    Parameters:
      data: Any iterable data to wrap, usually a [torch.utils.data.DataLoader][].
      description: A description string to show in front of the progress bar.
      console: Controls the console verbosity: 0 and 1 show no progress bar, 2 shows the
        progress bar only when writing to a console (TTY), and 3 shows a persistent progress bar.
        The default is 2, but can be overridden by the `MINNT_PROGRESS` environment variable.
      logs_fn: An optional function returning the current logs to show alongside the progress bar.
        If given, the logs are fully computed and shown on each refresh.
    """
    console = self.get_console_verbosity(console)

    kwargs = {}
    if "MINNT_PROGRESS_EACH" in os.environ:
        kwargs["miniters"] = int(os.environ["MINNT_PROGRESS_EACH"])
        kwargs["mininterval"] = None

    self._console = console
    self._description = description
    self._logs_fn = logs_fn
    super().__init__(data, unit="batch", leave=False, disable=None if console == 2 else console < 2, **kwargs)

log_console staticmethod

log_console(
    message: str,
    end: str = "\n",
    progress_only: bool = False,
    console: int | None = None,
) -> None

Write the given message to the console, correctly even if a progress bar is being used.

Parameters:

  • message (str) –

    The message to write.

  • end (str, default: '\n' ) –

    The string appended after the message.

  • progress_only (bool, default: False ) –

    If False (the default), the message is written to standard output when the current console verbosity is at least 1; if True, the message is written to standard error only when the progress bar is being shown (console verbosity 2 and writing to the console, or console verbosity 3).

  • console (int | None, default: None ) –

    Controls the current console verbosity. The default is 2, but can be overridden by the MINNT_PROGRESS environment variable.
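
For example, to write messages without corrupting an active progress bar (the paths and messages are illustrative):

minnt.ProgressLogger.log_console("Saved checkpoint to logs/model.pt")     # stdout, shown at verbosity >= 1
minnt.ProgressLogger.log_console("Evaluating...", progress_only=True)     # stderr, only while the bar is shown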

Source code in minnt/progress_logger.py
@staticmethod
def log_console(message: str, end: str = "\n", progress_only: bool = False, console: int | None = None) -> None:
    """Write the given message to the console, correctly even if a progress bar is being used.

    Parameters:
      message: The message to write.
      end: The string appended after the message.
      progress_only: If `False` (the default), the message is written to standard output when the current
        console verbosity is at least 1; if `True`, the message is written to standard error only when the
        progress bar is being shown (console verbosity 2 and writing to the console, or console verbosity 3).
      console: Controls the current console verbosity. The default is 2, but can be overridden by the
        `MINNT_PROGRESS` environment variable.
    """
    console = ProgressLogger.get_console_verbosity(console)
    if progress_only and ((console == 2 and sys.stderr.isatty()) or console >= 3):
        tqdm.tqdm.write(message, end=end, file=sys.stderr)
    elif (not progress_only) and console >= 1:
        tqdm.tqdm.write(message, end=end, file=sys.stdout)

Versioning

minnt.__version__ module-attribute

__version__ = '1.0.1'

The current version of the Minnt package, formatted according to Semantic Versioning.

The version string is in the format major.minor.patch[-prerelease], where the prerelease part is optional and empty for stable releases. For example, 1.0.1 is a stable release, while a hypothetical 1.1.0-rc1 would be a pre-release.

minnt.require_version

require_version(required_version: str) -> None

Verify the installed version is at least required_version, and set API compatibility for that version.

This function has two purposes: to ensure that the installed version of Minnt meets the minimum required version, and to set the API compatibility level for the Minnt package.

The goal of API compatibility is to ensure that the API in newer versions of Minnt has the same intended behavior as in the required_version.

Example

If a package required Minnt version 1.3, and version 1.4 introduced, for example, a new default override in minnt.global_keras_initializers, then with minnt.require_version("1.3") the package would still get the old behavior of version 1.3, even when running with Minnt 1.4 or newer.

Warning

The API compatibility does not guarantee completely identical behavior between versions; for example, bugs may be fixed in newer versions, changing the original behavior. That is why we talk about intended behavior.

Parameters:

  • required_version (str) –

    The minimum required version, in the format major.minor.patch, and the required API compatibility.
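
A typical use is a single call when the program starts, before any other Minnt functionality is used:

import minnt

minnt.require_version("1.0")  # fail fast on older installations and pin the 1.0 API behavior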

Source code in minnt/version.py
def require_version(required_version: str) -> None:
    """Verify the installed version is at least `required_version`, and set API compatibility for that version.

    This function has two purposes: to ensure that the installed version of Minnt meets the minimum
    required version, and to set the **API compatibility level** for the Minnt package.

    The goal of API compatibility is to ensure that the API in newer versions of Minnt has the same
    **intended behavior** as in the `required_version`.

    Example:
      If a package required Minnt version `1.3`, and version `1.4` introduced, for example, a new default
      override in [minnt.global_keras_initializers][], then with `minnt.require_version("1.3")` the package
      would still get the old behavior of version `1.3`, even when running with Minnt `1.4` or newer.

    Warning:
      The API compatibility does not guarantee completely identical behavior between versions; for example,
      bugs may be fixed in newer versions, changing the original behavior. That is why we talk about
      **intended behavior**.

    Parameters:
      required_version: The minimum required version, in the format _major.minor.patch_,
        and the required API compatibility.
    """
    required = required_version.split(".")
    assert len(required) <= 3, "Expected at most 3 version components"
    assert all(part.isdecimal() for part in required), "Expected only numeric version components"

    required = list(map(int, required))
    current = list(map(int, __version__.split("-", maxsplit=1)[0].split(".")))

    assert current[:len(required)] >= required, \
        f"minnt>={required_version} is required, but only {__version__} was found."