h5py-3.13.0/LICENSE

Copyright (c) 2008 Andrew Collette and contributors
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
h5py-3.13.0/MANIFEST.in

include AUTHORS
include api_gen.py
include dev-install.sh
include LICENSE
include MANIFEST.in
include pylintrc
include README.rst
include setup_build.py
include setup_configure.py
include tox.ini
include pytest.ini
include *.toml
recursive-include docs *
prune docs/_build
recursive-include docs_api *
prune docs_api/_build
recursive-include examples *.py
recursive-include h5py *.h *.pyx *.pxd *.pxi *.py *.txt
exclude h5py/config.pxi
exclude h5py/defs.pxd
exclude h5py/defs.pyx
exclude h5py/_hdf5.pxd
recursive-include h5py/tests *.h5
recursive-include licenses *
recursive-include lzf *
recursive-exclude * .DS_Store
exclude ci other .github
recursive-exclude ci *
recursive-exclude other *
recursive-exclude .github *
exclude *.yml
exclude *.yaml
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
exclude .coveragerc
exclude .coverage_dir
recursive-exclude .coverage_dir *
exclude .mailmap
exclude github_deploy_key_h5py_h5py.enc
exclude rever.xsh
prune news
include asv.conf.json
recursive-include benchmarks *.py

h5py-3.13.0/PKG-INFO

Metadata-Version: 2.1
Name: h5py
Version: 3.13.0
Summary: Read and write HDF5 files from Python
Author-email: Andrew Collette
Maintainer-email: Thomas Kluyver, Thomas A Caswell
License: BSD-3-Clause
Project-URL: Homepage, https://www.h5py.org/
Project-URL: Source, https://github.com/h5py/h5py
Project-URL: Documentation, https://docs.h5py.org/en/stable/
Project-URL: Release notes, https://docs.h5py.org/en/stable/whatsnew/index.html
Project-URL: Discussion forum, https://forum.hdfgroup.org/c/hdf5/h5py
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: Unix
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Programming Language :: Cython
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Database
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: numpy>=1.19.3

The h5py package provides both a high- and low-level interface to the HDF5
library from Python. The low-level interface is intended to be a complete
wrapping of the HDF5 API, while the high-level component supports access to
HDF5 files, datasets and groups using established Python and NumPy concepts.

A strong emphasis on automatic conversion between Python (NumPy) datatypes
and data structures and their HDF5 equivalents vastly simplifies the process
of reading and writing data from Python.

Wheels are provided for several popular platforms, with an included copy of
the HDF5 library (usually the latest version when h5py is released).
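As a quick illustration of the high-level interface described above (a
minimal sketch -- the file and dataset names are arbitrary examples, not
anything shipped with the package)::

    import numpy as np
    import h5py

    # Write a NumPy array; its dtype and shape map automatically to HDF5.
    with h5py.File("example.h5", "w") as f:
        f.create_dataset("measurements", data=np.arange(100.0).reshape(10, 10))

    # Read part of it back with the usual NumPy-style slicing.
    with h5py.File("example.h5", "r") as f:
        block = f["measurements"][2:5, :3]
        print(block.shape)   # (3, 3)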
You can also `build h5py from source `_ with any HDF5 stable release from
version 1.10.4 onwards, although naturally new HDF5 versions released after
this version of h5py may not work. Odd-numbered minor versions of HDF5
(e.g. 1.13) are experimental, and may not be supported.

h5py-3.13.0/README.rst

.. image:: https://travis-ci.org/h5py/h5py.png
   :target: https://travis-ci.org/h5py/h5py
.. image:: https://ci.appveyor.com/api/projects/status/h3iajp4d1myotprc/branch/master?svg=true
   :target: https://ci.appveyor.com/project/h5py/h5py/branch/master
.. image:: https://dev.azure.com/h5pyappveyor/h5py/_apis/build/status/h5py.h5py?branchName=master
   :target: https://dev.azure.com/h5pyappveyor/h5py/_build/latest?definitionId=1&branchName=master

HDF5 for Python
===============

`h5py` is a thin, pythonic wrapper around `HDF5 `_, which runs on
Python 3 (3.9+).

Websites
--------

* Main website: https://www.h5py.org
* Source code: https://github.com/h5py/h5py
* Discussion forum: https://forum.hdfgroup.org/c/hdf5/h5py

Installation
------------

Pre-built `h5py` can either be installed via your Python Distribution
(e.g. `Continuum Anaconda`_, `Enthought Canopy`_) or from `PyPI`_ via `pip`_.
`h5py` is also distributed in many Linux Distributions (e.g. Ubuntu, Fedora),
and in the macOS package managers `Homebrew `_, `Macports `_, or `Fink `_.

More detailed installation instructions, including how to install `h5py` with
MPI support, can be found at: https://docs.h5py.org/en/latest/build.html.

Reporting bugs
--------------

Open a bug at https://github.com/h5py/h5py/issues. For general questions,
ask on the HDF forum (https://forum.hdfgroup.org/c/hdf5/h5py).

.. _`Continuum Anaconda`: http://continuum.io/downloads
.. _`Enthought Canopy`: https://www.enthought.com/products/canopy/
.. _`PyPI`: https://pypi.org/project/h5py/
.. _`pip`: https://pip.pypa.io/en/stable/

h5py-3.13.0/api_gen.py

"""
    Generate the lowest-level Cython bindings to HDF5.

    In order to translate HDF5 errors to exceptions, the raw HDF5 API is
    wrapped with Cython "error wrappers".  These are cdef functions with
    the same names and signatures as their HDF5 equivalents, but implemented
    in the h5py.defs extension module.

    The h5py.defs files (defs.pyx and defs.pxd), along with the "real" HDF5
    function definitions (_hdf5.pxd), are auto-generated by this script from
    api_functions.txt.  This file also contains annotations which indicate
    whether a function requires a certain minimum version of HDF5, an
    MPI-aware build of h5py, or special error handling.

    This script is called automatically by the h5py build system when the
    output files are missing, or api_functions.txt has been updated.

    See the Line class in this module for documentation of the format for
    api_functions.txt.
h5py/_hdf5.pxd: Cython "extern" definitions for HDF5 functions h5py/defs.pxd: Cython definitions for error wrappers h5py/defs.pyx: Cython implementations of error wrappers """ import re import os from hashlib import md5 from pathlib import Path from setup_configure import BuildConfig def replace_or_remove(new: Path) -> None: assert new.is_file() assert new.suffix == '.new' old = new.with_suffix('') def get_hash(p: Path) -> str: return md5(p.read_bytes()).hexdigest() if not old.exists() or get_hash(new) != get_hash(old): os.replace(new, old) else: os.remove(new) class Line: """ Represents one line from the api_functions.txt file. Exists to provide the following attributes: nogil: String indicating if we should release the GIL to call this function. Any Python callbacks it could trigger must acquire the GIL (e.g. using 'with gil' in Cython). mpi: Bool indicating if MPI required ros3: Bool indicating if ROS3 required direct_vfd: Bool indicating if DIRECT_VFD required version: None or a minimum-version tuple code: String with function return type fname: String with function name sig: String with raw function signature args: String with sequence of arguments to call function Example: MPI 1.12.2 int foo(char* a, size_t b) .nogil: "" .mpi: True .ros3: False .direct_vfd: False .version: (1, 12, 2) .code: "int" .fname: "foo" .sig: "char* a, size_t b" .args: "a, b" """ PATTERN = re.compile("""(?P(MPI)[ ]+)? (?P(ROS3)[ ]+)? (?P(DIRECT_VFD)[ ]+)? (?P([0-9]+\.[0-9]+\.[0-9]+))? (-(?P([0-9]+\.[0-9]+\.[0-9]+)))? ([ ]+)? (?P(unsigned[ ]+)?[a-zA-Z_]+[a-zA-Z0-9_]*\**)[ ]+ (?P[a-zA-Z_]+[a-zA-Z0-9_]*)[ ]* \((?P[a-zA-Z0-9_,* ]*)\) ([ ]+)? (?P(nogil))? """, re.VERBOSE) SIG_PATTERN = re.compile(""" (?:unsigned[ ]+)? (?:[a-zA-Z_]+[a-zA-Z0-9_]*\**) [ ]+[ *]* (?P[a-zA-Z_]+[a-zA-Z0-9_]*) """, re.VERBOSE) def __init__(self, text): """ Break the line into pieces and populate object attributes. text: A valid function line, with leading/trailing whitespace stripped. """ m = self.PATTERN.match(text) if m is None: raise ValueError("Invalid line encountered: {0}".format(text)) parts = m.groupdict() self.nogil = "nogil" if parts['nogil'] else "" self.mpi = parts['mpi'] is not None self.ros3 = parts['ros3'] is not None self.direct_vfd = parts['direct_vfd'] is not None self.min_version = parts['min_version'] if self.min_version is not None: self.min_version = tuple(int(x) for x in self.min_version.split('.')) self.max_version = parts['max_version'] if self.max_version is not None: self.max_version = tuple(int(x) for x in self.max_version.split('.')) self.code = parts['code'] self.fname = parts['fname'] self.sig = parts['sig'] sig_const_stripped = self.sig.replace('const', '') self.args = self.SIG_PATTERN.findall(sig_const_stripped) if self.args is None: raise ValueError("Invalid function signature: {0}".format(self.sig)) self.args = ", ".join(self.args) # Figure out what test and return value to use with error reporting if '*' in self.code or self.code in ('H5T_conv_t',): self.err_condition = "==NULL" self.err_value = f"<{self.code}>NULL" elif self.code in ('int', 'herr_t', 'htri_t', 'hid_t', 'hssize_t', 'ssize_t') \ or re.match(r'H5[A-Z]+_[a-zA-Z_]+_t', self.code): self.err_condition = "<0" self.err_value = f"<{self.code}>-1" elif self.code in ('unsigned int', 'haddr_t', 'hsize_t', 'size_t'): self.err_condition = "==0" self.err_value = f"<{self.code}>0" else: raise ValueError("Return code <<%s>> unknown" % self.code) raw_preamble = """\ # cython: language_level=3 # # Warning: this file is auto-generated from api_gen.py. 
DO NOT EDIT! # include "config.pxi" from .api_types_hdf5 cimport * from .api_types_ext cimport * """ def_preamble = """\ # cython: language_level=3 # # Warning: this file is auto-generated from api_gen.py. DO NOT EDIT! # include "config.pxi" from .api_types_hdf5 cimport * from .api_types_ext cimport * """ imp_preamble = """\ # cython: language_level=3 # # Warning: this file is auto-generated from api_gen.py. DO NOT EDIT! # include "config.pxi" from .api_types_ext cimport * from .api_types_hdf5 cimport * from . cimport _hdf5 from ._errors cimport set_exception, set_default_error_handler """ class LineProcessor: def __init__(self, config) -> None: self.config = config def run(self): # Function definitions file self.functions = Path('h5py', 'api_functions.txt').open('r') # Create output files path_raw_defs = Path('h5py', '_hdf5.pxd.new') path_cython_defs = Path('h5py', 'defs.pxd.new') path_cython_imp = Path('h5py', 'defs.pyx.new') self.raw_defs = path_raw_defs.open('w') self.cython_defs = path_cython_defs.open('w') self.cython_imp = path_cython_imp.open('w') self.raw_defs.write(raw_preamble) self.cython_defs.write(def_preamble) self.cython_imp.write(imp_preamble) for text in self.functions: # Directive specifying a header file if not text.startswith(' ') and not text.startswith('#') and \ len(text.strip()) > 0: inc = text.split(':')[0] self.raw_defs.write('cdef extern from "%s.h":\n' % inc) continue text = text.strip() # Whitespace or comment line if len(text) == 0 or text[0] == '#': continue # Valid function line self.line = Line(text) self.write_raw_sig() self.write_cython_sig() self.write_cython_imp() self.functions.close() self.cython_imp.close() self.cython_defs.close() self.raw_defs.close() replace_or_remove(path_cython_imp) replace_or_remove(path_cython_defs) replace_or_remove(path_raw_defs) def check_settings(self) -> bool: """ Return True if and only if the code block should be compiled within given settings """ if ( (self.line.mpi and not self.config.mpi) or (self.line.ros3 and not self.config.ros3) or (self.line.direct_vfd and not self.config.direct_vfd) or (self.line.min_version is not None and self.config.hdf5_version < self.line.min_version) or (self.line.max_version is not None and self.config.hdf5_version > self.line.max_version) ): return False else: return True def write_raw_sig(self): """ Write out "cdef extern"-style definition for an HDF5 function """ if not self.check_settings(): return raw_sig = "{0.code} {0.fname}({0.sig}) {0.nogil}\n".format(self.line) raw_sig = "\n".join((" " + x if x.strip() else x) for x in raw_sig.split("\n")) self.raw_defs.write(raw_sig) def write_cython_sig(self): """ Write out Cython signature for wrapper function """ if not self.check_settings(): return if self.line.fname == 'H5Dget_storage_size': # Special case: https://github.com/h5py/h5py/issues/1475 cython_sig = "cdef {0.code} {0.fname}({0.sig}) except? 
{0.err_value}\n".format(self.line) else: cython_sig = "cdef {0.code} {0.fname}({0.sig}) except {0.err_value}\n".format(self.line) self.cython_defs.write(cython_sig) def write_cython_imp(self): """ Write out Cython wrapper implementation """ if not self.check_settings(): return if self.line.nogil: imp = """\ cdef {0.code} {0.fname}({0.sig}) except {0.err_value}: cdef {0.code} r with nogil: set_default_error_handler() r = _hdf5.{0.fname}({0.args}) if r{0.err_condition}: if set_exception(): return {0.err_value} else: raise RuntimeError("Unspecified error in {0.fname} (return value {0.err_condition})") return r """ else: if self.line.fname == 'H5Dget_storage_size': # Special case: https://github.com/h5py/h5py/issues/1475 imp = """\ cdef {0.code} {0.fname}({0.sig}) except? {0.err_value}: cdef {0.code} r set_default_error_handler() r = _hdf5.{0.fname}({0.args}) if r{0.err_condition}: if set_exception(): return {0.err_value} return r """ else: imp = """\ cdef {0.code} {0.fname}({0.sig}) except {0.err_value}: cdef {0.code} r set_default_error_handler() r = _hdf5.{0.fname}({0.args}) if r{0.err_condition}: if set_exception(): return {0.err_value} else: raise RuntimeError("Unspecified error in {0.fname} (return value {0.err_condition})") return r """ imp = imp.format(self.line) self.cython_imp.write(imp) def run(): # Get configuration from environment variables config = BuildConfig.from_env() lp = LineProcessor(config) lp.run() if __name__ == '__main__': run() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/asv.conf.json0000644000175000017500000001531214045746670016010 0ustar00takluyvertakluyver{ // The version of the config file format. Do not change, unless // you know what you are doing. "version": 1, // The name of the project being benchmarked "project": "h5py", // The project's homepage "project_url": "https://www.h5py.org/", // The URL or local path of the source code repository for the // project being benchmarked "repo": ".", // The Python project's subdirectory in your repo. If missing or // the empty string, the project is assumed to be located at the root // of the repository. // "repo_subdir": "", // Customizable commands for building, installing, and // uninstalling the project. See asv.conf.json documentation. // // "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"], // "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"], // "build_command": [ // "python setup.py build", // "PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}" // ], // List of branches to benchmark. If not provided, defaults to "master" // (for git) or "default" (for mercurial). // "branches": ["master"], // for git // "branches": ["default"], // for mercurial // The DVCS being used. If not set, it will be automatically // determined from "repo" by looking at the protocol in the URL // (if remote), or by looking for special directories, such as // ".git" (if local). // "dvcs": "git", // The tool to use to create environments. May be "conda", // "virtualenv" or other value depending on the plugins in use. // If missing or the empty string, the tool will be automatically // determined by looking for tools on the PATH environment // variable. "environment_type": "virtualenv", // timeout in seconds for installing any dependencies in environment // defaults to 10 min //"install_timeout": 600, // the base URL to show a commit for the project. 
// "show_commit_url": "http://github.com/owner/project/commit/", // The Pythons you'd like to test against. If not provided, defaults // to the current version of Python used to run `asv`. // "pythons": ["2.7", "3.6"], // The list of conda channel names to be searched for benchmark // dependency packages in the specified order // "conda_channels": ["conda-forge", "defaults"], // The matrix of dependencies to test. Each key is the name of a // package (in PyPI) and the values are version numbers. An empty // list or empty string indicates to just test against the default // (latest) version. null indicates that the package is to not be // installed. If the package to be tested is only available from // PyPi, and the 'environment_type' is conda, then you can preface // the package name by 'pip+', and the package will be installed via // pip (with all the conda available packages installed first, // followed by the pip installed packages). // // "matrix": { // "numpy": ["1.6", "1.7"], // "six": ["", null], // test with and without six installed // "pip+emcee": [""], // emcee is only available for install with pip. // }, // Combinations of libraries/python versions can be excluded/included // from the set to test. Each entry is a dictionary containing additional // key-value pairs to include/exclude. // // An exclude entry excludes entries where all values match. The // values are regexps that should match the whole string. // // An include entry adds an environment. Only the packages listed // are installed. The 'python' key is required. The exclude rules // do not apply to includes. // // In addition to package names, the following keys are available: // // - python // Python version, as in the *pythons* variable above. // - environment_type // Environment type, as above. // - sys_platform // Platform, as in sys.platform. Possible values for the common // cases: 'linux2', 'win32', 'cygwin', 'darwin'. // // "exclude": [ // {"python": "3.2", "sys_platform": "win32"}, // skip py3.2 on windows // {"environment_type": "conda", "six": null}, // don't run without six on conda // ], // // "include": [ // // additional env for python2.7 // {"python": "2.7", "numpy": "1.8"}, // // additional env if run on windows+conda // {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""}, // ], // The directory (relative to the current directory) that benchmarks are // stored in. If not provided, defaults to "benchmarks" // "benchmark_dir": "benchmarks", // The directory (relative to the current directory) to cache the Python // environments in. If not provided, defaults to "env" "env_dir": ".asv/env", // The directory (relative to the current directory) that raw benchmark // results are stored in. If not provided, defaults to "results". "results_dir": ".asv/results", // The directory (relative to the current directory) that the html tree // should be written to. If not provided, defaults to "html". "html_dir": ".asv/html", // The number of characters to retain in the commit hashes. // "hash_length": 8, // `asv` will cache results of the recent builds in each // environment, making them faster to install next time. This is // the number of builds to keep, per environment. // "build_cache_size": 2, // The commits after which the regression search in `asv publish` // should start looking for regressions. Dictionary whose keys are // regexps matching to benchmark names, and values corresponding to // the commit (exclusive) after which to start looking for // regressions. 
The default is to start from the first commit // with results. If the commit is `null`, regression detection is // skipped for the matching benchmark. // // "regressions_first_commits": { // "some_benchmark": "352cdf", // Consider regressions only after this commit // "another_benchmark": null, // Skip regression detection altogether // }, // The thresholds for relative change in results, after which `asv // publish` starts reporting regressions. Dictionary of the same // form as in ``regressions_first_commits``, with values // indicating the thresholds. If multiple entries match, the // maximum is taken. If no entry matches, the default is 5%. // // "regressions_thresholds": { // "some_benchmark": 0.01, // Threshold of 1% // "another_benchmark": 0.5, // Threshold of 50% // }, } ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.537882 h5py-3.13.0/benchmarks/0000755000175000017500000000000014755127217015511 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/benchmarks/__init__.py0000644000175000017500000000000014045746670017612 0ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0 h5py-3.13.0/benchmarks/benchmark_slicing.py0000644000175000017500000001566214532312724021527 0ustar00takluyvertakluyver#!/usr/bin/env python3 import os import time import numpy from tempfile import TemporaryDirectory import logging logger = logging.getLogger(__name__) import h5py #Needed for multithreading: from queue import Queue from threading import Thread, Event import multiprocessing class Reader(Thread): """Thread executing tasks from a given tasks queue""" def __init__(self, queue_in, queue_out, quit_event): Thread.__init__(self) self._queue_in = queue_in self._queue_out = queue_out self._quit_event = quit_event self.daemon = True self.start() def run(self): while not self._quit_event.is_set(): task = self._queue_in.get() if task: fn, ds, position = task else: logger.debug("Swallow a bitter pill: %s", task) break try: r = fn(ds, position) self._queue_out.put((position, r)) except Exception as e: raise(e) finally: self._queue_in.task_done() class SlicingBenchmark: """ Benchmark for reading slices in the most pathlogical way in a chunked dataset Allows the test """ def __init__(self, ndim=3, size=1024, chunk=64, dtype="float32", precision=16, compression_kwargs=None): """ Defines some parameters for the benchmark, can be tuned later on. :param ndim: work in 3D datasets :param size: Volume size 1024**3 elements :param chunk: size of one chunk, with itemsize = 32bits this makes block size of 1MB by default :param dtype: the type of data to be stored :param precision: to gain a bit in compression, number of trailing bits to be zeroed. 
:param compression_kwargs: a dict with all options for configuring the compression """ self.ndim = ndim self.size = size self.dtype = numpy.dtype(dtype) self.chunk = chunk self.precision = precision self.tmpdir = None self.filename = None self.h5path = "data" self.total_size = self.size ** self.ndim * self.dtype.itemsize self.needed_memory = self.size ** (self.ndim-1) * self.dtype.itemsize * self.chunk if compression_kwargs is None: self.compression = {} else: self.compression = dict(compression_kwargs) def setup(self): self.tmpdir = TemporaryDirectory() self.filename = os.path.join(self.tmpdir.name, "benchmark_slicing.h5") logger.info("Saving data in %s", self.filename) logger.info("Total size: %i^%i volume size: %.3fGB, Needed memory: %.3fGB", self.size, self.ndim, self.total_size/1e9, self.needed_memory/1e9) shape = [self.size] * self.ndim chunks = (self.chunk,) * self.ndim if self.precision and self.dtype.char in "df": if self.dtype.itemsize == 4: mask = numpy.uint32(((1<<32) - (1<<(self.precision)))) elif self.dtype.itemsize == 8: mask = numpy.uint64(((1<<64) - (1<<(self.precision)))) else: logger.warning("Precision reduction: only float32 and float64 are supported") else: self.precision = 0 t0 = time.time() with h5py.File(self.filename, 'w') as h: ds = h.create_dataset(self.h5path, shape, chunks=chunks, **self.compression) for i in range(0, self.size, self.chunk): x, y, z = numpy.ogrid[i:i+self.chunk, :self.size, :self.size] data = (numpy.sin(x/3)*numpy.sin(y/5)*numpy.sin(z/7)).astype(self.dtype) if self.precision: idata = data.view(mask.dtype) idata &= mask # mask out the last XX bits ds[i:i+self.chunk] = data t1 = time.time() dt = t1 - t0 filesize = os.stat(self.filename).st_size logger.info("Compression: %.3f time %.3fs uncompressed data saving speed %.3f MB/s effective write speed %.3f MB/s ", self.total_size/filesize, dt, self.total_size/dt/1e6, filesize/dt/1e6) def teardown(self): self.tmpdir.cleanup() self.filename = None @staticmethod def read_slice(dataset, position): """This reads all hyperplans crossing at the given position: enforces many reads of different chunks, Probably one of the most pathlogical use-case""" assert dataset.ndim == len(position) l = len(position) res = [] noneslice = slice(None) for i, w in enumerate(position): where = [noneslice]*i + [w] + [noneslice]*(l - 1 - i) res.append(dataset[tuple(where)]) return res def time_sequential_reads(self, nb_read=64): "Perform the reading of many orthogonal hyperplanes" where = [[(i*(self.chunk+1+j))%self.size for j in range(self.ndim)] for i in range(nb_read)] with h5py.File(self.filename, "r") as h: ds = h[self.h5path] t0 = time.time() for i in where: self.read_slice(ds, i) t1 = time.time() dt = t1 - t0 logger.info("Time for reading %sx%s slices: %.3fs fps: %.3f "%(self.ndim, nb_read, dt, self.ndim*nb_read/dt) + "Uncompressed data read speed %.3f MB/s"%(self.ndim*nb_read*self.needed_memory/dt/1e6)) return dt def time_threaded_reads(self, nb_read=64, nthreads=multiprocessing.cpu_count()): "Perform the reading of many orthogonal hyperplanes, threaded version" where = [[(i*(self.chunk+1+j))%self.size for j in range(self.ndim)] for i in range(nb_read)] tasks = Queue() results = Queue() quitevent = Event() pool = [Reader(tasks, results, quitevent) for i in range(nthreads)] res = [] with h5py.File(self.filename, "r") as h: ds = h[self.h5path] t0 = time.time() for i in where: tasks.put((self.read_slice, ds, i)) for i in where: a = results.get() res.append(a[0]) results.task_done() tasks.join() results.join() t1 = 
time.time() # destroy the threads in the pool quitevent.set() for i in range(nthreads): tasks.put(None) dt = t1 - t0 logger.info("Time for %s-threaded reading %sx%s slices: %.3fs fps: %.3f "%(nthreads, self.ndim, nb_read, dt, self.ndim*nb_read/dt) + "Uncompressed data read speed %.3f MB/s"%(self.ndim*nb_read*self.needed_memory/dt/1e6)) return dt if __name__ == "__main__": logging.basicConfig(level=logging.INFO) benchmark = SlicingBenchmark() benchmark.setup() benchmark.time_sequential_reads() benchmark.time_threaded_reads() benchmark.teardown() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/benchmarks/benchmarks.py0000644000175000017500000000320514045746670020202 0ustar00takluyvertakluyver# Write the benchmarking functions here. # See "Writing benchmarks" in the asv docs for more information. import os.path as osp import numpy as np from tempfile import TemporaryDirectory import h5py class TimeSuite: """ An example benchmark that times the performance of various kinds of iterating over dictionaries in Python. """ def setup(self): self._td = TemporaryDirectory() path = osp.join(self._td.name, 'test.h5') with h5py.File(path, 'w') as f: f['a'] = np.arange(100000) self.f = h5py.File(path, 'r') def teardown(self): self.f.close() self._td.cleanup() def time_many_small_reads(self): ds = self.f['a'] for i in range(10000): arr = ds[i * 10:(i + 1) * 10] class WritingTimeSuite: """Based on example in GitHub issue 492: https://github.com/h5py/h5py/issues/492 """ def setup(self): self._td = TemporaryDirectory() path = osp.join(self._td.name, 'test.h5') self.f = h5py.File(path, 'w') self.shape = shape = (128, 1024, 512) self.f.create_dataset( 'a', shape=shape, dtype=np.float32, chunks=(1, shape[1], 64) ) def teardown(self): self.f.close() self._td.cleanup() def time_write_index_last_axis(self): ds = self.f['a'] data = np.zeros(self.shape[:2]) for i in range(self.shape[2]): ds[..., i] = data def time_write_slice_last_axis(self): ds = self.f['a'] data = np.zeros(self.shape[:2]) for i in range(self.shape[2]): ds[..., i:i+1] = data[..., np.newaxis] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/dev-install.sh0000755000175000017500000000056714350630273016155 0ustar00takluyvertakluyver# Install h5py in a convenient way for frequent reinstallation as you work on it. # This disables the mechanisms to find and install build dependencies, so you # need to already have those (Cython, pkgconfig, numpy & optionally mpi4py) installed # in the current environment. set -e H5PY_SETUP_REQUIRES=0 python3 setup.py build python3 -m pip install . --no-build-isolation ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.541882 h5py-3.13.0/docs/0000755000175000017500000000000014755127217014324 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1672853193.0 h5py-3.13.0/docs/Makefile0000644000175000017500000001526314355333311015761 0ustar00takluyvertakluyver# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = -W SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. 
Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/h5py.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/h5py.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/h5py" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/h5py" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 
latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." show: @python -m webbrowser -t "file://$(shell pwd)/_build/html/index.html" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0 h5py-3.13.0/docs/build.rst0000644000175000017500000002254014507227212016147 0ustar00takluyvertakluyver.. _install: Installation ============ .. _install_recommends: It is highly recommended that you use a pre-built version of h5py, either from a Python Distribution, an OS-specific package manager, or a pre-built wheel from PyPI. Be aware however that most pre-built versions lack MPI support, and that they are built against a specific version of HDF5. If you require MPI support, or newer HDF5 features, you will need to build from source. After installing h5py, you should run the tests to be sure that everything was installed correctly. This can be done in the python interpreter via:: import h5py h5py.run_tests() .. _prebuilt_install: Pre-built installation (recommended) ----------------------------------------- Pre-build h5py can be installed via many Python Distributions, OS-specific package managers, or via h5py wheels. 
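Whichever route you choose, you can check afterwards which HDF5 version the
installed h5py was compiled against -- a small sketch (the output will vary
by platform and h5py build)::

    >>> import h5py
    >>> h5py.version.hdf5_version   # version of the bundled/linked HDF5 library
    >>> print(h5py.version.info)    # fuller summary: h5py, HDF5, Python, NumPy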
Python Distributions
....................

If you do not already use a Python Distribution, we recommend either
`Anaconda `_/`Miniconda `_ or `Enthought Canopy `_, both of which support
most versions of Microsoft Windows, OSX/MacOS, and a variety of Linux
Distributions. Installation of h5py can be done on the command line via::

    $ conda install h5py

for Anaconda/MiniConda, and via::

    $ enpkg h5py

for Canopy.

Wheels
......

If you have an existing Python installation (e.g. a python.org download, or
one that comes with your OS), then on Windows, MacOS/OSX, and Linux on Intel
computers, pre-built h5py wheels can be installed via pip from PyPI::

    $ pip install h5py

Additionally, for Windows users, `Chris Gohlke provides third-party wheels
which use Intel's MKL `_.

OS-Specific Package Managers
............................

On OSX/MacOS, h5py can be installed via `Homebrew `_, `Macports `_, or
`Fink `_.

The current state of h5py in various Linux Distributions can be seen at
https://pkgs.org/download/python-h5py, and can be installed via the package
manager.

As far as the h5py developers know, none of the Windows package managers
(e.g. `Chocolatey `_, `nuget `_) have h5py included, however they may assist
in installing h5py's requirements when building from source.

.. _source_install:

Source installation
-------------------

To install h5py from source, you need:

* A supported Python version with development headers
* HDF5 1.10.4 or newer with development headers

  * HDF5 versions newer than the h5py version you're using might not work.
  * Odd minor versions of HDF5 (e.g. 1.13) are experimental, and might not
    work. Use a 'maintenance' version like 1.12.x if possible.
  * If you need support for older HDF5 versions, h5py up to version 3.9
    supported HDF5 1.8.4 and above.

* A C compiler

On Unix platforms, you also need ``pkg-config`` unless you explicitly
specify a path for HDF5 as described in :ref:`custom_install`.

There are notes below on installing HDF5, Python and a C compiler on
different platforms.

Building h5py also requires several Python packages, but in most cases pip
will automatically install these in a build environment for you, so you
don't need to deal with them manually. See :ref:`dev_install` for a list.

The actual installation of h5py should be done via::

    $ pip install --no-binary=h5py h5py

or, from a tarball or git :ref:`checkout `::

    $ pip install -v .

.. _dev_install:

Development installation
........................

When modifying h5py, you often want to reinstall it quickly to test your
changes. To benefit from caching and use NumPy & Cython from your existing
Python environment, run::

    $ H5PY_SETUP_REQUIRES=0 python3 setup.py build
    $ python3 -m pip install . --no-build-isolation

For convenience, these commands are also in a script ``dev-install.sh`` in
the h5py git repository.

This skips setting up a build environment, so you should have already
installed Cython, NumPy, pkgconfig (a Python interface to ``pkg-config``)
and mpi4py (if you want MPI integration - see :ref:`build_mpi`). See
``setup.py`` for minimum versions.

This will normally rebuild Cython files automatically when they change, but
sometimes it may be necessary to force a full rebuild. The easiest way to
achieve this is to discard everything but the code committed to git. In the
root of your git checkout, run::

    $ git clean -xfd

Then build h5py again as above.

Source installation on OSX/MacOS
................................

HDF5 and Python are most likely in your package manager (e.g. `Homebrew `_,
`Macports `_, or `Fink `_).
Be sure to install the development headers, as sometimes they are not
included in the main package.

XCode comes with a C compiler (clang), and your package manager will likely
have other C compilers for you to install.

Source installation on Linux/Other Unix
.......................................

HDF5 and Python are most likely in your package manager. A C compiler almost
certainly is too; there is usually some kind of metapackage that installs
the default build tools, e.g. ``build-essential``, which should be
sufficient for our needs.

Make sure that you have the development headers, as they are usually not
installed by default. They can usually be found in ``python-dev`` or similar
and ``libhdf5-dev`` or similar.

Source installation on Windows
..............................

Installing from source on Windows is a much more difficult prospect than
installing from source on other OSs: not only are you likely to need to
compile HDF5 from source, but everything must also be built with the correct
version of Visual Studio. Additional patches to HDF5 are also needed to get
HDF5 and Python to work together. We recommend examining the appveyor build
scripts, and using those to build and install HDF5 and h5py.

Downstream packagers
....................

If you are building h5py for another packaging system - e.g. Linux distros
or packaging aimed at HPC users - you probably want to satisfy build
dependencies from your packaging system. To build without automatically
fetching dependencies, use a command like::

    H5PY_SETUP_REQUIRES=0 pip install . --no-deps --no-build-isolation

Depending on your packaging system, you may need to use the ``--prefix`` or
``--root`` options to control where files get installed.

h5py's Python packaging has build dependencies on the oldest compatible
versions of NumPy and mpi4py. You can build with newer versions of these,
but the resulting h5py binaries will only work with the NumPy & mpi4py
versions they were built with (or newer). Mpi4py is an optional dependency,
only required for :ref:`parallel` features.

You should also look at the build options under :ref:`custom_install`.

.. _custom_install:

Custom installation
-------------------

.. important:: Remember that pip installs wheels by default. To perform a
   custom installation with pip, you should use::

       $ pip install --no-binary=h5py h5py

   or build from a git checkout or downloaded tarball to avoid getting a
   pre-built version of h5py.

You can specify build options for h5py as environment variables when you
build it from source::

    $ HDF5_DIR=/path/to/hdf5 pip install --no-binary=h5py h5py
    $ HDF5_VERSION=X.Y.Z pip install --no-binary=h5py h5py
    $ CC="mpicc" HDF5_MPI="ON" HDF5_DIR=/path/to/parallel-hdf5 pip install --no-binary=h5py h5py

The supported build options are:

- To specify where to find HDF5, use one of these options:

  - ``HDF5_LIBDIR`` and ``HDF5_INCLUDEDIR``: the directory containing the
    compiled HDF5 libraries and the directory containing the C header files,
    respectively.
  - ``HDF5_DIR``: a shortcut for common installations, a directory with
    ``lib`` and ``include`` subdirectories containing compiled libraries and
    C headers.
  - ``HDF5_PKGCONFIG_NAME``: A name to query ``pkg-config`` for.

  If none of these options are specified, h5py will query ``pkg-config`` by
  default for ``hdf5``, or ``hdf5-openmpi`` if building with MPI support.

- ``HDF5_MPI=ON`` to build with MPI integration - see :ref:`build_mpi`.
- ``HDF5_VERSION`` to force a specified HDF5 version.
  In most cases, you don't need to set this; the version number will be
  detected from the HDF5 library.
- ``H5PY_SYSTEM_LZF=1`` to build the bundled LZF compression filter (see
  :ref:`dataset_compression`) against an external LZF library, rather than
  using the bundled LZF C code.

.. _build_mpi:

Building against Parallel HDF5
------------------------------

If you just want to build with ``mpicc``, and don't care about using
Parallel HDF5 features in h5py itself::

    $ export CC=mpicc
    $ pip install --no-binary=h5py h5py

If you want access to the full Parallel HDF5 feature set in h5py
(:ref:`parallel`), you will further have to build in MPI mode. This can be
done by setting the ``HDF5_MPI`` environment variable::

    $ export CC=mpicc
    $ export HDF5_MPI="ON"
    $ pip install --no-binary=h5py h5py

You will need a shared-library build of Parallel HDF5 as well, i.e. built
with ``./configure --enable-shared --enable-parallel``.

On Windows, MS-MPI is usually used, and it does not have an ``mpicc``
wrapper. Instead, you may set the ``H5PY_MSMPI`` environment variable to
``ON`` in order to query the system for MS-MPI's information.

h5py-3.13.0/docs/conf.py

# -*- coding: utf-8 -*-
#
# h5py documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 31 11:23:59 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.intersphinx',
    'sphinx.ext.extlinks',
    'sphinx.ext.mathjax',
]

intersphinx_mapping = {'low': ('https://api.h5py.org', None)}

extlinks = {
    'issue': ('https://github.com/h5py/h5py/issues/%s', 'GH%s'),
    'pr': ('https://github.com/h5py/h5py/pull/%s', 'PR %s'),
}

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'h5py'
copyright = '2014, Andrew Collette and contributors'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = '3.13.0'
# The short X.Y version.
version = '.'.join(release.split('.')[:2])

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'sphinx_rtd_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 
#html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'h5pydoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'h5py.tex', 'h5py Documentation', 'Andrew Collette and contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'h5py', 'h5py Documentation', ['Andrew Collette and contributors'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'h5py', 'h5py Documentation', 'Andrew Collette and contributors', 'h5py', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1674231870.0 h5py-3.13.0/docs/config.rst0000644000175000017500000000231014362540076016313 0ustar00takluyvertakluyverConfiguring h5py ================ Library configuration --------------------- A few library options are available to change the behavior of the library. You can get a reference to the global library configuration object via the function :func:`h5py.get_config`. This object supports the following attributes: **complex_names** Set to a 2-tuple of strings (real, imag) to control how complex numbers are saved. The default is ('r','i'). **bool_names** Booleans are saved as HDF5 enums. Set this to a 2-tuple of strings (false, true) to control the names used in the enum. The default is ("FALSE", "TRUE"). 
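A short sketch of adjusting these attributes at runtime (the replacement
names below are arbitrary examples)::

    import h5py

    cfg = h5py.get_config()
    cfg.complex_names = ('re', 'im')     # field names used when storing complex numbers
    cfg.bool_names = ('false', 'true')   # enum member names used when storing booleans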
**track_order**
    Whether to track dataset/group/attribute creation order. If container
    creation order is tracked, its links and attributes are iterated in
    ascending creation order (consistent with ``dict`` in Python 3.7+);
    otherwise in ascending alphanumeric order. The global configuration
    value can be overridden for a particular container by passing the
    ``track_order`` argument to :class:`h5py.File`,
    :meth:`h5py.Group.create_group` or :meth:`h5py.Group.create_dataset`.
    The default is ``False``.

h5py-3.13.0/docs/contributing.rst

Bug Reports & Contributions
===========================

Contributions and bug reports are welcome from anyone! Some of the best
features in h5py, including thread support, dimension scales, and the
scale-offset filter, came from user code contributions.

Since we use GitHub, the workflow will be familiar to many people. If you
have questions about the process or about the details of implementing your
feature, feel free to ask on GitHub itself, or on the h5py section of the
HDF5 forum: https://forum.hdfgroup.org/c/hdf5/h5py

Posting on this forum requires registering for a free account with the HDF
Group. Anyone can post to this list. Your first message will be approved by
a moderator, so don't worry if there's a brief delay.

This guide is divided into three sections. The first describes how to file
a bug report. The second describes the mechanics of how to submit a
contribution to the h5py project; for example, how to create a pull
request, which branch to base your work on, etc. We assume you are familiar
with Git, the version control system used by h5py. If not, `here's a great
place to start `_. Finally, we describe the various subsystems inside h5py,
and give technical guidance as to how to implement your changes.

How to File a Bug Report
------------------------

Bug reports are always welcome! The issue tracker is at:

https://github.com/h5py/h5py/issues

If you're unsure whether you've found a bug
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Always feel free to ask on the mailing list (h5py at Google Groups).
Discussions there are seen by lots of people and are archived by Google.
Even if the issue you're having turns out not to be a bug in the end, other
people can benefit from a record of the conversation.

By the way, nobody will get mad if you file a bug and it turns out to be
something else. That's just how software development goes.

What to include
~~~~~~~~~~~~~~~

When filing a bug, there are two things you should include. The first is
the output of ``h5py.version.info``::

    >>> import h5py
    >>> print(h5py.version.info)

The second is a detailed explanation of what went wrong. Unless the bug is
really trivial, **include code if you can**, either via GitHub's inline
markup::

    ```
    import h5py
    h5py.explode()    # Destroyed my computer!
    ```

or by uploading a code sample to `Github Gist `_.

How to Get Your Code into h5py
------------------------------

This section describes how to contribute changes to the h5py code base.
Before you start, be sure to read the h5py license and contributor
agreement in "license.txt". You can find this in the source distribution,
or view it online at the main h5py repository at GitHub.

The basic workflow is to clone h5py with git, make your changes in a topic
branch, and then create a pull request at GitHub asking to merge the
changes into the main h5py project.
Here are some tips to getting your pull requests accepted: 1. Let people know you're working on something. This could mean posting a comment in an open issue, or sending an email to the mailing list. There's nothing wrong with just opening a pull request, but it might save you time if you ask for advice first. 2. Keep your changes focused. If you're fixing multiple issues, file multiple pull requests. Try to keep the amount of reformatting clutter small so the maintainers can easily see what you've changed in a diff. 3. Unit tests are mandatory for new features. This doesn't mean hundreds (or even dozens) of tests! Just enough to make sure the feature works as advertised. The maintainers will let you know if more are needed. .. _git_checkout: Clone the h5py repository ~~~~~~~~~~~~~~~~~~~~~~~~~ The best way to do this is by signing in to GitHub and cloning the h5py project directly. You'll end up with a new repository under your account; for example, if your username is ``yourname``, the repository would be at http://github.com/yourname/h5py. Then, clone your new copy of h5py to your local machine:: $ git clone http://github.com/yourname/h5py Create a topic branch for your feature ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Check out a new branch for the bugfix or feature you're writing:: $ git checkout -b newfeature master The exact name of the branch can be anything you want. For bug fixes, one approach is to put the issue number in the branch name. We develop all changes against the *master* branch. If we're making a bugfix release, a bot will backport merged pull requests. Implement the feature! ~~~~~~~~~~~~~~~~~~~~~~ You can implement the feature as a number of small changes, or as one big commit; there's no project policy. Double-check to make sure you've included all your files; run ``git status`` and check the output. .. _contrib-run-tests: Run the tests ~~~~~~~~~~~~~ The easiest way to run the tests is with `tox `_:: pip install tox # Get tox tox -e py312-test-deps # Run tests in one environment tox # Run tests in all possible environments tox -a # List defined environments Write a release note ~~~~~~~~~~~~~~~~~~~~ Changes which could affect people building and using h5py after the next release should have a news entry. You don't need to do this if your changes don't affect usage, e.g. adding tests or correcting comments. In the ``news/`` folder, make a copy of ``TEMPLATE.rst`` named after your branch. Edit the new file, adding a sentence or two about what you've added or fixed. Commit this to git too. News entries are merged into the :doc:`what's new documents ` for each release. They should allow someone to quickly understand what a new feature is, or whether a bug they care about has been fixed. E.g.:: Bug fixes --------- * Fix reading data for region references pointing to an empty selection. The *Building h5py* section is for changes which affect how people build h5py from source. It's not about how we make prebuilt wheels; changes to that which make a visible difference can go in *New features* or *Bug fixes*. Push your changes back and open a pull request ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Push your topic branch back up to your GitHub clone:: $ git push origin newfeature Then, `create a pull request `_ based on your topic branch. Work with the maintainers ~~~~~~~~~~~~~~~~~~~~~~~~~ Your pull request might be accepted right away. 
More commonly, the maintainers will post comments asking you to fix minor things, like add a few tests, clean up the style to be PEP-8 compliant, etc. The pull request page also shows the results of building and testing the modified code on Travis and Appveyor CI and Azure Pipelines. Check back after about 30 minutes to see if the build succeeded, and if not, try to modify your changes to make it work. When making changes after creating your pull request, just add commits to your topic branch and push them to your GitHub repository. Don't try to rebase or open a new pull request! We don't mind having a few extra commits in the history, and it's helpful to keep all the history together in one place. How to Modify h5py ------------------ This section is a little more involved, and provides tips on how to modify h5py. The h5py package is built in layers. Starting from the bottom, they are: 1. The HDF5 C API (provided by libhdf5) 2. Auto-generated Cython wrappers for the C API (``api_gen.py``) 3. Low-level interface, written in Cython, using the wrappers from (2) 4. High-level interface, written in Python, with things like ``h5py.File``. 5. Unit test code Rather than talk about the layers in an abstract way, the parts below are guides to adding specific functionality to various parts of h5py. Most sections span at least two or three of these layers. Adding a function from the HDF5 C API ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is one of the most common contributed changes. The example below shows how one would add the function ``H5Dget_storage_size``, which determines the space on disk used by an HDF5 dataset. This function is already partially wrapped in h5py, so you can see how it works. It's recommended that you follow along, if not by actually adding the feature then by at least opening the various files as we work through the example. First, get ahold of the function signature; the easiest place for this is at the `online HDF5 Reference Manual `_. Then, add the function's C signature to the file ``api_functions.txt``:: hsize_t H5Dget_storage_size(hid_t dset_id) This particular signature uses types (``hsize_t``, ``hid_t``) which are already defined elsewhere. But if the function you're adding needs a struct or enum definition, you can add it using Cython code to the file ``api_types_hdf5.pxd``. The next step is to add a Cython function or method which calls the function you added. The h5py modules follow the naming convention of the C API; functions starting with ``H5D`` are wrapped in ``h5d.pyx``. Opening ``h5d.pyx``, we notice that since this function takes a dataset identifier as the first argument, it belongs as a method on the DatasetID object. We write a wrapper method:: def get_storage_size(self): """ () => LONG storage_size Determine the amount of file space required for a dataset. Note this only counts the space which has actually been allocated; it may even be zero. """ return H5Dget_storage_size(self.id) The first line of the docstring gives the method signature. This is necessary because Cython will use a "generic" signature like ``method(*args, **kwds)`` when the file is compiled. The h5py documentation system will extract the first line and use it as the signature. Next, we decide whether we want to add access to this function to the high-level interface. That means users of the top-level ``h5py.Dataset`` object will be able to see how much space on disk their files use. 
The high-level interface is implemented in the subpackage ``h5py._hl``, and the Dataset object is in module ``dataset.py``. Opening it up, we add a property on the ``Dataset`` object:: @property def storagesize(self): """ Size (in bytes) of this dataset on disk. """ return self.id.get_storage_size() You'll see that the low-level ``DatasetID`` object is available on the high-level ``Dataset`` object as ``obj.id``. This is true of all the high-level objects, like ``File`` and ``Group`` as well. Finally (and don't skip this step), we write **unit tests** for this feature. Since the feature is ultimately exposed at the high-level interface, it's OK to write tests for the ``Dataset.storagesize`` property only. Unit tests for the high-level interface are located in the "tests" subfolder, right near ``dataset.py``. It looks like the right file is ``test_dataset.py``. Unit tests are implemented as methods on custom ``unittest.UnitTest`` subclasses; each new feature should be tested by its own new class. In the ``test_dataset`` module, we see there's already a subclass called ``BaseDataset``, which implements some simple set-up and cleanup methods and provides a ``h5py.File`` object as ``obj.f``. We'll base our test class on that:: class TestStorageSize(BaseDataset): """ Feature: Dataset.storagesize indicates how much space is used. """ def test_empty(self): """ Empty datasets take no space on disk """ dset = self.f.create_dataset("x", (100,100)) self.assertEqual(dset.storagesize, 0) def test_data(self): """ Storage size is correct for non-empty datasets """ dset = self.f.create_dataset("x", (100,), dtype='uint8') dset[...] = 42 self.assertEqual(dset.storagesize, 100) This set of tests would be adequate to get a pull request approved. We don't test every combination under the sun (different ranks, datasets with more than 2**32 elements, datasets with the string "kumquat" in the name...), but the basic, commonly encountered set of conditions. To build and test our changes, we have to do a few things. First of all, run the file ``api_gen.py`` to re-generate the Cython wrappers from ``api_functions.txt``:: $ python api_gen.py Then build the project, which recompiles ``h5d.pyx``:: $ python setup.py build Finally, run the test suite, which includes the two methods we just wrote:: $ python setup.py test If the tests pass, the feature is ready for a pull request. Adding a function only available in certain versions of HDF5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ At the moment, h5py must be compatible with HDF5 back to version 1.10.4. But it's possible to conditionally include functions which only appear in newer versions of HDF5. It's also possible to mark functions which require Parallel HDF5. For example, the function ``H5Fset_mpi_atomicity`` was introduced in HDF5 1.8.9 and requires Parallel HDF5. Specifiers before the signature in ``api_functions.txt`` communicate this:: MPI 1.8.9 herr_t H5Fset_mpi_atomicity(hid_t file_id, hbool_t flag) You can specify either, both or none of "MPI" or a version number in "X.Y.Z" format. In the Cython code, these show up as "preprocessor" defines ``MPI`` and ``HDF5_VERSION``. So the low-level implementation (as a method on ``h5py.h5f.FileID``) looks like this:: IF MPI and HDF5_VERSION >= (1, 8, 9): def set_mpi_atomicity(self, bint atomicity): """ (BOOL atomicity) For MPI-IO driver, set to atomic (True), which guarantees sequential I/O semantics, or non-atomic (False), which improves performance. Default is False. 
Feature requires: 1.8.9 and Parallel HDF5 """ H5Fset_mpi_atomicity(self.id, atomicity) High-level code can check the version of the HDF5 library, or check to see if the method is present on ``FileID`` objects. Testing MPI-only features/code ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Typically to run code under MPI, ``mpirun`` must be used to start the MPI processes. Similarly, tests using MPI features (such as collective IO), must also be run under ``mpirun``. h5py uses pytest markers (specifically ``pytest.mark.mpi`` and other markers from `pytest-mpi `_) to specify which tests require usage of ``mpirun``, and will handle skipping the tests as needed. A simple example of how to do this is:: @pytest.mark.mpi def test_mpi_feature(): import mpi4py # test the MPI feature To run these tests, you'll need to: 1. Have ``tox`` installed (e.g. via ``pip install tox``) 2. Have HDF5 built with MPI as per :ref:`build_mpi` Then running:: $ CC='mpicc' HDF5_MPI=ON tox -e py312-test-deps-mpi4py should run the tests. You may need to pass ``HDF5_DIR`` depending on the location of the HDF5 with MPI support. You can choose which python version to build against by changing py37 (e.g. py36 runs python 3.6, this is a tox feature), and test with the minimum version requirements by using ``mindeps`` rather than ``deps``. If you get an error similar to:: There are not enough slots available in the system to satisfy the 4 slots that were requested by the application: python Either request fewer slots for your application, or make more slots available for use. then you need to reduce the number of MPI processes you are asking MPI to use. If you have already reduced the number of processes requested (or are running the default number which is 2), you will need to look up the documentation for your MPI implementation for handling this error. On OpenMPI (which is usually the default MPI implementation on most systems), running:: $ export OMPI_MCA_rmaps_base_oversubscribe=1 will instruct OpenMPI to allow more MPI processes than available cores on your system. If you need to pass additional environment variables to your MPI implementation, add these variables to the ``passenv`` setting in the ``tox.ini``, and send us a PR with that change noting the MPI implementation. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1674231870.0 h5py-3.13.0/docs/faq.rst0000644000175000017500000002374414362540076015633 0ustar00takluyvertakluyver.. _faq: FAQ === What datatypes are supported? ----------------------------- Below is a complete list of types for which h5py supports reading, writing and creating datasets. Each type is mapped to a native NumPy type. 
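As a quick, hedged illustration of that mapping (assuming ``f`` is an open,
writable file), the NumPy dtype you write is the dtype you get back::

    import numpy as np

    f['counts'] = np.arange(10, dtype='int16')   # stored as a 2-byte HDF5 integer
    f['counts'].dtype                            # dtype('int16') on read-back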
Fully supported types: ========================= ============================================ ================================ Type Precisions Notes ========================= ============================================ ================================ Bitfield 1, 2, 4 or 8 byte, BE/LE Read as unsigned integers Integer 1, 2, 4 or 8 byte, BE/LE, signed/unsigned Float 2, 4, 8, 12, 16 byte, BE/LE Complex 8 or 16 byte, BE/LE Stored as HDF5 struct Compound Arbitrary names and offsets Strings (fixed-length) Any length Strings (variable-length) Any length, ASCII or Unicode Opaque (kind 'V') Any length Boolean NumPy 1-byte bool Stored as HDF5 enum Array Any supported type Enumeration Any NumPy integer type Read/write as integers References Region and object Variable length array Any supported type See :ref:`Special Types ` ========================= ============================================ ================================ Other numpy dtypes, such as datetime64 and timedelta64, can optionally be stored in HDF5 opaque data using :func:`opaque_dtype`. h5py will read this data back with the same dtype, but other software probably will not understand it. Unsupported types: ========================= ============================================ Type Status ========================= ============================================ HDF5 "time" type NumPy "U" strings No HDF5 equivalent NumPy generic "O" Not planned ========================= ============================================ What compression/processing filters are supported? -------------------------------------------------- =================================== =========================================== ============================ Filter Function Availability =================================== =========================================== ============================ DEFLATE/GZIP Standard HDF5 compression All platforms SHUFFLE Increase compression ratio All platforms FLETCHER32 Error detection All platforms Scale-offset Integer/float scaling and truncation All platforms SZIP Fast, patented compression for int/float * UNIX: if supplied with HDF5. * Windows: read-only `LZF `_ Very fast compression, all types Ships with h5py, C source available =================================== =========================================== ============================ What file drivers are available? -------------------------------- A number of different HDF5 "drivers", which provide different modes of access to the filesystem, are accessible in h5py via the high-level interface. The currently supported drivers are: =================================== =========================================== ============================ Driver Purpose Notes =================================== =========================================== ============================ sec2 Standard optimized driver Default on UNIX/Windows stdio Buffered I/O using stdio.h core In-memory file (optionally backed to disk) family Multi-file driver mpio Parallel HDF5 file access =================================== =========================================== ============================ .. _h5py_pytable_cmp: What's the difference between h5py and PyTables? ------------------------------------------------ The two projects have different design goals. PyTables presents a database-like approach to data storage, providing features like indexing and fast "in-kernel" queries on dataset contents. It also has a custom system to represent data types. 
In contrast, h5py is an attempt to map the HDF5 feature set to NumPy as closely as possible. For example, the high-level type system uses NumPy dtype objects exclusively, and method and attribute naming follows Python and NumPy conventions for dictionary and array access (i.e. ".dtype" and ".shape" attributes for datasets, ``group[name]`` indexing syntax for groups, etc). Underneath the "high-level" interface to h5py (i.e. NumPy-array-like objects; what you'll typically be using) is a large Cython layer which calls into C. This "low-level" interface provides access to nearly all of the HDF5 C API. This layer is object-oriented with respect to HDF5 identifiers, supports reference counting, automatic translation between NumPy and HDF5 type objects, translation between the HDF5 error stack and Python exceptions, and more. This greatly simplifies the design of the complicated high-level interface, by relying on the "Pythonicity" of the C API wrapping. There's also a PyTables perspective on this question at the `PyTables FAQ `_. Does h5py support Parallel HDF5? -------------------------------- Starting with version 2.2, h5py supports Parallel HDF5 on UNIX platforms. ``mpi4py`` is required, as well as an MPIO-enabled build of HDF5. Check out :ref:`parallel` for details. Variable-length (VLEN) data --------------------------- Starting with version 2.3, all supported types can be stored in variable-length arrays (previously only variable-length byte and unicode strings were supported) See :ref:`Special Types ` for use details. Please note that since strings in HDF5 are encoded as ASCII or UTF-8, NUL bytes are not allowed in strings. Enumerated types ---------------- HDF5 enumerated types are supported. As NumPy has no native enum type, they are treated on the Python side as integers with a small amount of metadata attached to the dtype. NumPy object types ------------------ Storage of generic objects (NumPy dtype "O") is not implemented and not planned to be implemented, as the design goal for h5py is to expose the HDF5 feature set, not add to it. However, objects picked to the "plain-text" protocol (protocol 0) can be stored in HDF5 as strings. Appending data to a dataset --------------------------- The short response is that h5py is NumPy-like, not database-like. Unlike the HDF5 packet-table interface (and PyTables), there is no concept of appending rows. Rather, you can expand the shape of the dataset to fit your needs. For example, if I have a series of time traces 1024 points long, I can create an extendable dataset to store them: >>> dset = myfile.create_dataset("MyDataset", (10, 1024), maxshape=(None, 1024)) >>> dset.shape (10,1024) The keyword argument "maxshape" tells HDF5 that the first dimension of the dataset can be expanded to any size, while the second dimension is limited to a maximum size of 1024. We create the dataset with room for an initial ensemble of 10 time traces. If we later want to store 10 more time traces, the dataset can be expanded along the first axis: >>> dset.resize(20, axis=0) # or dset.resize((20,1024)) >>> dset.shape (20, 1024) Each axis can be resized up to the maximum values in "maxshape". Things to note: * Unlike NumPy arrays, when you resize a dataset the indices of existing data do not change; each axis grows or shrinks independently * The dataset rank (number of dimensions) is fixed when it is created Unicode ------- As of h5py 2.0.0, Unicode is supported for file names as well as for objects in the file. 
When object names are read, they are returned as Unicode by default. However, HDF5 has no predefined datatype to represent fixed-width UTF-16 or UTF-32 (NumPy format) strings. Therefore, the NumPy 'U' datatype is not supported. Exceptions ---------- h5py tries to map the error codes from hdf5 to the corresponding ``Exception`` class on the Python side. However the HDF5 group does not consider the error codes to be public API so we can not guarantee type stability of the exceptions raised. Development ----------- Building from Git ~~~~~~~~~~~~~~~~~ We moved to GitHub in December of 2012 (http://github.com/h5py/h5py). We use the following conventions for branches and tags: * master: integration branch for the next minor (or major) version * 2.0, 2.1, 2.2, etc: bugfix branches for released versions * tags 2.0.0, 2.0.1, etc: Released bugfix versions To build from a Git checkout: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Clone the project:: $ git clone https://github.com/h5py/h5py.git $ cd h5py (Optional) Choose which branch to build from (e.g. a stable branch):: $ git checkout 2.1 Build the project. If given, /path/to/hdf5 should point to a directory containing a compiled, shared-library build of HDF5 (containing things like "include" and "lib"):: $ python setup.py build [--hdf5=/path/to/hdf5] (Optional) Run the unit tests:: $ python setup.py test Report any failing tests to the mailing list (h5py at googlegroups), or by filing a bug report at GitHub. ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.542882 h5py-3.13.0/docs/high/0000755000175000017500000000000014755127217015243 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1712073444.0 h5py-3.13.0/docs/high/attr.rst0000644000175000017500000001045414603025344016741 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _attributes: Attributes ========== Attributes are a critical part of what makes HDF5 a "self-describing" format. They are small named pieces of data attached directly to :class:`Group` and :class:`Dataset` objects. This is the official way to store metadata in HDF5. Each Group or Dataset has a small proxy object attached to it, at ``.attrs``. Attributes have the following properties: - They may be created from any scalar or NumPy array - Each attribute should be small (generally < 64k) - There is no partial I/O (i.e. slicing); the entire attribute must be read. The ``.attrs`` proxy objects are of class :class:`AttributeManager`, below. This class supports a dictionary-style interface. By default, attributes are iterated in alphanumeric order. However, if group or dataset is created with ``track_order=True``, the attribute insertion order is remembered (tracked) in HDF5 file, and iteration uses that order. The latter is consistent with Python 3.7+ dictionaries. The default ``track_order`` for all new groups and datasets can be specified globally with ``h5.get_config().track_order``. Large attributes ---------------- HDF5 allows attributes to be larger than 64 KiB, but these need to be stored in a different way. As of March 2024, the way HDF5 documentation suggests you configure this does not work. Instead, enable order tracking when creating the object you want to attach attributes to:: grp = f.create_group('g', track_order=True) grp.attrs['large'] = np.arange(1_000_000, dtype=np.uint32) Reference --------- .. class:: AttributeManager(parent) AttributeManager objects are created directly by h5py. 
You should access instances by ``group.attrs`` or ``dataset.attrs``, not by manually creating them. .. method:: __iter__() Get an iterator over attribute names. .. method:: __contains__(name) Determine if attribute `name` is attached to this object. .. method:: __getitem__(name) Retrieve an attribute. .. method:: __setitem__(name, val) Create an attribute, overwriting any existing attribute. The type and shape of the attribute are determined automatically by h5py. .. method:: __delitem__(name) Delete an attribute. KeyError if it doesn't exist. .. method:: keys() Get the names of all attributes attached to this object. :return: set-like object. .. method:: values() Get the values of all attributes attached to this object. :return: collection or bag-like object. .. method:: items() Get ``(name, value)`` tuples for all attributes attached to this object. :return: collection or set-like object. .. method:: get(name, default=None) Retrieve `name`, or `default` if no such attribute exists. .. method:: get_id(name) Get the low-level :class:`AttrID ` for the named attribute. .. method:: create(name, data, shape=None, dtype=None) Create a new attribute, with control over the shape and type. Any existing attribute will be overwritten. :param name: Name of the new attribute :type name: String :param data: Value of the attribute; will be put through ``numpy.array(data)``. :param shape: Shape of the attribute. Overrides ``data.shape`` if both are given, in which case the total number of points must be unchanged. :type shape: Tuple :param dtype: Data type for the attribute. Overrides ``data.dtype`` if both are given. :type dtype: NumPy dtype .. method:: modify(name, value) Change the value of an attribute while preserving its type and shape. Unlike :meth:`AttributeManager.__setitem__`, if the attribute already exists, only its value will be changed. This can be useful for interacting with externally generated files, where the type and shape must not be altered. If the attribute doesn't exist, it will be created with a default shape and type. :param name: Name of attribute to modify. :type name: String :param value: New value. Will be put through ``numpy.array(value)``. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0 h5py-3.13.0/docs/high/dataset.rst0000644000175000017500000005770614755054371017441 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _dataset: Datasets ======== Datasets are very similar to NumPy arrays. They are homogeneous collections of data elements, with an immutable datatype and (hyper)rectangular shape. Unlike NumPy arrays, they support a variety of transparent storage features such as compression, error-detection, and chunked I/O. They are represented in h5py by a thin proxy class which supports familiar NumPy operations like slicing, along with a variety of descriptive attributes: - **shape** attribute - **size** attribute - **ndim** attribute - **dtype** attribute - **nbytes** attribute h5py supports most NumPy dtypes, and uses the same character codes (e.g. ``'f'``, ``'i8'``) and dtype machinery as `Numpy `_. See :ref:`faq` for the list of dtypes h5py supports. .. _dataset_create: Creating datasets ----------------- New datasets are created using either :meth:`Group.create_dataset` or :meth:`Group.require_dataset`. Existing datasets should be retrieved using the group indexing syntax (``dset = group["name"]``). 
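As a quick sketch (dataset name arbitrary, assuming ``f`` is an open, writable
file), the two creation methods and the indexing syntax look like this::

    >>> dset = f.create_dataset("mydata", (100,), dtype='i4')    # always creates
    >>> dset = f.require_dataset("mydata", (100,), dtype='i4')   # opens if compatible, else creates
    >>> dset = f["mydata"]                                        # retrieve an existing dataset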
To initialise a dataset, all you have to do is specify a name, shape, and optionally the data type (defaults to ``'f'``):: >>> dset = f.create_dataset("default", (100,)) >>> dset = f.create_dataset("ints", (100,), dtype='i8') .. note:: This is not the same as creating an :ref:`Empty dataset `. You may also initialize the dataset to an existing NumPy array by providing the `data` parameter:: >>> arr = np.arange(100) >>> dset = f.create_dataset("init", data=arr) Assigning an array into a group works like specifying ``data`` and no other parameters:: >>> f["init"] = arr Keywords ``shape`` and ``dtype`` may be specified along with ``data``; if so, they will override ``data.shape`` and ``data.dtype``. It's required that (1) the total number of points in ``shape`` match the total number of points in ``data.shape``, and that (2) it's possible to cast ``data.dtype`` to the requested ``dtype``. .. _dataset_slicing: Reading & writing data ---------------------- HDF5 datasets reuse the NumPy slicing syntax to read and write to the file. Slice specifications are translated directly to HDF5 "hyperslab" selections, and are a fast and efficient way to access data in the file. The following slicing arguments are recognized: * Indices: anything that can be converted to a Python long * Slices (i.e. ``[:]`` or ``[0:10]``) * Field names, in the case of compound data * At most one ``Ellipsis`` (``...``) object * An empty tuple (``()``) to retrieve all data or `scalar` data Here are a few examples (output omitted). >>> dset = f.create_dataset("MyDataset", (10,10,10), 'f') >>> dset[0,0,0] >>> dset[0,2:10,1:9:3] >>> dset[:,::2,5] >>> dset[0] >>> dset[1,5] >>> dset[0,...] >>> dset[...,6] >>> dset[()] There's more documentation on what parts of numpy's :ref:`fancy indexing ` are available in h5py. For compound data, it is advised to separate field names from the numeric slices:: >>> dset.fields("FieldA")[:10] # Read a single field >>> dset[:10]["FieldA"] # Read all fields, select in NumPy It is also possible to mix indexing and field names (``dset[:10, "FieldA"]``), but this might be removed in a future version of h5py. To retrieve the contents of a `scalar` dataset, you can use the same syntax as in NumPy: ``result = dset[()]``. In other words, index into the dataset using an empty tuple. For simple slicing, broadcasting is supported: >>> dset[0,:,:] = np.arange(10) # Broadcasts to (10,10) Broadcasting is implemented using repeated hyperslab selections, and is safe to use with very large target selections. It is supported for the above "simple" (integer, slice and ellipsis) slicing only. .. warning:: Currently h5py does not support nested compound types, see :issue:`1197` for more information. Multiple indexing ~~~~~~~~~~~~~~~~~ Indexing a dataset once loads a numpy array into memory. If you try to index it twice to write data, you may be surprised that nothing seems to have happened: >>> f = h5py.File('my_hdf5_file.h5', 'w') >>> dset = f.create_dataset("test", (2, 2)) >>> dset[0][1] = 3.0 # No effect! >>> print(dset[0][1]) 0.0 The assignment above only modifies the loaded array. It's equivalent to this: >>> new_array = dset[0] >>> new_array[1] = 3.0 >>> print(new_array[1]) 3.0 >>> print(dset[0][1]) 0.0 To write to the dataset, combine the indexes in a single step: >>> dset[0, 1] = 3.0 >>> print(dset[0, 1]) 3.0 .. _dataset_iter: Length and iteration ~~~~~~~~~~~~~~~~~~~~ As with NumPy arrays, the ``len()`` of a dataset is the length of the first axis, and iterating over a dataset iterates over the first axis. 
However, modifications to the yielded data are not recorded in the file. Resizing a dataset while iterating has undefined results. On 32-bit platforms, ``len(dataset)`` will fail if the first axis is bigger than 2**32. It's recommended to use :meth:`Dataset.len` for large datasets. .. _dataset_chunks: Chunked storage --------------- An HDF5 dataset created with the default settings will be `contiguous`; in other words, laid out on disk in traditional C order. Datasets may also be created using HDF5's `chunked` storage layout. This means the dataset is divided up into regularly-sized pieces which are stored haphazardly on disk, and indexed using a B-tree. Chunked storage makes it possible to resize datasets, and because the data is stored in fixed-size chunks, to use compression filters. To enable chunked storage, set the keyword ``chunks`` to a tuple indicating the chunk shape:: >>> dset = f.create_dataset("chunked", (1000, 1000), chunks=(100, 100)) Data will be read and written in blocks with shape (100,100); for example, the data in ``dset[0:100,0:100]`` will be stored together in the file, as will the data points in range ``dset[400:500, 100:200]``. Chunking has performance implications. It's recommended to keep the total size of your chunks between 10 KiB and 1 MiB, larger for larger datasets. Also keep in mind that when any element in a chunk is accessed, the entire chunk is read from disk. Since picking a chunk shape can be confusing, you can have h5py guess a chunk shape for you:: >>> dset = f.create_dataset("autochunk", (1000, 1000), chunks=True) Auto-chunking is also enabled when using compression or ``maxshape``, etc., if a chunk shape is not manually specified. The iter_chunks method returns an iterator that can be used to perform chunk by chunk reads or writes:: >>> for s in dset.iter_chunks(): >>> arr = dset[s] # get numpy array for chunk .. _dataset_resize: Resizable datasets ------------------ In HDF5, datasets can be resized once created up to a maximum size, by calling :meth:`Dataset.resize`. You specify this maximum size when creating the dataset, via the keyword ``maxshape``:: >>> dset = f.create_dataset("resizable", (10,10), maxshape=(500, 20)) Any (or all) axes may also be marked as "unlimited", in which case they may be increased up to the HDF5 per-axis limit of 2**64 elements. Indicate these axes using ``None``:: >>> dset = f.create_dataset("unlimited", (10, 10), maxshape=(None, 10)) For a 1D dataset, ``maxshape`` can be an integer instead of a tuple. But to make an unlimited 1D dataset, ``maxshape`` must be a tuple ``(None,)``. Passing ``None`` gives the default behaviour, where the initial size is also the maximum. .. note:: Resizing an array with existing data works differently than in NumPy; if any axis shrinks, the data in the missing region is discarded. Data does not "rearrange" itself as it does when resizing a NumPy array. .. _dataset_compression: Filter pipeline --------------- Chunked data may be transformed by the HDF5 `filter pipeline`. The most common use is applying transparent compression. Data is compressed on the way to disk, and automatically decompressed when read. Once the dataset is created with a particular compression filter applied, data may be read and written as normal with no special steps required. 
Enable compression with the ``compression`` keyword to :meth:`Group.create_dataset`:: >>> dset = f.create_dataset("zipped", (100, 100), compression="gzip") Options for each filter may be specified with ``compression_opts``:: >>> dset = f.create_dataset("zipped_max", (100, 100), compression="gzip", compression_opts=9) Lossless compression filters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ GZIP filter (``"gzip"``) Available with every installation of HDF5, so it's best where portability is required. Good compression, moderate speed. ``compression_opts`` sets the compression level and may be an integer from 0 to 9, default is 4. LZF filter (``"lzf"``) Available with every installation of h5py (C source code also available). Low to moderate compression, very fast. No options. SZIP filter (``"szip"``) Patent-encumbered filter used in the NASA community. Not available with all installations of HDF5 due to legal reasons. Consult the HDF5 docs for filter options. Custom compression filters ~~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to the compression filters listed above, compression filters can be dynamically loaded by the underlying HDF5 library. This is done by passing a filter number to :meth:`Group.create_dataset` as the ``compression`` parameter. The ``compression_opts`` parameter will then be passed to this filter. .. seealso:: `hdf5plugin `_ A Python package of several popular filters, including Blosc, LZ4 and ZFP, for convenient use with h5py `HDF5 Filter Plugins `_ A collection of filters as a single download from The HDF Group `Registered filter plugins `_ The index of publicly announced filter plugins .. note:: The underlying implementation of the compression filter will have the ``H5Z_FLAG_OPTIONAL`` flag set. This indicates that if the compression filter doesn't compress a block while writing, no error will be thrown. The filter will then be skipped when subsequently reading the block. .. _dataset_scaleoffset: Scale-Offset filter ~~~~~~~~~~~~~~~~~~~ Filters enabled with the ``compression`` keywords are *lossless*; what comes out of the dataset is exactly what you put in. HDF5 also includes a lossy filter which trades precision for storage space. Works with integer and floating-point data only. Enable the scale-offset filter by setting :meth:`Group.create_dataset` keyword ``scaleoffset`` to an integer. For integer data, this specifies the number of bits to retain. Set to 0 to have HDF5 automatically compute the number of bits required for lossless compression of the chunk. For floating-point data, indicates the number of digits after the decimal point to retain. .. warning:: Currently the scale-offset filter does not preserve special float values (i.e. NaN, inf), see https://forum.hdfgroup.org/t/scale-offset-filter-and-special-float-values-nan-infinity/3379 for more information and follow-up. .. _dataset_shuffle: Shuffle filter ~~~~~~~~~~~~~~ Block-oriented compressors like GZIP or LZF work better when presented with runs of similar values. Enabling the shuffle filter rearranges the bytes in the chunk and may improve compression ratio. No significant speed penalty, lossless. Enable by setting :meth:`Group.create_dataset` keyword ``shuffle`` to True. .. _dataset_fletcher32: Fletcher32 filter ~~~~~~~~~~~~~~~~~ Adds a checksum to each chunk to detect data corruption. Attempts to read corrupted chunks will fail with an error. No significant speed penalty. Obviously shouldn't be used with lossy compression filters. Enable by setting :meth:`Group.create_dataset` keyword ``fletcher32`` to True. .. 
_dataset_multi_block: Multi-Block Selection --------------------- The full H5Sselect_hyperslab API is exposed via the MultiBlockSlice object. This takes four elements to define the selection (start, count, stride and block) in contrast to the built-in slice object, which takes three elements. A MultiBlockSlice can be used in place of a slice to select a number of (count) blocks of multiple elements separated by a stride, rather than a set of single elements separated by a step. For an explanation of how this slicing works, see the `HDF5 documentation `_. For example:: >>> dset[...] array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) >>> dset[MultiBlockSlice(start=1, count=3, stride=4, block=2)] array([ 1, 2, 5, 6, 9, 10]) They can be used in multi-dimensional slices alongside any slicing object, including other MultiBlockSlices. For a more complete example of this, see the multiblockslice_interleave.py example script. .. _dataset_fancy: Fancy indexing -------------- A subset of the NumPy fancy-indexing syntax is supported. Use this with caution, as the underlying HDF5 mechanisms may have different performance than you expect. For any axis, you can provide an explicit list of points you want; for a dataset with shape (10, 10):: >>> dset.shape (10, 10) >>> result = dset[0, [1,3,8]] >>> result.shape (3,) >>> result = dset[1:6, [5,8,9]] >>> result.shape (5, 3) The following restrictions exist: * Selection coordinates must be given in increasing order * Duplicate selections are ignored * Very long lists (> 1000 elements) may produce poor performance NumPy boolean "mask" arrays can also be used to specify a selection. The result of this operation is a 1-D array with elements arranged in the standard NumPy (C-style) order. Behind the scenes, this generates a laundry list of points to select, so be careful when using it with large masks:: >>> arr = numpy.arange(100).reshape((10,10)) >>> dset = f.create_dataset("MyDataset", data=arr) >>> result = dset[arr > 50] >>> result.shape (49,) .. versionchanged:: 2.10 Selecting using an empty list is now allowed. This returns an array with length 0 in the relevant dimension. .. _dataset_empty: Creating and Reading Empty (or Null) datasets and attributes ------------------------------------------------------------ HDF5 has the concept of Empty or Null datasets and attributes. These are not the same as an array with a shape of (), or a scalar dataspace in HDF5 terms. Instead, it is a dataset with an associated type, no data, and no shape. In h5py, we represent this as either a dataset with shape ``None``, or an instance of ``h5py.Empty``. Empty datasets and attributes cannot be sliced. To create an empty attribute, use ``h5py.Empty`` as per :ref:`attributes`:: >>> obj.attrs["EmptyAttr"] = h5py.Empty("f") Similarly, reading an empty attribute returns ``h5py.Empty``:: >>> obj.attrs["EmptyAttr"] h5py.Empty(dtype="f") Empty datasets can be created either by defining a ``dtype`` but no ``shape`` in ``create_dataset``:: >>> grp.create_dataset("EmptyDataset", dtype="f") or by ``data`` to an instance of ``h5py.Empty``:: >>> grp.create_dataset("EmptyDataset", data=h5py.Empty("f")) An empty dataset has shape defined as ``None``, which is the best way of determining whether a dataset is empty or not. An empty dataset can be "read" in a similar way to scalar datasets, i.e. if ``empty_dataset`` is an empty dataset:: >>> empty_dataset[()] h5py.Empty(dtype="f") The dtype of the dataset can be accessed via ``.dtype`` as per normal. 
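Putting this together, a minimal sketch (names arbitrary) for handling a dataset
that may or may not be empty::

    def read_maybe_empty(ds):
        # Return the dataset's value, or None for an empty (null) dataset
        if ds.shape is None:       # empty datasets report shape None
            return None
        return ds[()]              # scalar or full-array read otherwise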
As empty datasets cannot be sliced, some methods of datasets such as ``read_direct`` will raise a ``TypeError`` exception if used on a empty dataset. Reference --------- .. class:: Dataset(identifier) Dataset objects are typically created via :meth:`Group.create_dataset`, or by retrieving existing datasets from a file. Call this constructor to create a new Dataset bound to an existing :class:`DatasetID ` identifier. .. method:: __getitem__(args) NumPy-style slicing to retrieve data. See :ref:`dataset_slicing`. .. method:: __setitem__(args) NumPy-style slicing to write data. See :ref:`dataset_slicing`. .. method:: __bool__() Check that the dataset is accessible. A dataset could be inaccessible for several reasons. For instance, the dataset, or the file it belongs to, may have been closed elsewhere. >>> f = h5py.open(filename) >>> dset = f["MyDS"] >>> f.close() >>> if dset: ... print("dataset accessible") ... else: ... print("dataset inaccessible") dataset inaccessible .. method:: read_direct(array, source_sel=None, dest_sel=None) Read from an HDF5 dataset directly into a NumPy array, which can avoid making an intermediate copy as happens with slicing. The destination array must be C-contiguous and writable, and must have a datatype to which the source data may be cast. Data type conversion will be carried out on the fly by HDF5. `source_sel` and `dest_sel` indicate the range of points in the dataset and destination array respectively. Use the output of ``numpy.s_[args]``:: >>> dset = f.create_dataset("dset", (100,), dtype='int64') >>> arr = np.zeros((100,), dtype='int32') >>> dset.read_direct(arr, np.s_[0:10], np.s_[50:60]) .. method:: write_direct(source, source_sel=None, dest_sel=None) Write data directly to HDF5 from a NumPy array. The source array must be C-contiguous. Selections must be the output of numpy.s_[]. Broadcasting is supported for simple indexing. .. method:: astype(dtype) Return a wrapper allowing you to read data as a particular type. Conversion is handled by HDF5 directly, on the fly:: >>> dset = f.create_dataset("bigint", (1000,), dtype='int64') >>> out = dset.astype('int16')[:] >>> out.dtype dtype('int16') .. versionchanged:: 3.9 :meth:`astype` can no longer be used as a context manager. .. method:: asstr(encoding=None, errors='strict') Only for string datasets. Returns a wrapper to read data as Python string objects:: >>> s = dataset.asstr()[0] encoding and errors work like ``bytes.decode()``, but the default encoding is defined by the datatype - ASCII or UTF-8. This is not guaranteed to be correct. .. versionadded:: 3.0 .. method:: fields(names) Get a wrapper to read a subset of fields from a compound data type:: >>> 2d_coords = dataset.fields(['x', 'y'])[:] If names is a string, a single field is extracted, and the resulting arrays will have that dtype. Otherwise, it should be an iterable, and the read data will have a compound dtype. .. versionadded:: 3.0 .. method:: iter_chunks Iterate over chunks in a chunked dataset. The optional ``sel`` argument is a slice or tuple of slices that defines the region to be used. If not set, the entire dataspace will be used for the iterator. For each chunk within the given region, the iterator yields a tuple of slices that gives the intersection of the given chunk with the selection area. This can be used to :ref:`read or write data in that chunk `. A TypeError will be raised if the dataset is not chunked. A ValueError will be raised if the selection region is invalid. .. versionadded:: 3.0 .. 
method:: resize(size, axis=None) Change the shape of a dataset. `size` may be a tuple giving the new dataset shape, or an integer giving the new length of the specified `axis`. Datasets may be resized only up to :attr:`Dataset.maxshape`. .. method:: len() Return the size of the first axis. .. method:: make_scale(name='') Make this dataset an HDF5 :ref:`dimension scale `. You can then attach it to dimensions of other datasets like this:: other_ds.dims[0].attach_scale(ds) You can optionally pass a name to associate with this scale. .. method:: virtual_sources If this dataset is a :doc:`virtual dataset `, return a list of named tuples: ``(vspace, file_name, dset_name, src_space)``, describing which parts of the dataset map to which source datasets. The two 'space' members are low-level :class:`SpaceID ` objects. .. attribute:: shape NumPy-style shape tuple giving dataset dimensions. .. attribute:: dtype NumPy dtype object giving the dataset's type. .. attribute:: size Integer giving the total number of elements in the dataset. .. attribute:: nbytes Integer giving the total number of bytes required to load the full dataset into RAM (i.e. `dset[()]`). This may not be the amount of disk space occupied by the dataset, as datasets may be compressed when written or only partly filled with data. This value also does not include the array overhead, as it only describes the size of the data itself. Thus the real amount of RAM occupied by this dataset may be slightly greater. .. versionadded:: 3.0 .. attribute:: ndim Integer giving the total number of dimensions in the dataset. .. attribute:: maxshape NumPy-style shape tuple indicating the maximum dimensions up to which the dataset may be resized. Axes with ``None`` are unlimited. .. attribute:: chunks Tuple giving the chunk shape, or None if chunked storage is not used. See :ref:`dataset_chunks`. .. attribute:: compression String with the currently applied compression filter, or None if compression is not enabled for this dataset. See :ref:`dataset_compression`. .. attribute:: compression_opts Options for the compression filter. See :ref:`dataset_compression`. .. attribute:: scaleoffset Setting for the HDF5 scale-offset filter (integer), or None if scale-offset compression is not used for this dataset. See :ref:`dataset_scaleoffset`. .. attribute:: shuffle Whether the shuffle filter is applied (T/F). See :ref:`dataset_shuffle`. .. attribute:: fletcher32 Whether Fletcher32 checksumming is enabled (T/F). See :ref:`dataset_fletcher32`. .. attribute:: fillvalue Value used when reading uninitialized portions of the dataset, or None if no fill value has been defined, in which case HDF5 will use a type-appropriate default value. Can't be changed after the dataset is created. .. attribute:: external If this dataset is stored in one or more external files, this is a list of 3-tuples, like the ``external=`` parameter to :meth:`Group.create_dataset`. Otherwise, it is ``None``. .. attribute:: is_virtual True if this dataset is a :doc:`virtual dataset `, otherwise False. .. attribute:: dims Access to :ref:`dimension_scales`. .. attribute:: is_scale Return ``True`` if the dataset is also a :ref:`dimension scale `, ``False`` otherwise. .. attribute:: attrs :ref:`attributes` for this dataset. .. attribute:: id The dataset's low-level identifier; an instance of :class:`DatasetID `. .. attribute:: ref An HDF5 object reference pointing to this dataset. See :ref:`refs_object`. .. attribute:: regionref Proxy object for creating HDF5 region references. 
See :ref:`refs_region`. .. attribute:: name String giving the full path to this dataset. .. attribute:: file :class:`File` instance in which this dataset resides .. attribute:: parent :class:`Group` instance containing this dataset. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/high/dims.rst0000644000175000017500000000730314701434742016727 0ustar00takluyvertakluyver.. _dimension_scales: Dimension Scales ================ Datasets are multidimensional arrays. HDF5 provides support for labeling the dimensions and associating one or more "dimension scales" with each dimension. A dimension scale is simply another HDF5 dataset. In principle, the length of the multidimensional array along the dimension of interest should be equal to the length of the dimension scale, but HDF5 does not enforce this property. The HDF5 library provides the H5DS API for working with dimension scales. H5py provides low-level bindings to this API in :mod:`h5py.h5ds`. These low-level bindings are in turn used to provide a high-level interface through the ``Dataset.dims`` property. Suppose we have the following data file:: f = File('foo.h5', 'w') f['data'] = np.ones((4, 3, 2), 'f') HDF5 allows the dimensions of ``data`` to be labeled, for example:: f['data'].dims[0].label = 'z' f['data'].dims[2].label = 'x' Note that the first dimension, which has a length of 4, has been labeled "z", the third dimension (in this case the fastest varying dimension), has been labeled "x", and the second dimension was given no label at all. We can also use HDF5 datasets as dimension scales. For example, if we have:: f['x1'] = [1, 2] f['x2'] = [1, 1.1] f['y1'] = [0, 1, 2] f['z1'] = [0, 1, 4, 9] We are going to treat the ``x1``, ``x2``, ``y1``, and ``z1`` datasets as dimension scales:: f['x1'].make_scale() f['x2'].make_scale('x2 name') f['y1'].make_scale('y1 name') f['z1'].make_scale('z1 name') When you create a dimension scale, you may provide a name for that scale. In this case, the ``x1`` scale was not given a name, but the others were. Now we can associate these dimension scales with the primary dataset:: f['data'].dims[0].attach_scale(f['z1']) f['data'].dims[1].attach_scale(f['y1']) f['data'].dims[2].attach_scale(f['x1']) f['data'].dims[2].attach_scale(f['x2']) Note that two dimension scales were associated with the third dimension of ``data``. You can also detach a dimension scale:: f['data'].dims[2].detach_scale(f['x2']) but for now, lets assume that we have both ``x1`` and ``x2`` still associated with the third dimension of ``data``. You can attach a dimension scale to any number of HDF5 datasets, you can even attach it to multiple dimensions of a single HDF5 dataset. Now that the dimensions of ``data`` have been labeled, and the dimension scales for the various axes have been specified, we have provided much more context with which ``data`` can be interpreted. For example, if you want to know the labels for the various dimensions of ``data``:: >>> [dim.label for dim in f['data'].dims] ['z', '', 'x'] If you want the names of the dimension scales associated with the "x" axis:: >>> f['data'].dims[2].keys() ['', 'x2 name'] :meth:`items` and :meth:`values` methods are also provided. 
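For instance, ``items()`` pairs each scale's name with the scale dataset itself,
so both can be inspected in one loop (a small sketch, continuing the example above)::

    for name, scale in f['data'].dims[2].items():
        print(repr(name), scale[...])   # e.g. '' -> f['x1'], 'x2 name' -> f['x2']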
The dimension scales themselves can also be accessed with:: f['data'].dims[2][1] or:: f['data'].dims[2]['x2 name'] such that:: >>> f['data'].dims[2][1] == f['x2'] True though, beware that if you attempt to index the dimension scales with a string, the first dimension scale whose name matches the string is the one that will be returned. There is no guarantee that the name of the dimension scale is unique. Nested dimension scales are not permitted: if a dataset has a dimension scale attached to it, converting the dataset to a dimension scale will fail, since the `HDF5 specification doesn't allow this `_. :: >>> f['data'].make_scale() RuntimeError: Unspecified error in H5DSset_scale (return value <0) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872510.0 h5py-3.13.0/docs/high/file.rst0000644000175000017500000006162414755054376016732 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _file: File Objects ============ File objects serve as your entry point into the world of HDF5. In addition to the File-specific capabilities listed here, every File instance is also an :ref:`HDF5 group ` representing the `root group` of the file. .. _file_open: Opening & creating files ------------------------ HDF5 files work generally like standard Python file objects. They support standard modes like r/w/a, and should be closed when they are no longer in use. However, there is obviously no concept of "text" vs "binary" mode. >>> f = h5py.File('myfile.hdf5','r') The file name may be a byte string or unicode string. Valid modes are: ======== ================================================ r Readonly, file must exist (default) r+ Read/write, file must exist w Create file, truncate if exists w- or x Create file, fail if exists a Read/write if exists, create otherwise ======== ================================================ .. versionchanged:: 3.0 Files are now opened read-only by default. Earlier versions of h5py would pick different modes depending on the presence and permissions of the file. .. _file_driver: File drivers ------------ HDF5 ships with a variety of different low-level drivers, which map the logical HDF5 address space to different storage mechanisms. You can specify which driver you want to use when the file is opened:: >>> f = h5py.File('myfile.hdf5', driver=, ) For example, the HDF5 "core" driver can be used to create a purely in-memory HDF5 file, optionally written out to disk when it is closed. Here's a list of supported drivers and their options: None **Strongly recommended.** Use the standard HDF5 driver appropriate for the current platform. On UNIX, this is the H5FD_SEC2 driver; on Windows, it is H5FD_WINDOWS. 'sec2' Unbuffered, optimized I/O using standard POSIX functions. 'stdio' Buffered I/O using functions from stdio.h. 'core' Store and manipulate the data in memory, and optionally write it back out when the file is closed. Using this with an existing file and a reading mode will read the entire file into memory. Keywords: backing_store: If True (default), save changes to the real file at the specified path on :meth:`~.File.close` or :meth:`~.File.flush`. If False, any changes are discarded when the file is closed. block_size: Increment (in bytes) by which memory is extended. Default is 64k. 'family' Store the file on disk as a series of fixed-length chunks. Useful if the file system doesn't allow large files. Note: the filename you provide *must* contain a printf-style integer format code (e.g. 
%d"), which will be replaced by the file sequence number. Keywords: memb_size: Maximum file size (default is 2**31-1). 'fileobj' Store the data in a Python file-like object; see below. This is the default if a file-like object is passed to :class:`File`. 'split' Splits the meta data and raw data into separate files. Keywords: meta_ext: Metadata filename extension. Default is '-m.h5'. raw_ext: Raw data filename extension. Default is '-r.h5'. 'ros3' Enables read-only access to HDF5 files in the AWS S3 or S3-compatible object stores. HDF5 file name must be one of \http://, \https://, or s3:// resource location. An s3:// location will be translated into an AWS `path-style `_ location by h5py. Keywords: aws_region: AWS region of the S3 bucket with the file, e.g. ``b"us-east-1"``. Default is ``b''``. Required for s3:// locations. secret_id: AWS access key ID. Default is ``b''``. secret_key: AWS secret access key. Default is ``b''``. session_token: AWS temporary session token. Default is ``b''``.' Must be used together with temporary secret_id and secret_key. Available from HDF5 1.14.2. The argument values must be ``bytes`` objects. Arguments aws_region, secret_id, and secret_key are required to activate AWS authentication. .. note:: Pre-built h5py packages on PyPI do not include ros3 driver support. If you want this feature, you could use packages from conda-forge, or :ref:`build h5py from source ` against an HDF5 build with ros3. Alternatively, use the :ref:`file-like object ` support with a package like s3fs. .. _file_in_memory: In-memory 'files' ----------------- HDF5 can make a file in memory, without reading or writing a real file. To do this with h5py, use the :meth:`.File.in_memory` class method:: # Start a new HDF5 file in memory f = h5py.File.in_memory() f['a'] = [1, 2, 3] # Get the file data as bytes, e.g. to send over the network f.flush() hdf_data = f.id.get_file_image() # Turn the bytes back into an h5py File object f2 = h5py.File.in_memory(hdf_data) This uses HDF5's "core" :ref:`file driver `, which is likely to cause fewer odd problems than asking HDF5 to call back into a Python ``BytesIO`` object (:ref:`described below `). .. versionadded:: 3.13 .. _file_fileobj: Python file-like objects ------------------------ .. versionadded:: 2.9 The first argument to :class:`File` may be a Python file-like object, such as an :class:`io.BytesIO` or :class:`tempfile.TemporaryFile` instance. This is a convenient way to create temporary HDF5 files, e.g. for testing or to send over the network. The file-like object must be open for binary I/O, and must have these methods: ``read()`` (or ``readinto()``), ``write()``, ``seek()``, ``tell()``, ``truncate()`` and ``flush()``. >>> tf = tempfile.TemporaryFile() >>> f = h5py.File(tf, 'w') Accessing the :class:`File` instance after the underlying file object has been closed will result in undefined behaviour. When using an in-memory object such as :class:`io.BytesIO`, the data written will take up space in memory. If you want to write large amounts of data, a better option may be to store temporary data on disk using the functions in :mod:`tempfile`. .. literalinclude:: ../../examples/bytesio.py .. warning:: When using a Python file-like object for an HDF5 file, make sure to close the HDF5 file before closing the file object it's wrapping. If there is an error while trying to close the HDF5 file, segfaults may occur. .. warning:: When using a Python file-like object, using service threads to implement the file-like API can lead to process deadlocks. 
``h5py`` serializes access to low-level hdf5 functions via a global lock. This lock is held when the file-like methods are called and is required to delete/deallocate ``h5py`` objects. Thus, if cyclic garbage collection is triggered on a service thread the program will deadlock. The service thread can not continue until it acquires the lock, and the thread holding the lock will not release it until the service thread completes its work. If possible, avoid creating circular references (either via ``weakrefs`` or manually breaking the cycles) that keep ``h5py`` objects alive. If this is not possible, manually triggering a garbage collection from the correct thread or temporarily disabling garbage collection may help. .. note:: Using a Python file-like object for HDF5 is internally more complex, as the HDF5 C code calls back into Python to access it. It inevitably has more ways to go wrong, and the failures may be less clear when it does. For some common use cases, you can easily avoid it: - To create a file in memory and never write it to disk, use the ``'core'`` driver with ``mode='w', backing_store=False`` (see :ref:`file_driver`). - To use a temporary file securely, make a temporary directory and :ref:`open a file path ` inside it. .. _file_version: Version bounding ---------------- HDF5 has been evolving for many years now. By default, the library will write objects in the most compatible fashion possible, so that older versions will still be able to read files generated by modern programs. However, there can be feature or performance advantages if you are willing to forgo a certain level of backwards compatibility. By using the "libver" option to :class:`File`, you can specify the minimum and maximum sophistication of these structures: >>> f = h5py.File('name.hdf5', libver='earliest') # most compatible >>> f = h5py.File('name.hdf5', libver='latest') # most modern Here "latest" means that HDF5 will always use the newest version of these structures without particular concern for backwards compatibility. The "earliest" option means that HDF5 will make a *best effort* to be backwards compatible. The default is "earliest". Specifying version bounds has changed from HDF5 version 1.10.2. There are two new compatibility levels: `v108` (for HDF5 1.8) and `v110` (for HDF5 1.10). This change enables, for example, something like this: >>> f = h5py.File('name.hdf5', libver=('earliest', 'v108')) which enforces full backward compatibility up to HDF5 1.8. Using any HDF5 feature that requires a newer format will raise an error. `latest` is now an alias to another bound label that represents the latest version. Because of this, the `File.libver` property will not use `latest` in its output for HDF5 1.10.2 or later. .. _file_closing: Closing files ------------- If you call :meth:`File.close`, or leave a ``with h5py.File(...)`` block, the file will be closed and any objects (such as groups or datasets) you have from that file will become unusable. This is equivalent to what HDF5 calls 'strong' closing. If a file object goes out of scope in your Python code, the file will only be closed when there are no remaining objects belonging to it. This is what HDF5 calls 'weak' closing. .. code-block:: with h5py.File('f1.h5', 'r') as f1: ds = f1['dataset'] # ERROR - can't access dataset, because f1 is closed: ds[0] def get_dataset(): f2 = h5py.File('f2.h5', 'r') return f2['dataset'] ds = get_dataset() # OK - f2 is out of scope, but the dataset reference keeps it open: ds[0] del ds # Now f2.h5 will be closed .. 
_file_userblock: User block ---------- HDF5 allows the user to insert arbitrary data at the beginning of the file, in a reserved space called the `user block`. The length of the user block must be specified when the file is created. It can be either zero (the default) or a power of two greater than or equal to 512. You can specify the size of the user block when creating a new file, via the ``userblock_size`` keyword to File; the userblock size of an open file can likewise be queried through the ``File.userblock_size`` property. Modifying the user block on an open file is not supported; this is a limitation of the HDF5 library. However, once the file is closed you are free to read and write data at the start of the file, provided your modifications don't leave the user block region. .. _file_filenames: Filenames on different systems ------------------------------ Different operating systems (and different file systems) store filenames with different encodings. Additionally, in Python there are at least two different representations of filenames, as encoded ``bytes`` or as a Unicode string (``str`` on Python 3). h5py's high-level interfaces always return filenames as ``str``, e.g. :attr:`File.filename`. h5py accepts filenames as either ``str`` or ``bytes``. In most cases, using Unicode (``str``) paths is preferred, but there are some caveats. .. note:: HDF5 handles filenames as bytes (C ``char *``), and the h5py :doc:`lowlevel` matches this. macOS (OSX) ........... macOS is the simplest system to deal with, it only accepts UTF-8, so using Unicode paths will just work (and should be preferred). Linux (and non-macOS Unix) .......................... Filenames on Unix-like systems are natively bytes. By convention, the locale encoding is used to convert to and from unicode; on most modern systems this will be UTF-8 by default (especially since Python 3.7, with :pep:`538`). Passing Unicode paths will mostly work, and Unicode paths from system functions like ``os.listdir()`` should always work. But if there are filenames that aren't in the expected encoding (e.g. on a network filesystem or a removable drive, or because something is misconfigured), you may want to handle them as bytes. Windows ....... Windows systems natively handle filenames as Unicode, and with HDF5 1.10.6 and above filenames passed to h5py as bytes will be used as UTF-8 encoded text, regardless of system configuration. HDF5 1.10.5 and below could only use filenames with characters from the active code page, e.g. `Windows-1252 `_ on many systems configured for European languages. This limitation applies whether you use ``str`` or ``bytes`` with h5py. .. _file_cache: Chunk cache ----------- :ref:`dataset_chunks` allows datasets to be stored on disk in separate pieces. When a part of any one of these pieces is needed, the entire chunk is read into memory before the requested part is copied to the user's buffer. To the extent possible those chunks are cached in memory, so that if the user requests a different part of a chunk that has already been read, the data can be copied directly from memory rather than reading the file again. The details of a given dataset's chunks are controlled when creating the dataset, but it is possible to adjust the behavior of the chunk *cache* when opening the file. The parameters controlling this behavior are prefixed by ``rdcc``, for *raw data chunk cache*. They apply to all datasets unless specifically changed for each one. 
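For example, here is a minimal sketch (the file name is hypothetical) of opening a file with a larger chunk cache, using the parameters described in the list below::

    >>> f = h5py.File('cached.hdf5', 'r',
    ...               rdcc_nbytes=10*1024**2,   # 10 MiB of chunk cache per dataset
    ...               rdcc_nslots=50021,        # a prime number of hash-table slots
    ...               rdcc_w0=0.9)              # prefer evicting fully read/written chunks
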
* ``rdcc_nbytes`` sets the total size (measured in bytes) of the raw data chunk cache for each dataset. The default size is 1 MiB. This should be set to the size of each chunk times the number of chunks that are likely to be needed in cache. * ``rdcc_w0`` sets the policy for chunks to be removed from the cache when more space is needed. If the value is set to 0, then the library will always evict the least recently used chunk in cache. If the value is set to 1, the library will always evict the least recently used chunk which has been fully read or written, and if none have been fully read or written, it will evict the least recently used chunk. If the value is between 0 and 1, the behavior will be a blend of the two. Therefore, if the application will access the same data more than once, the value should be set closer to 0, and if the application does not, the value should be set closer to 1. * ``rdcc_nslots`` is the number of chunk slots in the cache for each dataset. In order to allow the chunks to be looked up quickly in cache, each chunk is assigned a unique hash value that is used to look up the chunk. The cache contains a simple array of pointers to chunks, which is called a hash table. A chunk's hash value is simply the index into the hash table of the pointer to that chunk. While the pointer at this location might instead point to a different chunk or to nothing at all, no other locations in the hash table can contain a pointer to the chunk in question. Therefore, the library only has to check this one location in the hash table to tell if a chunk is in cache or not. This also means that if two or more chunks share the same hash value, then only one of those chunks can be in the cache at the same time. When a chunk is brought into cache and another chunk with the same hash value is already in cache, the second chunk must be evicted first. Therefore it is very important to make sure that the size of the hash table (which is determined by the ``rdcc_nslots`` parameter) is large enough to minimize the number of hash value collisions. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, this value should be at least 10 times the number of chunks that can fit in ``rdcc_nbytes`` bytes. For maximum performance, this value should be set approximately 100 times that number of chunks. The default value is 521. Chunks and caching are described in greater detail in the `HDF5 documentation `_. .. _file_alignment: Data alignment -------------- When creating datasets within files, it may be advantageous to align the offset within the file itself. This can help optimize read and write times if the data become aligned with the underlying hardware, or may help with parallelism with MPI. Unfortunately, aligning small variables to large blocks can leave a lot of empty space in a file. To this effect, application developers are left with two options to tune the alignment of data within their file. The two variables ``alignment_threshold`` and ``alignment_interval`` in the :class:`File` constructor help control the threshold in bytes where the data alignment policy takes effect and the alignment in bytes within the file. The alignment is measured from the end of the user block. For more information, see the official HDF5 documentation `H5P_SET_ALIGNMENT `_. .. _file_meta_block_size: Meta block size --------------- Space for metadata is allocated in blocks within the HDF5 file. 
The argument ``meta_block_size`` of the :class:`File` constructor sets the minimum size of these blocks. Setting a large value can consolidate metadata into a small number of regions. Setting a small value can reduce the overall file size, especially in combination with the ``libver`` option. This controls how the overall data and metadata are laid out within the file. For more information, see the official HDF5 documentation `H5P_SET_META_BLOCK_SIZE `_. Reference --------- .. note:: Unlike Python file objects, the attribute :attr:`File.name` gives the HDF5 name of the root group, "``/``". To access the on-disk name, use :attr:`File.filename`. .. class:: File(name, mode='r', driver=None, libver=None, userblock_size=None, \ swmr=False, rdcc_nslots=None, rdcc_nbytes=None, rdcc_w0=None, \ track_order=None, fs_strategy=None, fs_persist=False, fs_threshold=1, \ fs_page_size=None, page_buf_size=None, min_meta_keep=0, min_raw_keep=0, \ locking=None, alignment_threshold=1, alignment_interval=1, **kwds) Open or create a new file. Note that in addition to the :class:`File`-specific methods and properties listed below, :class:`File` objects inherit the full interface of :class:`Group`. :param name: Name of file (`bytes` or `str`), or an instance of :class:`h5f.FileID` to bind to an existing file identifier, or a file-like object (see :ref:`file_fileobj`). :param mode: Mode in which to open file; one of ("w", "r", "r+", "a", "w-"). See :ref:`file_open`. :param driver: File driver to use; see :ref:`file_driver`. :param libver: Compatibility bounds; see :ref:`file_version`. :param userblock_size: Size (in bytes) of the user block. If nonzero, must be a power of 2 and at least 512. See :ref:`file_userblock`. :param swmr: If ``True`` open the file in single-writer-multiple-reader mode. Only used when mode="r". :param rdcc_nbytes: Total size of the raw data chunk cache in bytes. The default size is :math:`1024^2` (1 MiB) per dataset. :param rdcc_w0: Chunk preemption policy for all datasets. Default value is 0.75. :param rdcc_nslots: Number of chunk slots in the raw data chunk cache for this file. Default value is 521. :param track_order: Track dataset/group/attribute creation order under root group if ``True``. Default is ``h5.get_config().track_order``. :param fs_strategy: The file space handling strategy to be used. Only allowed when creating a new file. One of "fsm", "page", "aggregate", "none", or ``None`` (to use the HDF5 default). :param fs_persist: A boolean to indicate whether free space should be persistent or not. Only allowed when creating a new file. The default is False. :param fs_page_size: File space page size in bytes. Only use when fs_strategy="page". If ``None`` use the HDF5 default (4096 bytes). :param fs_threshold: The smallest free-space section size that the free space manager will track. Only allowed when creating a new file. The default is 1. :param page_buf_size: Page buffer size in bytes. Only allowed for HDF5 files created with fs_strategy="page". Must be a power of two value and greater or equal than the file space page size when creating the file. It is not used by default. :param min_meta_keep: Minimum percentage of metadata to keep in the page buffer before allowing pages containing metadata to be evicted. Applicable only if ``page_buf_size`` is set. Default value is zero. :param min_raw_keep: Minimum percentage of raw data to keep in the page buffer before allowing pages containing raw data to be evicted. Applicable only if ``page_buf_size`` is set. Default value is zero. 
:param locking: The file locking behavior. One of: - False (or "false") -- Disable file locking - True (or "true") -- Enable file locking - "best-effort" -- Enable file locking but ignore some errors - None -- Use HDF5 defaults .. warning:: The HDF5_USE_FILE_LOCKING environment variable can override this parameter. Only available with HDF5 >= 1.12.1 or 1.10.x >= 1.10.7. :param alignment_threshold: Together with ``alignment_interval``, this property ensures that any file object greater than or equal in size to the alignment threshold (in bytes) will be aligned on an address which is a multiple of alignment interval. :param alignment_interval: This property should be used in conjunction with ``alignment_threshold``. See the description above. For more details, see :ref:`file_alignment`. :param meta_block_size: Determines the current minimum size, in bytes, of new metadata block allocations. See :ref:`file_meta_block_size`. :param kwds: Driver-specific keywords; see :ref:`file_driver`. .. classmethod:: in_memory(file_image=None, block_size=64*1024, **kwargs) :param file_image: The initial file contents as bytes (or anything that supports the Python buffer interface). HDF5 takes a copy of this data. :param block_size: Chunk size for new memory alloactions (default 64 KiB). Other keyword arguments are like :class:`File`, although name, mode, driver and locking can't be passed. .. method:: __bool__() Check that the file descriptor is valid and the file open: >>> f = h5py.File(filename) >>> f.close() >>> if f: ... print("file is open") ... else: ... print("file is closed") file is closed .. method:: close() Close this file. All open objects will become invalid. .. method:: flush() Request that the HDF5 library flush its buffers to disk. .. attribute:: id Low-level identifier (an instance of :class:`FileID `). .. attribute:: filename Name of this file on disk, as a Unicode string. .. attribute:: mode String indicating if the file is open readonly ("r") or read-write ("r+"). Will always be one of these two values, regardless of the mode used to open the file. .. attribute:: swmr_mode True if the file access is using :doc:`/swmr`. Use :attr:`mode` to distinguish SWMR read from write. .. attribute:: driver String giving the driver used to open the file. Refer to :ref:`file_driver` for a list of drivers. .. attribute:: libver 2-tuple with library version settings. See :ref:`file_version`. .. attribute:: userblock_size Size of user block (in bytes). Generally 0. See :ref:`file_userblock`. .. attribute:: meta_block_size Minimum size, in bytes, of metadata block allocations. Default: 2048. See :ref:`file_meta_block_size`. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0 h5py-3.13.0/docs/high/group.rst0000644000175000017500000005210214675110407017123 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _group: Groups ====== Groups are the container mechanism by which HDF5 files are organized. From a Python perspective, they operate somewhat like dictionaries. In this case the "keys" are the names of group members, and the "values" are the members themselves (:class:`Group` and :class:`Dataset`) objects. Group objects also contain most of the machinery which makes HDF5 useful. The :ref:`File object ` does double duty as the HDF5 *root group*, and serves as your entry point into the file: >>> f = h5py.File('foo.hdf5','w') >>> f.name '/' >>> list(f.keys()) [] Names of all objects in the file are all text strings (``str``). 
These will be encoded with the HDF5-approved UTF-8 encoding before being passed to the HDF5 C library. Objects may also be retrieved using byte strings, which will be passed on to HDF5 as-is. .. _group_create: Creating groups --------------- New groups are easy to create:: >>> grp = f.create_group("bar") >>> grp.name '/bar' >>> subgrp = grp.create_group("baz") >>> subgrp.name '/bar/baz' Multiple intermediate groups can also be created implicitly:: >>> grp2 = f.create_group("/some/long/path") >>> grp2.name '/some/long/path' >>> grp3 = f['/some/long'] >>> grp3.name '/some/long' .. _group_links: Dict interface and links ------------------------ Groups implement a subset of the Python dictionary convention. They have methods like ``keys()``, ``values()`` and support iteration. Most importantly, they support the indexing syntax, and standard exceptions: >>> myds = subgrp["MyDS"] >>> missing = subgrp["missing"] KeyError: "Name doesn't exist (Symbol table: Object not found)" Objects can be deleted from the file using the standard syntax:: >>> del subgroup["MyDataset"] .. note:: When using h5py from Python 3, the keys(), values() and items() methods will return view-like objects instead of lists. These objects support membership testing and iteration, but can't be sliced like lists. By default, objects inside group are iterated in alphanumeric order. However, if group is created with ``track_order=True``, the insertion order for the group is remembered (tracked) in HDF5 file, and group contents are iterated in that order. The latter is consistent with Python 3.7+ dictionaries. The default ``track_order`` for all new groups can be specified globally with ``h5.get_config().track_order``. .. _group_hardlinks: Hard links ~~~~~~~~~~ What happens when assigning an object to a name in the group? It depends on the type of object being assigned. For NumPy arrays or other data, the default is to create an :ref:`HDF5 datasets `:: >>> grp["name"] = 42 >>> out = grp["name"] >>> out When the object being stored is an existing Group or Dataset, a new link is made to the object:: >>> grp["other name"] = out >>> grp["other name"] Note that this is `not` a copy of the dataset! Like hard links in a UNIX file system, objects in an HDF5 file can be stored in multiple groups:: >>> grp["other name"] == grp["name"] True .. _group_softlinks: Soft links ~~~~~~~~~~ Also like a UNIX filesystem, HDF5 groups can contain "soft" or symbolic links, which contain a text path instead of a pointer to the object itself. You can easily create these in h5py by using ``h5py.SoftLink``:: >>> myfile = h5py.File('foo.hdf5','w') >>> group = myfile.create_group("somegroup") >>> myfile["alias"] = h5py.SoftLink('/somegroup') If the target is removed, they will "dangle": >>> del myfile['somegroup'] >>> print(myfile['alias']) KeyError: 'Component not found (Symbol table: Object not found)' .. _group_extlinks: External links ~~~~~~~~~~~~~~ External links are "soft links plus", which allow you to specify the name of the file as well as the path to the desired object. You can refer to objects in any file you wish. Use similar syntax as for soft links: >>> myfile = h5py.File('foo.hdf5','w') >>> myfile['ext link'] = h5py.ExternalLink("otherfile.hdf5", "/path/to/resource") When the link is accessed, the file "otherfile.hdf5" is opened, and object at "/path/to/resource" is returned. Since the object retrieved is in a different file, its ".file" and ".parent" properties will refer to objects in that file, *not* the file in which the link resides. .. 
note:: Currently, you can't access an external link if the file it points to is already open. This is related to how HDF5 manages file permissions internally. .. note:: The filename is stored in the file as bytes, normally UTF-8 encoded. In most cases, this should work reliably, but problems are possible if a file created on one platform is accessed on another. Older versions of HDF5 may have problems on Windows in particular. See :ref:`file_filenames` for more details. Reference --------- .. class:: Group(identifier) Generally Group objects are created by opening objects in the file, or by the method :meth:`Group.create_group`. Call the constructor with a :class:`GroupID ` instance to create a new Group bound to an existing low-level identifier. .. method:: __iter__() Iterate over the names of objects directly attached to the group. Use :meth:`Group.visit` or :meth:`Group.visititems` for recursive access to group members. .. method:: __contains__(name) Dict-like membership testing. `name` may be a relative or absolute path. .. method:: __getitem__(name) Retrieve an object. `name` may be a relative or absolute path, or an :ref:`object or region reference `. See :ref:`group_links`. .. method:: __setitem__(name, value) Create a new link, or automatically create a dataset. See :ref:`group_links`. .. method:: __bool__() Check that the group is accessible. A group could be inaccessible for several reasons. For instance, the group, or the file it belongs to, may have been closed elsewhere. >>> f = h5py.open(filename) >>> group = f["MyGroup"] >>> f.close() >>> if group: ... print("group is accessible") ... else: ... print("group is inaccessible") group is inaccessible .. method:: keys() Get the names of directly attached group members. Use :meth:`Group.visit` or :meth:`Group.visititems` for recursive access to group members. :return: set-like object. .. method:: values() Get the objects contained in the group (Group and Dataset instances). Broken soft or external links show up as None. :return: a collection or bag-like object. .. method:: items() Get ``(name, value)`` pairs for object directly attached to this group. Values for broken soft or external links show up as None. :return: a set-like object. .. method:: get(name, default=None, getclass=False, getlink=False) Retrieve an item, or information about an item. `name` and `default` work like the standard Python ``dict.get``. :param name: Name of the object to retrieve. May be a relative or absolute path. :param default: If the object isn't found, return this instead. :param getclass: If True, return the class of object instead; :class:`Group` or :class:`Dataset`. :param getlink: If true, return the type of link via a :class:`HardLink`, :class:`SoftLink` or :class:`ExternalLink` instance. If ``getclass`` is also True, returns the corresponding Link class without instantiating it. .. method:: visit(callable) Recursively visit all objects in this group and subgroups. You supply a callable with the signature:: callable(name) -> None or return value `name` will be the name of the object relative to the current group. Return None to continue visiting until all objects are exhausted. Returning anything else will immediately stop visiting and return that value from ``visit``:: >>> def find_foo(name): ... """ Find first object with 'foo' anywhere in the name """ ... if 'foo' in name: ... return name >>> group.visit(find_foo) 'some/subgroup/foo' .. method:: visititems(callable) Recursively visit all objects in this group and subgroups. 
Like :meth:`Group.visit`, except your callable should have the signature:: callable(name, object) -> None or return value In this case `object` will be a :class:`Group` or :class:`Dataset` instance. .. method:: visit_links(callable) visititems_links(callable) These methods are like :meth:`visit` and :meth:`visititems`, but work on the links in groups, rather than the objects those links point to. So if you have two links pointing to the same object, these will 'see' both. They also see soft & external links, which :meth:`visit` and :meth:`visititems` ignore. The second argument to the callback for ``visititems_links`` is an instance of one of the :ref:`link classes `. .. versionadded:: 3.11 .. method:: move(source, dest) Move an object or link in the file. If `source` is a hard link, this effectively renames the object. If a soft or external link, the link itself is moved. :param source: Name of object or link to move. :type source: String :param dest: New location for object or link. :type dest: String .. method:: copy(source, dest, name=None, shallow=False, expand_soft=False, expand_external=False, expand_refs=False, without_attrs=False) Copy an object or group. The source can be a path, Group, Dataset, or Datatype object. The destination can be either a path or a Group object. The source and destination need not be in the same file. If the source is a Group object, by default all objects within that group will be copied recursively. When the destination is a Group object, by default the target will be created in that group with its current name (basename of obj.name). You can override that by setting "name" to a string. :param source: What to copy. May be a path in the file or a Group/Dataset object. :param dest: Where to copy it. May be a path or Group object. :param name: If the destination is a Group object, use this for the name of the copied object (default is basename). :param shallow: Only copy immediate members of a group. :param expand_soft: Expand soft links into new objects. :param expand_external: Expand external links into new objects. :param expand_refs: Copy objects which are pointed to by references. :param without_attrs: Copy object(s) without copying HDF5 attributes. .. method:: create_group(name, track_order=None) Create and return a new group in the file. :param name: Name of group to create. May be an absolute or relative path. Provide None to create an anonymous group, to be linked into the file later. :type name: String or None :param track_order: Track dataset/group/attribute creation order under this group if ``True``. Default is ``h5.get_config().track_order``. :return: The new :class:`Group` object. .. method:: require_group(name) Open a group in the file, creating it if it doesn't exist. TypeError is raised if a conflicting object already exists. Parameters as in :meth:`Group.create_group`. .. method:: create_dataset(name, shape=None, dtype=None, data=None, **kwds) Create a new dataset. Options are explained in :ref:`dataset_create`. :param name: Name of dataset to create. May be an absolute or relative path. Provide None to create an anonymous dataset, to be linked into the file later. :param shape: Shape of new dataset (Tuple). :param dtype: Data type for new dataset :param data: Initialize dataset to this (NumPy array). :keyword chunks: Chunk shape, or True to enable auto-chunking. :keyword maxshape: Dataset will be resizable up to this shape (Tuple). Automatically enables chunking. Use None for the axes you want to be unlimited. 
:keyword compression: Compression strategy. See :ref:`dataset_compression`. :keyword compression_opts: Parameters for compression filter. :keyword scaleoffset: See :ref:`dataset_scaleoffset`. :keyword shuffle: Enable shuffle filter (T/**F**). See :ref:`dataset_shuffle`. :keyword fletcher32: Enable Fletcher32 checksum (T/**F**). See :ref:`dataset_fletcher32`. :keyword fillvalue: This value will be used when reading uninitialized parts of the dataset. :keyword fill_time: Control when to write the fill value. One of the following choices: `alloc`, write fill value before writing application data values or when the dataset is created; `never`, never write fill value; `ifset`, write fill value if it is defined. Default to `ifset`, which is the default of HDF5 library. If the whole dataset is going to be written by the application, setting this to `never` can avoid unnecessary writing of fill value and potentially improve performance. :keyword track_times: Enable dataset creation timestamps (**T**/F). :keyword track_order: Track attribute creation order if ``True``. Default is ``h5.get_config().track_order``. :keyword external: Store the dataset in one or more external, non-HDF5 files. This should be an iterable (such as a list) of tuples of ``(name, offset, size)`` to store data from ``offset`` to ``offset + size`` in the named file. Each name must be a str, bytes, or os.PathLike; each offset and size, an integer. The last file in the sequence may have size ``h5py.h5f.UNLIMITED`` to let it grow as needed. If only a name is given instead of an iterable of tuples, it is equivalent to ``[(name, 0, h5py.h5f.UNLIMITED)]``. :keyword allow_unknown_filter: Do not check that the requested filter is available for use (T/F). This should only be set if you will write any data with ``write_direct_chunk``, compressing the data before passing it to h5py. :keyword rdcc_nbytes: Total size of the dataset's chunk cache in bytes. The default size is 1024**2 (1 MiB). :keyword rdcc_w0: The chunk preemption policy for this dataset. This must be between 0 and 1 inclusive and indicates the weighting according to which chunks which have been fully read or written are penalized when determining which chunks to flush from cache. A value of 0 means fully read or written chunks are treated no differently than other chunks (the preemption is strictly LRU) while a value of 1 means fully read or written chunks are always preempted before other chunks. If your application only reads or writes data once, this can be safely set to 1. Otherwise, this should be set lower depending on how often you re-read or re-write the same data. The default value is 0.75. :keyword rdcc_nslots: The number of chunk slots in the dataset's chunk cache. Increasing this value reduces the number of cache collisions, but slightly increases the memory used. Due to the hashing strategy, this value should ideally be a prime number. As a rule of thumb, this value should be at least 10 times the number of chunks that can fit in rdcc_nbytes bytes. For maximum performance, this value should be set approximately 100 times that number of chunks. The default value is 521. .. method:: require_dataset(name, shape, dtype, exact=False, **kwds) Open a dataset, creating it if it doesn't exist. If keyword "exact" is False (default), an existing dataset must have the same shape and a conversion-compatible dtype to be returned. If True, the shape and dtype must match exactly. If keyword "maxshape" is given, the maxshape and dtype must match instead. 
If any of the keywords "rdcc_nslots", "rdcc_nbytes", or "rdcc_w0" are given, they will be used to configure the dataset's chunk cache. Other dataset keywords (see create_dataset) may be provided, but are only used if a new dataset is to be created. Raises TypeError if an incompatible object already exists, or if the shape, maxshape or dtype don't match according to the above rules. :keyword exact: Require shape and type to match exactly (T/**F**) .. method:: create_dataset_like(name, other, **kwds) Create a dataset similar to `other`, much like numpy's `_like` functions. :param name: Name of the dataset (absolute or relative). Provide None to make an anonymous dataset. :param other: The dataset whom the new dataset should mimic. All properties, such as shape, dtype, chunking, ... will be taken from it, but no data or attributes are being copied. Any dataset keywords (see create_dataset) may be provided, including shape and dtype, in which case the provided values take precedence over those from `other`. .. method:: create_virtual_dataset(name, layout, fillvalue=None) Create a new virtual dataset in this group. See :doc:`/vds` for more details. :param str name: Name of the dataset (absolute or relative). :param VirtualLayout layout: Defines what source data fills which parts of the virtual dataset. :param fillvalue: The value to use where there is no data. .. method:: build_virtual_dataset() Assemble a virtual dataset in this group. This is used as a context manager:: with f.build_virtual_dataset('virt', (10, 1000), np.uint32) as layout: layout[0] = h5py.VirtualSource('foo.h5', 'data', (1000,)) Inside the context, you populate a :class:`VirtualLayout` object. The file is only modified when you leave the context, and if there's no error. :param str name: Name of the dataset (absolute or relative) :param tuple shape: Shape of the dataset :param dtype: A numpy dtype for data read from the virtual dataset :param tuple maxshape: Maximum dimensions if the dataset can grow (optional). Use None for unlimited dimensions. :param fillvalue: The value used where no data is available. .. attribute:: attrs :ref:`attributes` for this group. .. attribute:: id The groups's low-level identifier; an instance of :class:`GroupID `. .. attribute:: ref An HDF5 object reference pointing to this group. See :ref:`refs_object`. .. attribute:: regionref A proxy object allowing you to interrogate region references. See :ref:`refs_region`. .. attribute:: name String giving the full path to this group. .. attribute:: file :class:`File` instance in which this group resides. .. attribute:: parent :class:`Group` instance containing this group. .. _group_link_classes: Link classes ------------ .. class:: HardLink() Exists only to support :meth:`Group.get`. Has no state and provides no properties or methods. .. class:: SoftLink(path) Exists to allow creation of soft links in the file. See :ref:`group_softlinks`. These only serve as containers for a path; they are not related in any way to a particular file. :param path: Value of the soft link. :type path: String .. attribute:: path Value of the soft link .. class:: ExternalLink(filename, path) Like :class:`SoftLink`, only they specify a filename in addition to a path. See :ref:`group_extlinks`. :param filename: Name of the file to which the link points :type filename: String :param path: Path to the object in the external file. :type path: String .. attribute:: filename Name of the external file as a Unicode string .. 
attribute:: path Path to the object in the external file ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/high/lowlevel.rst0000644000175000017500000000253214701434742017623 0ustar00takluyvertakluyverLow-Level API ============= This documentation mostly describes the h5py high-level API, which offers the main features of HDF5 in an interface modelled on dictionaries and NumPy arrays. h5py also provides a low-level API, which more closely follows the HDF5 C API. .. seealso:: - `h5py Low-Level API Reference `_ - `HDF5 C/Fortran Reference Manual `_ You can easily switch between the two levels in your code: - **To the low-level**: High-level :class:`.File`, :class:`.Group` and :class:`.Dataset` objects all have a ``.id`` attribute exposing the corresponding low-level objects—:class:`~low:h5py.h5f.FileID`, :class:`~low:h5py.h5g.GroupID` and :class:`~low:h5py.h5d.DatasetID`:: dsid = dset.id dsid.get_offset() # Low-level method Although there is no high-level object for a single attribute, :meth:`.AttributeManager.get_id` will get the low-level :class:`~low:h5py.h5a.AttrID` object:: aid = dset.attrs.get_id('timestamp') aid.get_storage_size() # Low-level method - **To the high-level**: Low-level :class:`~low:h5py.h5f.FileID`, :class:`~low:h5py.h5g.GroupID` and :class:`~low:h5py.h5d.DatasetID` objects can be passed to the constructors of :class:`.File`, :class:`.Group` and :class:`.Dataset`, respectively. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/index.rst0000644000175000017500000000256514701434742016170 0ustar00takluyvertakluyverHDF5 for Python =============== The h5py package is a Pythonic interface to the HDF5 binary data format. `HDF5 `_ lets you store huge amounts of numerical data, and easily manipulate that data from NumPy. For example, you can slice into multi-terabyte datasets stored on disk, as if they were real NumPy arrays. Thousands of datasets can be stored in a single file, categorized and tagged however you want. Where to start -------------- * :ref:`Quick-start guide ` * :ref:`Installation ` Other resources --------------- * `Python and HDF5 O'Reilly book `_ * `Ask questions on the HDF forum `_ * `GitHub project `_ Introductory info ----------------- .. toctree:: :maxdepth: 1 quick build High-level API reference ------------------------ .. toctree:: :maxdepth: 1 high/file high/group high/dataset high/attr high/dims high/lowlevel Advanced topics --------------- .. toctree:: :maxdepth: 1 config special strings refs mpi swmr vds related_projects Meta-info about the h5py project -------------------------------- .. toctree:: :maxdepth: 1 whatsnew/index contributing release_guide faq licenses ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/licenses.rst0000644000175000017500000002667414045746670016704 0ustar00takluyvertakluyverLicenses and legal info ======================= Copyright Notice and Statement for the h5py Project --------------------------------------------------- :: Copyright (c) 2008 Andrew Collette and contributors All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. HDF5 Copyright Statement ------------------------ :: HDF5 (Hierarchical Data Format 5) Software Library and Utilities Copyright 2006-2007 by The HDF Group (THG). NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities Copyright 1998-2006 by the Board of Trustees of the University of Illinois. All rights reserved. Contributors: National Center for Supercomputing Applications (NCSA) at the University of Illinois, Fortner Software, Unidata Program Center (netCDF), The Independent JPEG Group (JPEG), Jean-loup Gailly and Mark Adler (gzip), and Digital Equipment Corporation (DEC). Redistribution and use in source and binary forms, with or without modification, are permitted for any purpose (including commercial purposes) provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or materials provided with the distribution. 3. In addition, redistributions of modified forms of the source or binary code must carry prominent notices stating that the original code was changed and the date of the change. 4. All publications or advertising materials mentioning features or use of this software are asked, but not required, to acknowledge that it was developed by The HDF Group and by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and credit the contributors. 5. Neither the name of The HDF Group, the name of the University, nor the name of any Contributor may be used to endorse or promote products derived from this software without specific prior written permission from THG, the University, or the Contributor, respectively. DISCLAIMER: THIS SOFTWARE IS PROVIDED BY THE HDF GROUP (THG) AND THE CONTRIBUTORS "AS IS" WITH NO WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED. In no event shall THG or the Contributors be liable for any damages suffered by the users arising out of the use of this software, even if advised of the possibility of such damage. Portions of HDF5 were developed with support from the University of California, Lawrence Livermore National Laboratory (UC LLNL). 
The following statement applies to those portions of the product and must be retained in any redistribution of source code, binaries, documentation, and/or accompanying materials: This work was partially produced at the University of California, Lawrence Livermore National Laboratory (UC LLNL) under contract no. W-7405-ENG-48 (Contract 48) between the U.S. Department of Energy (DOE) and The Regents of the University of California (University) for the operation of UC LLNL. DISCLAIMER: This work was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately- owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes. PyTables Copyright Statement ---------------------------- :: Copyright Notice and Statement for PyTables Software Library and Utilities: Copyright (c) 2002, 2003, 2004 Francesc Altet Copyright (c) 2005, 2006, 2007 Carabos Coop. V. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: a. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. b. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. c. Neither the name of the Carabos Coop. V. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. stdint.h (Windows version) License ---------------------------------- :: Copyright (c) 2006-2008 Alexander Chemeris Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Python license -------------- #. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using Python Python 2.7.5 software in source or binary form and its associated documentation. #. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python Python 2.7.5 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright 2001-2013 Python Software Foundation; All Rights Reserved" are retained in Python Python 2.7.5 alone or in any derivative version prepared by Licensee. #. In the event Licensee prepares a derivative work that is based on or incorporates Python Python 2.7.5 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python Python 2.7.5. #. PSF is making Python Python 2.7.5 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON Python 2.7.5 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. #. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON Python 2.7.5 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON Python 2.7.5, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. #. This License Agreement will automatically terminate upon a material breach of its terms and conditions. #. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. #. By copying, installing or otherwise using Python Python 2.7.5, Licensee agrees to be bound by the terms and conditions of this License Agreement. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0 h5py-3.13.0/docs/mpi.rst0000644000175000017500000001177114675110407015644 0ustar00takluyvertakluyver.. _parallel: Parallel HDF5 ============= Parallel read access to HDF5 files is possible from separate processes (but not threads) with no special features. It's advised to open the file independently in each reader process; opening the file once and then forking may cause issues. **Parallel HDF5** is a feature built on MPI which also supports *writing* an HDF5 file in parallel. To use this, both HDF5 and h5py must be compiled with MPI support turned on, as described below. How does Parallel HDF5 work? ---------------------------- Parallel HDF5 is a configuration of the HDF5 library which lets you share open files across multiple parallel processes. It uses the MPI (Message Passing Interface) standard for interprocess communication. Consequently, when using Parallel HDF5 from Python, your application will also have to use the MPI library. This is accomplished through the `mpi4py `_ Python package, which provides excellent, complete Python bindings for MPI. Here's an example "Hello World" using ``mpi4py``:: >>> from mpi4py import MPI >>> print("Hello World (from process %d)" % MPI.COMM_WORLD.Get_rank()) To run an MPI-based parallel program, use the ``mpiexec`` program to launch several parallel instances of Python:: $ mpiexec -n 4 python demo.py Hello World (from process 1) Hello World (from process 2) Hello World (from process 3) Hello World (from process 0) The ``mpi4py`` package includes all kinds of mechanisms to share data between processes, synchronize, etc. It's a different flavor of parallelism than, say, threads or ``multiprocessing``, but easy to get used to. Check out the `mpi4py web site `_ for more information and a great tutorial. Building against Parallel HDF5 ------------------------------ HDF5 must be built with at least the following options:: $./configure --enable-parallel --enable-shared Note that ``--enable-shared`` is required. Often, a "parallel" version of HDF5 will be available through your package manager. You can check to see what build options were used by using the program ``h5cc``:: $ h5cc -showconfig Once you've got a Parallel-enabled build of HDF5, h5py has to be compiled in "MPI mode". Set your default compiler to the ``mpicc`` wrapper and build h5py with the ``HDF5_MPI`` environment variable:: $ export CC=mpicc $ export HDF5_MPI="ON" $ export HDF5_DIR="/path/to/parallel/hdf5" # If this isn't found by default $ pip install . Using Parallel HDF5 from h5py ----------------------------- The parallel features of HDF5 are mostly transparent. To open a file shared across multiple processes, use the ``mpio`` file driver. 
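Before relying on this driver, you can check whether your h5py build was compiled in MPI mode (a quick sanity check; the ``True`` output assumes an MPI-enabled build)::

    >>> import h5py
    >>> h5py.get_config().mpi
    True
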
Here's an example program which opens a file, creates a single dataset and fills it with the process ID:: from mpi4py import MPI import h5py rank = MPI.COMM_WORLD.rank # The process ID (integer 0-3 for 4-process run) f = h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=MPI.COMM_WORLD) dset = f.create_dataset('test', (4,), dtype='i') dset[rank] = rank f.close() Run the program:: $ mpiexec -n 4 python demo2.py Looking at the file with ``h5dump``:: $ h5dump parallel_test.hdf5 HDF5 "parallel_test.hdf5" { GROUP "/" { DATASET "test" { DATATYPE H5T_STD_I32LE DATASPACE SIMPLE { ( 4 ) / ( 4 ) } DATA { (0): 0, 1, 2, 3 } } } } Collective versus independent operations ---------------------------------------- MPI-based programs work by launching many instances of the Python interpreter, each of which runs your script. There are certain requirements imposed on what each process can do. Certain operations in HDF5, for example, anything which modifies the file metadata, must be performed by all processes. Other operations, for example, writing data to a dataset, can be performed by some processes and not others. These two classes are called *collective* and *independent* operations. Anything which modifies the *structure* or metadata of a file must be done collectively. For example, when creating a group, each process must participate:: >>> grp = f.create_group('x') # right >>> if rank == 1: ... grp = f.create_group('x') # wrong; all processes must do this On the other hand, writing data to a dataset can be done independently:: >>> if rank > 2: ... dset[rank] = 42 # this is fine MPI atomic mode --------------- HDF5 supports the MPI "atomic" file access mode, which trades speed for more stringent consistency requirements. Once you've opened a file with the ``mpio`` driver, you can place it in atomic mode using the settable ``atomic`` property:: >>> f = h5py.File('parallel_test.hdf5', 'w', driver='mpio', comm=MPI.COMM_WORLD) >>> f.atomic = True More information ---------------- Parallel HDF5 is a new feature in h5py. If you have any questions, feel free to ask on the mailing list (h5py at google groups). We welcome bug reports, enhancements and general inquiries. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/quick.rst0000644000175000017500000001314514045746670016200 0ustar00takluyvertakluyver.. _quick: Quick Start Guide ================= Install ------- With `Anaconda `_ or `Miniconda `_:: conda install h5py If there are wheels for your platform (mac, linux, windows on x86) and you do not need MPI you can install ``h5py`` via pip:: pip install h5py With `Enthought Canopy `_, use the GUI package manager or:: enpkg h5py To install from source see :ref:`install`. Core concepts ------------- An HDF5 file is a container for two kinds of objects: `datasets`, which are array-like collections of data, and `groups`, which are folder-like containers that hold datasets and other groups. The most fundamental thing to remember when using h5py is: **Groups work like dictionaries, and datasets work like NumPy arrays** Suppose someone has sent you a HDF5 file, :code:`mytestfile.hdf5`. (To create this file, read `Appendix: Creating a file`_.) The very first thing you'll need to do is to open the file for reading:: >>> import h5py >>> f = h5py.File('mytestfile.hdf5', 'r') The :ref:`File object ` is your starting point. What is stored in this file? 
Remember :py:class:`h5py.File` acts like a Python dictionary, thus we can check the keys, >>> list(f.keys()) ['mydataset'] Based on our observation, there is one data set, :code:`mydataset` in the file. Let us examine the data set as a :ref:`Dataset ` object >>> dset = f['mydataset'] The object we obtained isn't an array, but :ref:`an HDF5 dataset `. Like NumPy arrays, datasets have both a shape and a data type: >>> dset.shape (100,) >>> dset.dtype dtype('int32') They also support array-style slicing. This is how you read and write data from a dataset in the file:: >>> dset[...] = np.arange(100) >>> dset[0] 0 >>> dset[10] 10 >>> dset[0:100:10] array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90]) For more, see :ref:`file` and :ref:`dataset`. Appendix: Creating a file +++++++++++++++++++++++++ At this point, you may wonder how :code:`mytestdata.hdf5` is created. We can create a file by setting the :code:`mode` to :code:`w` when the File object is initialized. Some other modes are :code:`a` (for read/write/create access), and :code:`r+` (for read/write access). A full list of file access modes and their meanings is at :ref:`file`. :: >>> import h5py >>> import numpy as np >>> f = h5py.File("mytestfile.hdf5", "w") The :ref:`File object ` has a couple of methods which look interesting. One of them is ``create_dataset``, which as the name suggests, creates a data set of given shape and dtype :: >>> dset = f.create_dataset("mydataset", (100,), dtype='i') The File object is a context manager; so the following code works too :: >>> import h5py >>> import numpy as np >>> with h5py.File("mytestfile.hdf5", "w") as f: >>> dset = f.create_dataset("mydataset", (100,), dtype='i') Groups and hierarchical organization ------------------------------------ "HDF" stands for "Hierarchical Data Format". Every object in an HDF5 file has a name, and they're arranged in a POSIX-style hierarchy with ``/``-separators:: >>> dset.name '/mydataset' The "folders" in this system are called :ref:`groups `. The ``File`` object we created is itself a group, in this case the `root group`, named ``/``: >>> f.name '/' Creating a subgroup is accomplished via the aptly-named ``create_group``. But we need to open the file in the "append" mode first (Read/write if exists, create otherwise) :: >>> f = h5py.File('mydataset.hdf5', 'a') >>> grp = f.create_group("subgroup") All ``Group`` objects also have the ``create_*`` methods like File:: >>> dset2 = grp.create_dataset("another_dataset", (50,), dtype='f') >>> dset2.name '/subgroup/another_dataset' By the way, you don't have to create all the intermediate groups manually. Specifying a full path works just fine:: >>> dset3 = f.create_dataset('subgroup2/dataset_three', (10,), dtype='i') >>> dset3.name '/subgroup2/dataset_three' Groups support most of the Python dictionary-style interface. You retrieve objects in the file using the item-retrieval syntax:: >>> dataset_three = f['subgroup2/dataset_three'] Iterating over a group provides the names of its members:: >>> for name in f: ... print(name) mydataset subgroup subgroup2 Membership testing also uses names:: >>> "mydataset" in f True >>> "somethingelse" in f False You can even use full path names:: >>> "subgroup/another_dataset" in f True There are also the familiar ``keys()``, ``values()``, ``items()`` and ``iter()`` methods, as well as ``get()``. 
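For example, continuing with the file built above (a sketch; the exact output depends on what your file contains)::

    >>> list(f.keys())
    ['mydataset', 'subgroup', 'subgroup2']
    >>> f.get('nonexistent') is None   # get() returns the default instead of raising KeyError
    True
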
Since iterating over a group only yields its directly-attached members, iterating over an entire file is accomplished with the ``Group`` methods ``visit()`` and ``visititems()``, which take a callable:: >>> def printname(name): ... print(name) >>> f.visit(printname) mydataset subgroup subgroup/another_dataset subgroup2 subgroup2/dataset_three For more, see :ref:`group`. Attributes ---------- One of the best features of HDF5 is that you can store metadata right next to the data it describes. All groups and datasets support attached named bits of data called `attributes`. Attributes are accessed through the ``attrs`` proxy object, which again implements the dictionary interface:: >>> dset.attrs['temperature'] = 99.5 >>> dset.attrs['temperature'] 99.5 >>> 'temperature' in dset.attrs True For more, see :ref:`attributes`. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/refs.rst0000644000175000017500000000762114045746670016025 0ustar00takluyvertakluyver.. _refs: Object and Region References ============================ In addition to soft and external links, HDF5 supplies one more mechanism to refer to objects and data in a file. HDF5 *references* are low-level pointers to other objects. The great advantage of references is that they can be stored and retrieved as data; you can create an attribute or an entire dataset of reference type. References come in two flavors, object references and region references. As the name suggests, object references point to a particular object in a file, either a dataset, group or named datatype. Region references always point to a dataset, and additionally contain information about a certain selection (*dataset region*) on that dataset. For example, if you have a dataset representing an image, you could specify a region of interest, and store it as an attribute on the dataset. .. _refs_object: Using object references ----------------------- It's trivial to create a new object reference; every high-level object in h5py has a read-only property "ref", which when accessed returns a new object reference: >>> myfile = h5py.File('myfile.hdf5') >>> mygroup = myfile['/some/group'] >>> ref = mygroup.ref >>> print(ref) "Dereferencing" these objects is straightforward; use the same syntax as when opening any other object: >>> mygroup2 = myfile[ref] >>> print(mygroup2) .. _refs_region: Using region references ----------------------- Region references always contain a selection. You create them using the dataset property "regionref" and standard NumPy slicing syntax: >>> myds = myfile.create_dataset('dset', (200,200)) >>> regref = myds.regionref[0:10, 0:5] >>> print(regref) The reference itself can now be used in place of slicing arguments to the dataset: >>> subset = myds[regref] For selections which don't conform to a regular grid, h5py copies the behavior of NumPy's fancy indexing, which returns a 1D array. Note that for h5py release before 2.2, h5py always returns a 1D array. In addition to storing a selection, region references inherit from object references, and can be used anywhere an object reference is accepted. In this case the object they point to is the dataset used to create them. Storing references in a dataset ------------------------------- HDF5 treats object and region references as data. Consequently, there is a special HDF5 type to represent them. However, NumPy has no equivalent type. 
Rather than implement a special "reference type" for NumPy, references are handled at the Python layer as plain, ordinary python objects. To NumPy they are represented with the "object" dtype (kind 'O'). A small amount of metadata attached to the dtype tells h5py to interpret the data as containing reference objects. These dtypes are available from h5py for references and region references: * ``h5py.ref_dtype`` - for object references * ``h5py.regionref_dtype`` - for region references To store an array of references, use the appropriate dtype when creating the dataset: >>> ref_dataset = myfile.create_dataset("MyRefs", (100,), dtype=h5py.ref_dtype) You can read from and write to the array as normal: >>> ref_dataset[0] = myfile.ref >>> print(ref_dataset[0]) Storing references in an attribute ---------------------------------- Simply assign the reference to a name; h5py will figure it out and store it with the correct type: >>> myref = myfile.ref >>> myfile.attrs["Root group reference"] = myref Null references --------------- When you create a dataset of reference type, the uninitialized elements are "null" references. H5py uses the truth value of a reference object to indicate whether or not it is null: >>> print(bool(myfile.ref)) True >>> nullref = ref_dataset[50] >>> print(bool(nullref)) False ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/related_projects.rst0000644000175000017500000001066614701434742020413 0ustar00takluyvertakluyver.. _related_projects: Tools and Related Projects ========================== There are a number of projects which build upon h5py, or who build upon HDF5, which will likely be of interest to users of h5py. This page is non-exhaustive, but if you think there should be a project added, feel free to create an issue or pull request at https://github.com/h5py/h5py/. `PyTables `_ is the most significant related project, providing a higher level wrapper around HDF5 then h5py, and optimised to fully take advantage of some of HDF5's features. h5py provides a comparison between the two projects (see :ref:`h5py_pytable_cmp`), as does the `PyTables project `_. .. contents:: :local: IPython ------- H5py ships with a custom ipython completer, which provides object introspection and tab completion for h5py objects in an ipython session. For example, if a file contains 3 groups, "foo", "bar", and "baz":: In [4]: f['b bar baz In [4]: f['f # Completes to: In [4]: f['foo' In [4]: f['foo']. 
f['foo'].attrs f['foo'].items f['foo'].ref f['foo'].copy f['foo'].iteritems f['foo'].require_dataset f['foo'].create_dataset f['foo'].iterkeys f['foo'].require_group f['foo'].create_group f['foo'].itervalues f['foo'].values f['foo'].file f['foo'].keys f['foo'].visit f['foo'].get f['foo'].name f['foo'].visititems f['foo'].id f['foo'].parent The easiest way to enable the custom completer is to do the following in an IPython session:: In [1]: import h5py In [2]: h5py.enable_ipython_completer() The completer can be enabled for every session by adding "h5py.ipy_completer" to the list of extensions in your ipython config file, for example :file:`~/.config/ipython/profile_default/ipython_config.py` (if this file does not exist, you can create it by invoking `ipython profile create`):: c = get_config() c.InteractiveShellApp.extensions = ['h5py.ipy_completer'] Exploring and Visualising HDF5 files ------------------------------------ h5py does not contain a tool for exploring or visualising HDF5 files, but tools that can display the structure of h5py include: * `HDFView `_ is a visual tool for browsing and editing HDF5 files. * `ViTables `_ is a GUI for browsing and editing files in both PyTables and HDF5 formats, and is built on top of PyTables. * `h5glance `_ shows the structure of HDF5 files in IPython & Jupyter, as well as at the command line. .. seealso:: The PaNOSC project's `list of HDF5 & NeXus viewers `_ Additional Filters ------------------ Some projects providing additional HDF5 filter with integration into h5py include: * `hdf5plugin `_: this provides several plugins (currently Blosc, Blosc2, BitShuffle, BZip2, FciDecomp, LZ4, SZ, SZ3, Zfp, ZStd), and newer plugins should look to supporting h5py via inclusion into hdf5plugin. Libraries extending h5py ------------------------ These libraries offer additional general functionality on top of h5py: * `Versioned HDF5 `_ offers a versioned abstraction on top of h5py. It provides a wrapper around the h5py API that allows storing different versions of groups and datasets within an HDF5 file. * `h5preserve `_ lets you define how to save and load instances of a given class in HDF5 files, by writing dumper and loader functions. These functions can also have multiple versions. * `Hickle `_ provides an API like :mod:`pickle` to dump & load arbitrary Python objects in HDF5 files. * `h5pickle `_ wraps h5py to allow pickling objects such as :class:`.File` or :class:`.Dataset`. This relies on the file being available at the same path when unpickling. * `b2h5py `_ provides h5py with transparent, automatic optimized reading of n-dimensional slices of Blosc2-compressed datasets, using direct chunk access and 2-level partitioning. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/release_guide.rst0000644000175000017500000000226114045746670017656 0ustar00takluyvertakluyver.. _release_guide: Release Guide ============= h5py uses `rever `_ for release management. To install rever, use either pip or conda: .. code-block:: sh # pip $ pip install re-ver # conda $ conda install -c conda-forge rever Performing releases ------------------- Once rever is installed, always run the ``check`` command to make sure that everything you need to perform the release is correctly installed and that you have the correct permissions. All rever commands should be run in the root level of the repository. **Step 1 (repeat until successful)** .. 
code-block:: sh $ rever check Resolve any issues that may have come up, and keep running ``rever check`` until it passes. After it is successful, simply pass the version number you want to release (e.g. ``X.Y.Z``) into the rever command. **Step 2** .. code-block:: sh $ rever X.Y.Z You probably want to make sure (with ``git tag``) that the new version number is available. If any release activities fail while running this command, you may safely re-run this command. You can also safely undo previously run activities. Please see the rever docs for more details. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0 h5py-3.13.0/docs/requirements-rtd.txt0000644000175000017500000000004614532312724020367 0ustar00takluyvertakluyversphinx==7.2.6 sphinx_rtd_theme==1.3.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/special.rst0000644000175000017500000001727214350630273016477 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _special_types: Special types ============= HDF5 supports a few types which have no direct NumPy equivalent. Among the most useful and widely used are *variable-length* (VL) types, and enumerated types. As of version 2.3, h5py fully supports HDF5 enums and VL types. How special types are represented --------------------------------- Since there is no direct NumPy dtype for variable-length strings, enums or references, h5py extends the dtype system slightly to let HDF5 know how to store these types. Each type is represented by a native NumPy dtype, with a small amount of metadata attached. NumPy routines ignore the metadata, but h5py can use it to determine how to store the data. The metadata h5py attaches to dtypes is not part of the public API, so it may change between versions. Use the functions described below to create and check for these types. Variable-length strings ----------------------- .. seealso:: :ref:`strings` In HDF5, data in VL format is stored as arbitrary-length vectors of a base type. In particular, strings are stored C-style in null-terminated buffers. NumPy has no native mechanism to support this. Unfortunately, this is the de facto standard for representing strings in the HDF5 C API, and in many HDF5 applications. Thankfully, NumPy has a generic pointer type in the form of the "object" ("O") dtype. In h5py, variable-length strings are mapped to object arrays. A small amount of metadata attached to an "O" dtype tells h5py that its contents should be converted to VL strings when stored in the file. Existing VL strings can be read and written to with no additional effort; Python strings and fixed-length NumPy strings can be auto-converted to VL data and stored. Here's an example showing how to create a VL array of strings:: >>> f = h5py.File('foo.hdf5') >>> dt = h5py.string_dtype(encoding='utf-8') >>> ds = f.create_dataset('VLDS', (100,100), dtype=dt) >>> ds.dtype.kind 'O' >>> h5py.check_string_dtype(ds.dtype) string_info(encoding='utf-8', length=None) .. function:: string_dtype(encoding='utf-8', length=None) Make a numpy dtype for HDF5 strings :param encoding: ``'utf-8'`` or ``'ascii'``. :param length: ``None`` for variable-length, or an integer for fixed-length string data, giving the length in bytes. .. function:: check_string_dtype(dt) Check if ``dt`` is a string dtype. Returns a *string_info* object if it is, or ``None`` if not. .. class:: string_info A named tuple type holding string encoding and length. .. 
attribute:: encoding The character encoding associated with the string dtype, which can be ``'utf-8'`` or ``'ascii'``. .. attribute:: length For fixed-length string dtypes, the length in bytes. ``None`` for variable-length strings. .. _vlen: Arbitrary vlen data ------------------- Starting with h5py 2.3, variable-length types are not restricted to strings. For example, you can create a "ragged" array of integers:: >>> dt = h5py.vlen_dtype(np.dtype('int32')) >>> dset = f.create_dataset('vlen_int', (100,), dtype=dt) >>> dset[0] = [1,2,3] >>> dset[1] = [1,2,3,4,5] Single elements are read as NumPy arrays:: >>> dset[0] array([1, 2, 3], dtype=int32) Multidimensional selections produce an object array whose members are integer arrays:: >>> dset[0:2] array([array([1, 2, 3], dtype=int32), array([1, 2, 3, 4, 5], dtype=int32)], dtype=object) .. note:: NumPy doesn't support ragged arrays, and the 'arrays of arrays' h5py uses as a workaround are not as convenient or efficient as regular NumPy arrays. If you're deciding how to store data, consider whether there's a sensible way to do it without a variable-length type. .. function:: vlen_dtype(basetype) Make a numpy dtype for an HDF5 variable-length datatype. :param basetype: The dtype of each element in the array. .. function:: check_vlen_dtype(dt) Check if ``dt`` is a variable-length dtype. Returns the base type if it is, or ``None`` if not. Enumerated types ---------------- HDF5 has the concept of an *enumerated type*, which is an integer datatype with a restriction to certain named values. Since NumPy has no such datatype, HDF5 ENUM types are read and written as integers. Here's an example of creating an enumerated type:: >>> dt = h5py.enum_dtype({"RED": 0, "GREEN": 1, "BLUE": 42}, basetype='i') >>> h5py.check_enum_dtype(dt) {'BLUE': 42, 'GREEN': 1, 'RED': 0} >>> f = h5py.File('foo.hdf5','w') >>> ds = f.create_dataset("EnumDS", (100,100), dtype=dt) >>> ds.dtype.kind 'i' >>> ds[0,:] = 42 >>> ds[0,0] 42 >>> ds[1,0] 0 .. function:: enum_dtype(values_dict, basetype=np.uint8) Create a NumPy representation of an HDF5 enumerated type :param values_dict: Mapping of string names to integer values. :param basetype: An appropriate integer base dtype large enough to hold the possible options. .. function:: check_enum_dtype(dt) Check if ``dt`` represents an enumerated type. Returns the values dict if it is, or ``None`` if not. Object and region references ---------------------------- References have their :ref:`own section `. .. _opaque_dtypes: Storing other types as opaque data ---------------------------------- .. versionadded:: 3.0 Numpy datetime64 and timedelta64 dtypes have no equivalent in HDF5 (the HDF5 time type is broken and deprecated). h5py allows you to store such data with an HDF5 opaque type; it can be read back correctly by h5py, but won't be interoperable with other tools. Here's an example of storing and reading a datetime array:: >>> arr = np.array([np.datetime64('2019-09-22T17:38:30')]) >>> f['data'] = arr.astype(h5py.opaque_dtype(arr.dtype)) >>> print(f['data'][:]) ['2019-09-22T17:38:30'] .. function:: opaque_dtype(dt) Return a dtype like the input, tagged to be stored as HDF5 opaque type. .. function:: check_opaque_dtype(dt) Return True if the dtype given is tagged to be stored as HDF5 opaque data. .. note:: With some exceptions, you can use :func:`opaque_dtype` with any numpy dtype. While this may seem like a convenient way to get arbitrary data into HDF5, remember that it's not a standard format. 
It's better to fit your data into HDF5's native structures, or use a file format better suited to your data. Older API --------- Before h5py 2.10, a single pair of functions was used to create and check for all of these special dtypes. These are still available for backwards compatibility, but are deprecated in favour of the functions listed above. .. function:: special_dtype(**kwds) Create a NumPy dtype object containing type hints. Only one keyword may be specified. :param vlen: Base type for HDF5 variable-length datatype. :param enum: 2-tuple ``(basetype, values_dict)``. ``basetype`` must be an integer dtype; ``values_dict`` is a dictionary mapping string names to integer values. :param ref: Provide class ``h5py.Reference`` or ``h5py.RegionReference`` to create a type representing object or region references respectively. .. function:: check_dtype(**kwds) Determine if the given dtype object is a special type. Example:: >>> out = h5py.check_dtype(vlen=mydtype) >>> if out is not None: ... print("Vlen of type %s" % out) str :param vlen: Check for an HDF5 variable-length type; returns base class :param enum: Check for an enumerated type; returns 2-tuple ``(basetype, values_dict)``. :param ref: Check for an HDF5 object or region reference; returns either ``h5py.Reference`` or ``h5py.RegionReference``. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0 h5py-3.13.0/docs/strings.rst0000644000175000017500000001501314675110407016541 0ustar00takluyvertakluyver.. _strings: Strings in HDF5 =============== .. note:: The rules around reading & writing string data were redesigned for h5py 3.0. Refer to `the h5py 2.10 docs `__ for how to store strings in older versions. Reading strings --------------- String data in HDF5 datasets is read as bytes by default: ``bytes`` objects for variable-length strings, or numpy bytes arrays (``'S'`` dtypes) for fixed-length strings. Use :meth:`.Dataset.asstr` to retrieve ``str`` objects. Variable-length strings in attributes are read as ``str`` objects. These are decoded as UTF-8 with surrogate escaping for unrecognised bytes. Fixed-length strings are read as numpy bytes arrays, the same as for datasets. Storing strings --------------- When creating a new dataset or attribute, Python ``str`` or ``bytes`` objects will be treated as variable-length strings, marked as UTF-8 and ASCII respectively. Numpy bytes arrays (``'S'`` dtypes) make fixed-length strings. You can use :func:`.string_dtype` to explicitly specify any HDF5 string datatype, as shown in the examples below:: string_data = ["varying", "sizes", "of", "strings"] # Variable length strings (implicit) f['vlen_strings1'] = string_data # Variable length strings (explicit) ds = f.create_dataset('vlen_strings2', shape=4, dtype=h5py.string_dtype()) ds[:] = string_data # Fixed length strings (implicit) - longer strings are truncated f['fixed_strings1'] = np.array(string_data, dtype='S6') # Fixed length strings (explicit) - longer strings are truncated ds = f.create_dataset('fixed_strings2', shape=4, dtype=h5py.string_dtype(length=6)) ds[:] = string_data When writing data to an existing dataset or attribute, data passed as bytes is written without checking the encoding. Data passed as Python ``str`` objects is encoded as either ASCII or UTF-8, based on the HDF5 datatype. In either case, null bytes (``'\x00'``) in the data will cause an error. .. warning:: Fixed-length string datasets will silently truncate longer strings which are written to them. 
Numpy byte string arrays do the same thing. Fixed-length strings in HDF5 hold a set number of bytes. It may take multiple bytes to store one character. What about NumPy's ``U`` type? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ NumPy also has a Unicode type, a UTF-32 fixed-width format (4-byte characters). HDF5 has no support for wide characters. Rather than trying to hack around this and "pretend" to support it, h5py will raise an error if you try to store data of this type. .. _str_binary: How to store raw binary data ---------------------------- If you have a non-text blob in a Python byte string (as opposed to ASCII or UTF-8 encoded text, which is fine), you should wrap it in a ``void`` type for storage. This will map to the HDF5 OPAQUE datatype, and will prevent your blob from getting mangled by the string machinery. Here's an example of how to store binary data in an attribute, and then recover it:: >>> binary_blob = b"Hello\x00Hello\x00" >>> dset.attrs["attribute_name"] = np.void(binary_blob) >>> out = dset.attrs["attribute_name"] >>> binary_blob = out.tobytes() Object names ------------ Unicode strings are used exclusively for object names in the file:: >>> f.name '/' You can supply either byte or unicode strings when creating or retrieving objects. If a byte string is supplied, it will be used as-is; Unicode strings will be encoded as UTF-8. In the file, h5py uses the most-compatible representation; H5T_CSET_ASCII for characters in the ASCII range; H5T_CSET_UTF8 otherwise. >>> grp = f.create_dataset(b"name") >>> grp2 = f.create_dataset("name2") .. _str_encodings: Encodings --------- HDF5 supports two string encodings: ASCII and UTF-8. We recommend using UTF-8 when creating HDF5 files, and this is what h5py does by default with Python ``str`` objects. If you need to write ASCII for compatibility reasons, you should ensure you only write pure ASCII characters (this can be done by ``your_string.encode("ascii")``), as otherwise your text may turn into `mojibake `_. You can use :func:`.string_dtype` to specify the encoding for string data. .. seealso:: `Joel Spolsky's introduction to Unicode & character sets `_ If this section looks like gibberish, try this. For reading, as long as the encoding metadata is correct, the defaults for :meth:`.Dataset.asstr` will always work. However, HDF5 does not enforce the string encoding, and there are files where the encoding metadata doesn't match what's really stored. Most commonly, data marked as ASCII may be in one of the many "Extended ASCII" encodings such as Latin-1. If you know what encoding your data is in, you can specify this using :meth:`.Dataset.asstr`. If you have data in an unknown encoding, you can also use any of the `builtin python error handlers `_. Variable-length strings in attributes are read as ``str`` objects, decoded as UTF-8 with the ``'surrogateescape'`` error handler. If an attribute is incorrectly encoded, you'll see 'surrogate' characters such as ``'\udcb1'`` when reading it:: >>> s = "2.0±0.1" >>> f.attrs["string_good"] = s # Good - h5py uses UTF-8 >>> f.attrs["string_bad"] = s.encode("latin-1") # Bad! 
>>> f.attrs["string_bad"] '2.0\udcb10.1' To recover the original string, you'll need to *encode* it with UTF-8, and then decode it with the correct encoding:: >>> f.attrs["string_bad"].encode('utf-8', 'surrogateescape').decode('latin-1') '2.0±0.1' Fixed length strings are different; h5py doesn't try to decode them:: >>> s = "2.0±0.1" >>> utf8_type = h5py.string_dtype('utf-8', 30) >>> ascii_type = h5py.string_dtype('ascii', 30) >>> f.attrs["fixed_good"] = np.array(s.encode("utf-8"), dtype=utf8_type) >>> f.attrs["fixed_bad"] = np.array(s.encode("latin-1"), dtype=ascii_type) >>> f.attrs["fixed_bad"] b'2.0\xb10.1' >>> f.attrs["fixed_bad"].decode("utf-8") Traceback (most recent call last): File "", line 1, in f.attrs["fixed_bad"].decode("utf-8") UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 3: invalid start byte >>> f.attrs["fixed_bad"].decode("latin-1") '2.0±0.1' As we get bytes back, we only need to decode them with the correct encoding. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/swmr.rst0000644000175000017500000001550614701434742016050 0ustar00takluyvertakluyver.. _swmr: Single Writer Multiple Reader (SWMR) ==================================== Starting with version 2.5.0, h5py includes support for the HDF5 SWMR features. What is SWMR? ------------- The SWMR features allow simple concurrent reading of a HDF5 file while it is being written from another process. Prior to this feature addition it was not possible to do this as the file data and meta-data would not be synchronised and attempts to read a file which was open for writing would fail or result in garbage data. A file which is being written to in SWMR mode is guaranteed to always be in a valid (non-corrupt) state for reading. This has the added benefit of leaving a file in a valid state even if the writing application crashes before closing the file properly. This feature has been implemented to work with independent writer and reader processes. No synchronisation is required between processes and it is up to the user to implement either a file polling mechanism, inotify or any other IPC mechanism to notify when data has been written. The SWMR functionality requires use of the latest HDF5 file format: v110. In practice this implies using at least HDF5 1.10 (this can be checked via ``h5py.version.info``) and setting the libver bounding to "latest" when opening or creating the file. .. Warning:: New v110 format files are *not* compatible with v18 format. So, files written in SWMR mode with libver='latest' cannot be opened with older versions of the HDF5 library (basically any version older than the SWMR feature). The HDF Group has documented the SWMR features in details on the website: `Single-Writer/Multiple-Reader (SWMR) Documentation `_. This is highly recommended reading for anyone intending to use the SWMR feature even through h5py. For production systems in particular pay attention to the file system requirements regarding POSIX I/O semantics. Using the SWMR feature from h5py -------------------------------- The following basic steps are typically required by writer and reader processes: - Writer process creates the target file and all groups, datasets and attributes. - Writer process switches file into SWMR mode. - Reader process can open the file with swmr=True. - Writer writes and/or appends data to existing datasets (new groups and datasets *cannot* be created when in SWMR mode). 
- Writer regularly flushes the target dataset to make it visible to reader processes. - Reader refreshes target dataset before reading new meta-data and/or main data. - Writer eventually completes and close the file as normal. - Reader can finish and close file as normal whenever it is convenient. The following snippet demonstrate a SWMR writer appending to a single dataset:: f = h5py.File("swmr.h5", 'w', libver='latest') arr = np.array([1,2,3,4]) dset = f.create_dataset("data", chunks=(2,), maxshape=(None,), data=arr) f.swmr_mode = True # Now it is safe for the reader to open the swmr.h5 file for i in range(5): new_shape = ((i+1) * len(arr), ) dset.resize( new_shape ) dset[i*len(arr):] = arr dset.flush() # Notify the reader process that new data has been written The following snippet demonstrate how to monitor a dataset as a SWMR reader:: f = h5py.File("swmr.h5", 'r', libver='latest', swmr=True) dset = f["data"] while True: dset.id.refresh() shape = dset.shape print( shape ) Examples -------- In addition to the above example snippets, a few more complete examples can be found in the examples folder. These examples are described in the following sections. Dataset monitor with inotify ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The inotify example demonstrates how to use SWMR in a reading application which monitors live progress as a dataset is being written by another process. This example uses the the linux inotify (`pyinotify `_ python bindings) to receive a signal each time the target file has been updated. .. literalinclude:: ../examples/swmr_inotify_example.py Multiprocess concurrent write and read ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The SWMR multiprocess example starts two concurrent child processes: a writer and a reader. The writer process first creates the target file and dataset. Then it switches the file into SWMR mode and the reader process is notified (with a multiprocessing.Event) that it is safe to open the file for reading. The writer process then continue to append chunks to the dataset. After each write it notifies the reader that new data has been written. Whether the new data is visible in the file at this point is subject to OS and file system latencies. The reader first waits for the initial "SWMR mode" notification from the writer, upon which it goes into a loop where it waits for further notifications from the writer. The reader may drop some notifications, but for each one received it will refresh the dataset and read the dimensions. After a time-out it will drop out of the loop and exit. .. 
literalinclude:: ../examples/swmr_multiprocess.py The example output below (from a virtual Ubuntu machine) illustrate some latency between the writer and reader:: python examples/swmr_multiprocess.py INFO 2015-02-26 18:05:03,195 root Starting reader INFO 2015-02-26 18:05:03,196 root Starting reader INFO 2015-02-26 18:05:03,197 reader Waiting for initial event INFO 2015-02-26 18:05:03,197 root Waiting for writer to finish INFO 2015-02-26 18:05:03,198 writer Creating file swmrmp.h5 INFO 2015-02-26 18:05:03,203 writer SWMR mode INFO 2015-02-26 18:05:03,205 reader Opening file swmrmp.h5 INFO 2015-02-26 18:05:03,210 writer Resizing dset shape: (4,) INFO 2015-02-26 18:05:03,212 writer Sending event INFO 2015-02-26 18:05:03,213 reader Read dset shape: (4,) INFO 2015-02-26 18:05:03,214 writer Resizing dset shape: (8,) INFO 2015-02-26 18:05:03,214 writer Sending event INFO 2015-02-26 18:05:03,215 writer Resizing dset shape: (12,) INFO 2015-02-26 18:05:03,215 writer Sending event INFO 2015-02-26 18:05:03,215 writer Resizing dset shape: (16,) INFO 2015-02-26 18:05:03,215 reader Read dset shape: (12,) INFO 2015-02-26 18:05:03,216 writer Sending event INFO 2015-02-26 18:05:03,216 writer Resizing dset shape: (20,) INFO 2015-02-26 18:05:03,216 reader Read dset shape: (16,) INFO 2015-02-26 18:05:03,217 writer Sending event INFO 2015-02-26 18:05:03,217 reader Read dset shape: (20,) INFO 2015-02-26 18:05:03,218 reader Read dset shape: (20,) INFO 2015-02-26 18:05:03,219 root Waiting for reader to finish ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/vds.rst0000644000175000017500000001333214701434742015647 0ustar00takluyvertakluyver.. currentmodule:: h5py .. _vds: Virtual Datasets (VDS) ====================== Starting with version 2.9, h5py includes high-level support for HDF5 'virtual datasets'. The VDS feature is available in version 1.10 of the HDF5 library; h5py must be built with a new enough version of HDF5 to create or read virtual datasets. What are virtual datasets? -------------------------- Virtual datasets allow a number of real datasets to be mapped together into a single, sliceable dataset via an interface layer. The mapping can be made ahead of time, before the parent files are written, and is transparent to the parent dataset characteristics (SWMR, chunking, compression etc...). The datasets can be meshed in arbitrary combinations, and even the data type converted. Once a virtual dataset has been created, it can be read just like any other HDF5 dataset. .. Warning:: Virtual dataset files cannot be opened with versions of the HDF5 library older than 1.10. The HDF Group has overview of the VDS features on their website: `Virtual Datasets (VDS) Documentation `_. .. _creating_vds: Creating virtual datasets in h5py --------------------------------- To make a virtual dataset using h5py, you need to: 1. Create a :class:`VirtualLayout` object representing the dimensions and data type of the virtual dataset. 2. Create a number of :class:`VirtualSource` objects, representing the datasets the array will be built from. These objects can be created either from an h5py :class:`Dataset`, or from a filename, dataset name and shape. This can be done even before the source file exists. 3. Map slices from the sources into the layout. 4. Convert the :class:`VirtualLayout` object into a virtual dataset in an HDF5 file. 
The following snippet creates a virtual dataset to stack together four 1D datasets from separate files into a 2D dataset:: layout = h5py.VirtualLayout(shape=(4, 100), dtype='i4') for n in range(1, 5): filename = "{}.h5".format(n) vsource = h5py.VirtualSource(filename, 'data', shape=(100,)) layout[n - 1] = vsource # Add virtual dataset to output file with h5py.File("VDS.h5", 'w', libver='latest') as f: f.create_virtual_dataset('data', layout, fillvalue=-5) This is an extract from the ``vds_simple.py`` example in the examples folder. .. note:: Slices up to ``h5py.h5s.UNLIMITED`` can be used to create an unlimited selection along a single axis. Resizing the source data along this axis will cause the virtual dataset to grow. E.g.:: layout[n - 1, :UNLIMITED] = vsource[:UNLIMITED] A normal slice with no defined end point (``[:]``) is fixed based on the shape when you define it. .. versionadded:: 3.0 Examples -------- In addition to the above example snippet, a few more complete examples can be found in the examples folder: - `vds_simple.py `_ is a self-contained, runnable example which creates four source files, and then maps them into a virtual dataset as shown above. - `dataset_concatenation.py `_ illustrates virtually stacking datasets together along a new axis. - A number of examples are based on the sample use cases presented in the `virtual datasets RFC `__: - `excalibur_detector_modules.py `_ - `dual_pco_edge.py `_ - `eiger_use_case.py `_ - `percival_use_case.py `_ Reference --------- .. class:: VirtualLayout(shape, dtype, maxshape=None) Object for building a virtual dataset. Instantiate this class to define a virtual dataset, assign :class:`VirtualSource` objects to slices of it, and then pass it to :meth:`Group.create_virtual_dataset` to add the virtual dataset to a file. This class does not allow access to the data; the virtual dataset must be created in a file before it can be used. :param tuple shape: The full shape of the virtual dataset. :param dtype: Numpy dtype or string. :param tuple maxshape: The virtual dataset is resizable up to this shape. Use None for axes you want to be unlimited. .. class:: VirtualSource(path_or_dataset, name=None, shape=None, dtype=None, \ maxshape=None) Source definition for virtual data sets. Instantiate this class to represent an entire source dataset, and then slice it to indicate which regions should be used in the virtual dataset. When `creating a virtual dataset `_, paths to sources present in the same file are changed to a ".", referring to the current file (see `H5Pset_virtual `_). This will keep such sources valid in case the file is renamed. :param path_or_dataset: The path to a file, or a :class:`Dataset` object. If a dataset is given, no other parameters are allowed, as the relevant values are taken from the dataset instead. :param str name: The name of the source dataset within the file. :param tuple shape: The full shape of the source dataset. :param dtype: Numpy dtype or string. :param tuple maxshape: The source dataset is resizable up to this shape. Use None for axes you want to be unlimited. 
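As a brief illustration of how these two classes fit together (a sketch only; the file and dataset names below are made up), a resizable source can be mapped into a layout that grows along with it, using the unlimited selections described above::

    layout = h5py.VirtualLayout(shape=(3, 100), maxshape=(3, None), dtype='i4')

    for n in range(3):
        # Each (hypothetical) source dataset is resizable along its single axis
        vsource = h5py.VirtualSource("raw_{}.h5".format(n), 'data',
                                     shape=(100,), maxshape=(None,))
        # Map an unlimited selection, so the virtual dataset grows with its source
        layout[n, :h5py.h5s.UNLIMITED] = vsource[:h5py.h5s.UNLIMITED]

    with h5py.File("VDS_unlimited.h5", 'w', libver='latest') as f:
        f.create_virtual_dataset('data', layout, fillvalue=-1)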
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.547882 h5py-3.13.0/docs/whatsnew/0000755000175000017500000000000014755127217016164 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.0.rst0000644000175000017500000001572214045746670017226 0ustar00takluyvertakluyverWhat's new in h5py 2.0 ====================== HDF5 for Python (h5py) 2.0 represents the first major refactoring of the h5py codebase since the project's launch in 2008. Many of the most important changes are behind the scenes, and include changes to the way h5py interacts with the HDF5 library and Python. These changes have substantially improved h5py's stability, and make it possible to use more modern versions of HDF5 without compatibility concerns. It is now also possible to use h5py with Python 3. Enhancements unlikely to affect compatibility --------------------------------------------- * HDF5 1.8.3 through 1.8.7 now work correctly and are officially supported. * Python 3.2 is officially supported by h5py! Thanks especially to Darren Dale for getting this working. * Fill values can now be specified when creating a dataset. The fill time is H5D_FILL_TIME_IFSET for contiguous datasets, and H5D_FILL_TIME_ALLOC for chunked datasets. * On Python 3, dictionary-style methods like Group.keys() and Group.values() return view-like objects instead of lists. * Object and region references now work correctly in compound types. * Zero-length dimensions for extendable axes are now allowed. * H5py no longer attempts to auto-import ipython on startup. * File format bounds can now be given when opening a high-level File object (keyword "libver"). Changes which may break existing code ------------------------------------- Supported HDF5/Python versions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * HDF5 1.6.X is no longer supported on any platform; following the release of 1.6.10 some time ago, this branch is no longer maintained by The HDF Group. * Python 2.6 or later is now required to run h5py. This is a consequence of the numerous changes made to h5py for Python 3 compatibility. * On Python 2.6, unittest2 is now required to run the test suite. Group, Dataset and Datatype constructors have changed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In h5py 2.0, it is no longer possible to create new groups, datasets or named datatypes by passing names and settings to the constructors directly. Instead, you should use the standard Group methods create_group and create_dataset. The File constructor remains unchanged and is still the correct mechanism for opening and creating files. Code which manually creates Group, Dataset or Datatype objects will have to be modified to use create_group or create_dataset. File-resident datatypes can be created by assigning a NumPy dtype to a name (e.g. mygroup["name"] = numpy.dtype('S10')). Unicode is now used for object names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Older versions of h5py used byte strings to represent names in the file. Starting with version 2.0, you may use either byte or unicode strings to create objects, but object names (obj.name, etc) will generally be returned as Unicode. Code which may be affected: * Anything which uses "isinstance" or explicit type checks on names, expecting "str" objects. Such checks should be removed, or changed to compare to "basestring" instead. 
* In Python 2.X, other parts of your application may complain if they are handed Unicode data which can't be encoded down to ascii. This is a general problem in Python 2. File objects must be manually closed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ With h5py 1.3, when File objects (or low-level FileID) objects went out of scope, the corresponding HDF5 file was closed. This led to surprising behavior, especially when files were opened with the H5F_CLOSE_STRONG flag; "losing" the original File object meant that all open groups and datasets suddenly became invalid. Beginning with h5py 2.0, files must be manually closed, by calling the "close" method or by using the file object as a context manager. If you forget to close a file, the HDF5 library will try to close it for you when the application exits. Please note that opening the same file multiple times (i.e. without closing it first) continues to result in undefined behavior. Changes to scalar slicing code ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When a scalar dataset was accessed with the syntax ``dataset[()]``, h5py incorrectly returned an ndarray. H5py now correctly returns an array scalar. Using ``dataset[...]`` on a scalar dataset still returns an ndarray. Array scalars now always returned when indexing a dataset ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When using datasets of compound type, retrieving a single element incorrectly returned a tuple of values, rather than an instance of ``numpy.void_`` with the proper fields populated. Among other things, this meant you couldn't do things like ``dataset[index][field]``. H5py now always returns an array scalar, except in the case of object dtypes (references, vlen strings). Reading object-like data strips special type information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the past, reading multiple data points from dataset with vlen or reference type returned a Numpy array with a "special dtype" (such as those created by ``h5py.special_dtype()``). In h5py 2.0, all such arrays now have a generic Numpy object dtype (``numpy.dtype('O')``). To get a copy of the dataset's dtype, always use the dataset's dtype property directly (``mydataset.dtype``). The selections module has been removed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Only numpy-style slicing arguments remain supported in the high level interface. Existing code which uses the selections module should be refactored to use numpy slicing (and ``numpy.s_`` as appropriate), or the standard C-style HDF5 dataspace machinery. The H5Error exception class has been removed (along with h5py.h5e) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ All h5py exceptions are now native Python exceptions, no longer inheriting from H5Error. RuntimeError is raised if h5py can't figure out what exception is appropriate... every instance of this behavior is considered a bug. If you see h5py raising RuntimeError please report it so we can add the correct mapping! The old errors module (h5py.h5e) has also been removed. There is no public error-management API. File .mode property is now either 'r' or 'r+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Files can be opened using the same mode arguments as before, but now the property File.mode will always return 'r' (read-only) or 'r+' (read-write). Long-deprecated dict methods have been removed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Certain ancient aliases for Group/AttributeManager methods (e.g. ``listnames``) have been removed. 
Please use the standard Python dict interface (Python 2 or Python 3 as appropriate) to interact with these objects. Known issues ------------ * Thread support has been improved in h5py 2.0. However, we still recommend that for your own sanity you use locking to serialize access to files. * There are reports of crashes related to storing object and region references. If this happens to you, please post on the mailing list or contact the h5py author directly. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.1.rst0000644000175000017500000000345014045746670017222 0ustar00takluyvertakluyverWhat's new in h5py 2.1 ====================== Dimension scales ---------------- H5py now supports the Dimension Scales feature of HDF5! Thanks to Darren Dale for implementing this. You can find more information on using scales in the :ref:`dimension_scales` section of the docs. Unicode strings allowed in attributes ------------------------------------- Group, dataset and attribute names in h5py 2.X can all be given as unicode. Now, you can also store (scalar) unicode data in attribute values as well:: >>> myfile.attrs['x'] = u"I'm a Unicode string!" Storing Unicode strings in datasets or as members of compound types is not yet implemented. Dataset size property --------------------- Dataset objects now expose a ``.size`` property which provides the total number of elements in the dataspace. ``Dataset.value`` property is now deprecated. --------------------------------------------- The property ``Dataset.value``, which dates back to h5py 1.0, is deprecated and will be removed in a later release. This property dumps the entire dataset into a NumPy array. Code using ``.value`` should be updated to use NumPy indexing, using ``mydataset[...]`` or ``mydataset[()]`` as appropriate. Bug fixes --------- * Object and region references were sometimes incorrectly wrapped wrapped in a ``numpy.object_`` instance (issue 202) * H5py now ignores old versions of Cython (<0.13) when building (issue 221) * Link access property lists weren't being properly tracked in the high level interface (issue 212) * Race condition fixed in identifier tracking which led to Python crashes (issue 151) * Highlevel objects will now complain if you try to bind them to the wrong HDF5 object types (issue 191) * Unit tests can now be run after installation (issue 201) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.10.rst0000644000175000017500000001247614045746670017312 0ustar00takluyvertakluyverWhat's new in h5py 2.10 ======================= New features ------------ - HDF5 8-bit bitfield data can now be read either as uint8 or booleans (:issue:`821`). Pytables stores booleans as this type. For now, you must pick which type to use explicitly:: with dset.astype(numpy.uint8): # or numpy.bool arr = dset[:] - Numpy arrays of integers can now be used for fancy indexing, where previously a Python list was required (:issue:`963`). - Fancy indexing now allows an empty list or array (:issue:`1174`). - IPython can now tab-complete names in h5py groups and attributes without any special user action (:issue:`1228`). This simple completion only matches the first level of keys in a group, not subkeys. You can still call ``h5py.enable_ipython_completion()`` for more complete results. 
- The ``libver`` parameter for :class:`File` now accepts ``'v108'`` and ``'v110'`` to specify compatibility with HDF5 1.8 or 1.10 (:issue:`1155`). See :ref:`file_version` for details. - New functions and constants for getting and identifying :ref:`special data types ` - :func:`string_dtype`, :func:`vlen_dtype`, :func:`enum_dtype`, ``ref_dtype`` and ``regionref_dtype`` replace :func:`special_dtype`. For identifying string, vlen and enum dtypes, :func:`check_string_dtype`, :func:`check_vlen_dtype` and :func:`check_enum_dtype` replace :func:`check_dtype` (:issue:`1132`). - A new method :meth:`~.Dataset.make_scale` to conveniently make a dataset into a :ref:`dimension scale ` (:issue:`830`, :issue:`1212`). - A new method :meth:`AttributeManager.get_id` to get a low-level :class:`~h5py.h5a.AttrID` object referring to an attribute (:issue:`1278`). - Several examples were updated to run on Python 3 (:issue:`1149`). Deprecations ------------ - The default behaviour of ``h5py.File`` with no specified mode is deprecated (:issue:`1143`). It currently tries to create a file or open it for read/write access, silently falling back to read-only depending on permissions. From h5py 3.0, the default will be read-only. Ideally, code should pass an explicit mode each time a file is opened:: h5py.File("example.h5", "r") The possible modes are described in :ref:`file_open`. If you want to suppress the deprecation warnings from code you can't modify, you can either: - set ``h5.get_config().default_file_mode = 'r'`` (or another available mode) - or set the environment variable ``H5PY_DEFAULT_READONLY`` to any non-empty string, to adopt the future default. - This is expected to be the last h5py release to support Python 2.7 and 3.4. Exposing HDF5 functions ----------------------- - ``H5Zunregister`` exposed as :func:`h5z.unregister_filter` (:issue:`746`, :issue:`1224`). - The new module :mod:`h5py.h5pl` module exposes various ``H5PL`` functions to inspect and modify the search path for plugins (:issue:`1166`, :issue:`1256`). - ``H5Dread_chunk`` exposed as :func:`h5d.read_direct_chunk` (:issue:`1190`). Bugfixes -------- - Fix crash with empty variable-length data (:issue:`1248`, :issue:`1253`). - Fixed random selection of data type when reading 64-bit floats on Windows where Python uses random dictionary order (:issue:`1051`, :issue:`1134`). - Pickling h5py objects now fails explicitly. It previously failed on unpickling, and we can't reliably serialise and restore handles to HDF5 objects anyway (:issue:`531`, :issue:`1194`). If you need to use these objects in other processes, you could explicitly serialise the filename and the name of the object inside the file. Or consider `h5pickle `_, which does the same implicitly. - Creating a dataset with external storage can no longer mutate the ``external`` list parameter passed in (:issue:`1205`). It also has improved error messages (:issue:`1204`). - Certain deprecation warnings will now show the relevant line of code which uses the deprecated feature (:issue:`1146`). - Skipped a failing test for complex floating point numbers on 32-bit x86 systems (:issue:`1235`). - Disabled the longdouble type on the ``ppc64le`` architecture, as it was causing segfaults with more commonly used float types (:issue:`1243`). - Documented that nested compound types are not currently supported (:issue:`1236`). - Fixed attribute ``create`` method to be consistent with ``__setattr__`` (:issue:`1265`). 
Building h5py ------------- - The version of HDF5 can now be automatically detected on Windows (:issue:`1123`). - Fixed autodetecting the version from libhdf5 in default locations on Windows and Mac (:issue:`1240`). - Fail to build if it can't detect version from libhdf5, rather than assuming 1.8.4 as a default (:issue:`1241`). - Building h5py from source on Unix platforms now requires either ``pkg-config`` or an explicitly specified path to HDF5 (:issue:`1231`). Previously it had a hardcoded default path, but when this was wrong, the failures were unnecessarily confusing. - The Cython 'language level' is now explicitly set to 2, to prepare h5py for changing defaults in Cython (:issue:`1171`). - Avoid using ``setup_requires`` when pip calls ``setup.py egg_info`` (:issue:`1259`). Development ----------- - h5py's tests are now run by pytest (:issue:`1003`), and coverage reports are automatically generated `on Codecov `_. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.2.rst0000644000175000017500000000510114045746670017216 0ustar00takluyvertakluyverWhat's new in h5py 2.2 ====================== Support for Parallel HDF5 ------------------------- On UNIX platforms, you can now take advantage of MPI and Parallel HDF5. Cython, ``mpi4py`` and an MPI-enabled build of HDF5 are required.. See :ref:`parallel` in the documentation for details. Support for Python 3.3 ---------------------- Python 3.3 is now officially supported. Mini float support (issue #141) ------------------------------- Two-byte floats (NumPy ``float16``) are supported. HDF5 scale/offset filter ------------------------ The Scale/Offset filter added in HDF5 1.8 is now available. Field indexing is now allowed when writing to a dataset (issue #42) ------------------------------------------------------------------- H5py has long supported reading only certain fields from a dataset:: >>> dset = f.create_dataset('x', (100,), dtype=np.dtype([('a', 'f'), ('b', 'i')])) >>> out = dset['a', 0:100:10] >>> out.dtype dtype('float32') Now, field names are also allowed when writing to a dataset: >>> dset['a', 20:50] = 1.0 Region references preserve shape (issue #295) --------------------------------------------- Previously, region references always resulted in a 1D selection, even when 2D slicing was used:: >>> dset = f.create_dataset('x', (10, 10)) >>> ref = dset.regionref[0:5,0:5] >>> out = dset[ref] >>> out.shape (25,) Shape is now preserved:: >>> out = dset[ref] >>> out.shape (5, 5) Additionally, the shape of both the target dataspace and the selection shape can be determined via new methods on the ``regionref`` proxy (now available on both datasets and groups):: >>> f.regionref.shape(ref) (10, 10) >>> f.regionref.selection(ref) (5, 5) Committed types can be linked to datasets and attributes -------------------------------------------------------- HDF5 supports "shared" named types stored in the file:: >>> f['name'] = np.dtype("int64") You can now use these types when creating a new dataset or attribute, and HDF5 will "link" the dataset type to the named type:: >>> dset = f.create_dataset('int dataset', (10,), dtype=f['name']) >>> f.attrs.create('int scalar attribute', shape=(), dtype=f['name']) ``move`` method on Group objects -------------------------------- It's no longer necessary to move objects in a file by manually re-linking them:: >>> f.create_group('a') >>> f['b'] = f['a'] >>> del f['a'] The method ``Group.move`` allows this to be performed in one step:: >>> 
f.move('a', 'b') Both the source and destination must be in the same file. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.3.rst0000644000175000017500000000506014045746670017223 0ustar00takluyvertakluyverWhat's new in h5py 2.3 ====================== Support for arbitrary vlen data ------------------------------- Variable-length data is :ref:`no longer restricted to strings `. You can use this feature to produce "ragged" arrays, whose members are 1D arrays of variable length. The implementation of special types was changed to use the NumPy dtype "metadata" field. This change should be transparent, as access to special types is handled through ``h5py.special_dtype`` and ``h5py.check_dtype``. Improved exception messages --------------------------- H5py has historically suffered from low-detail exception messages generated automatically by HDF5. While the exception types in 2.3 remain identical to those in 2.2, the messages have been substantially improved to provide more information as to the source of the error. Examples:: ValueError: Unable to set extend dataset (Dimension cannot exceed the existing maximal size (new: 100 max: 1)) IOError: Unable to open file (Unable to open file: name = 'x3', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0) KeyError: "Unable to open object (Object 'foo' doesn't exist)" Improved setuptools support --------------------------- ``setup.py`` now uses ``setup_requires`` to make installation via pip friendlier. Multiple low-level additions ---------------------------- Improved support for opening datasets via the low-level interface, by adding ``H5Dopen2`` and many new property-list functions. Improved support for MPI features --------------------------------- Added support for retrieving the MPI communicator and info objects from an open file. Added boilerplate code to allow compiling cleanly against newer versions of mpi4py. Readonly files can now be opened in default mode ------------------------------------------------ When opening a read-only file with no mode flags, now defaults to opening the file on RO mode rather than raising an exception. Single-step build for HDF5 on Windows ------------------------------------- Building h5py on windows has typically been hamstrung by the need to build a compatible version of HDF5 first. A new Paver-based system located in the "windows" distribution directory allows single-step compilation of HDF5 with settings that are known to work with h5py. For more, see: https://github.com/h5py/h5py/tree/master/windows Thanks to --------- * Martin Teichmann * Florian Rathgerber * Pierre de Buyl * Thomas Caswell * Andy Salnikov * Darren Dale * Robert David Grant * Toon Verstraelen * Many others who contributed bug reports ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.4.rst0000644000175000017500000000263414045746670017230 0ustar00takluyvertakluyverWhat's new in h5py 2.4 ====================== Build system changes -------------------- The setup.py-based build system has been reworked to be more maintainable, and to fix certain long-standing bugs. As a consequence, the options to setup.py have changed; a new top-level "configure" command handles options like ``--hdf5=/path/to/hdf5`` and ``--mpi``. Setup.py now works correctly under Python 3 when these options are used. 
Cython (0.17+) is now required when building from source on all platforms; the .c files are no longer shipped in the UNIX release. The minimum NumPy version is now 1.6.1. Files will now auto-close ------------------------- Files are now automatically closed when all objects within them are unreachable. Previously, if File.close() was not explicitly called, files would remain open and "leaks" were possible if the File object was lost. Thread safety improvements -------------------------- Access to all APIs, high- and low-level, are now protected by a global lock. The entire API is now believed to be thread-safe. Feedback and real-world testing is welcome. External link improvements -------------------------- External links now work if the target file is already open. Previously this was not possible because of a mismatch in the file close strengths. Thanks to --------- Many people, but especially: * Matthieu Brucher * Laurence Hole * John Tyree * Pierre de Buyl * Matthew Brett ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/2.5.rst0000644000175000017500000000463414350630273017221 0ustar00takluyvertakluyverWhat's new in h5py 2.5 ====================== Experimental support for Single Writer Multiple Reader (SWMR) ------------------------------------------------------------- This release introduces experimental support for the highly-anticipated "Single Writer Multiple Reader" (SWMR) feature in the upcoming HDF5 1.10 release. SWMR allows sharing of a single HDF5 file between multiple processes without the complexity of MPI or multiprocessing-based solutions. This is an experimental feature that should NOT be used in production code. We are interested in getting feedback from the broader community with respect to performance and the API design. For more details, check out the h5py user guide: https://docs.h5py.org/en/latest/swmr.html SWMR support was contributed by Ulrik Pedersen (:pr:`551`). Other changes ------------- * Use system Cython as a fallback if `cythonize()` fails (:pr:`541` by Ulrik Pedersen). * Use pkg-config for building/linking against hdf5 (:pr:`505` by James Tocknell). * Disable building Cython on Travis (:pr:`513` by Andrew Collette). * Improvements to release tarball (:pr:`555`, :pr:`560` by Ghislain Antony Vaillant). * h5py now has one codebase for both Python 2 and 3; 2to3 removed from setup.py (:pr:`508` by James Tocknell). * Add python 3.4 to tox (:pr:`507` by James Tocknell). * Warn when importing from inside install dir (:pr:`558` by Andrew Collette). * Tweak installation docs with reference to Anaconda and other Python package managers (:pr:`546` by Andrew Collette). * Fix incompatible function pointer types (:pr:`526`, :pr:`524` by Peter H. Li). * Add explicit `vlen is not None` check to work around https://github.com/numpy/numpy/issues/2190 (`#538` by Will Parkin). * Group and AttributeManager classes now inherit from the appropriate ABCs (:pr:`527` by James Tocknell). * Don't strip metadata from special dtypes on read (:pr:`512` by Antony Lee). * Add 'x' mode as an alias for 'w-' (:pr:`510` by Antony Lee). * Support dynamical loading of LZF filter plugin (:pr:`506` by Peter Colberg). * Fix accessing attributes with array type (:pr:`501` by Andrew Collette). * Don't leak types in enum converter (:pr:`503` by Andrew Collette). 
* Cython warning cleanups related to "const" Acknowledgements ---------------- This release incorporates changes from, among others: * Ulrik Pedersen * James Tocknell * Will Parkin * Antony Lee * Peter H. Li * Peter Colberg * Ghislain Antony Vaillant ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/2.6.rst0000644000175000017500000000503714350630273017220 0ustar00takluyvertakluyverWhat's new in h5py 2.6 ====================== Support for HDF5 Virtual Dataset API ------------------------------------ Initial support for the HDF5 Virtual Dataset API, which was introduced in HDF5 1.10, was added to the low-level API. Ideas and input for how this should work as part of the high-level interface are welcome. This work was added in :pr:`663` by Aleksandar Jelenak. Add MPI Collective I/O Support ------------------------------ Support for using MPI Collective I/O in both low-level and high-level code has been added. See the collective_io.py example for a simple demonstration of how to use MPI Collective I/O with the high level API. This work was added in :pr:`648` by Jialin Liu. Numerous build/testing/CI improvements -------------------------------------- There were a number of improvements to the setup.py file, which should mean that `pip install h5py` should work in most places. Work was also done to clean up the current testing system, using tox is the recommended way of testing h5py across different Python versions. See :pr:`576` by Jakob Lombacher, :pr:`640` by Lawrence Mitchell, and :pr:`650`, :pr:`651` and :pr:`658` by James Tocknell. Cleanup of codebase based on pylint ----------------------------------- There was a large cleanup of pylint-identified problems by Andrew Collette (:pr:`578`, :pr:`579`). Fixes to low-level API ---------------------- Fixes to the typing of functions were added in :pr:`597` by Ulrik Kofoed Pedersen, :pr:`589` by Peter Chang, and :pr:`625` by Spaghetti Sort. A fix for variable-length arrays was added in :pr:`621` by Sam Mason. Fixes to compound types were added in :pr:`639` by @nevion and :pr:`606` by Yu Feng. Finally, a fix to type conversion was added in :pr:`614` by Andrew Collette. Documentation improvements -------------------------- * Updates to FAQ by Dan Guest (:pr:`608`) and Peter Hill (:pr:`607`). * Updates MPI-related documentation by Jens Timmerman (:pr:`604`) and Matthias König (:pr:`572`). * Fixes to documentation building by Ghislain Antony Vaillant (:pr:`562`, :pr:`561`). * Update PyTables link (:pr:`574` by Dominik Kriegner) * Add File opening modes to docstring (:pr:`563` by Antony Lee) Other changes ------------- * Add `Dataset.ndim` (:pr:`649`, :pr:`660` by @jakirkham, :pr:`661` by James Tocknell) * Fix import errors in IPython completer (:pr:`605` by Niru Maheswaranathan) * Turn off error printing in new threads (:pr:`583` by Andrew Collette) * Use item value in `KeyError` instead of error message (:pr:`642` by Matthias Geier) Acknowledgements ---------------- ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.7.1.rst0000644000175000017500000000333114045746670017365 0ustar00takluyvertakluyverWhat's new in h5py 2.7.1 ======================== 2.7.1 is the first bug-fix release in the 2.7.x series. 
Bug fixes --------- - :issue:`903` Fixed critical issue with cyclic gc which resulted in segfaults - :issue:`904` Avoid unaligned access fixing h5py on sparc64 - :issue:`883` Fixed compilation issues for some library locations - :issue:`868` Fix deadlock between phil and the import lock in py2 - :issue:`841` Improve windows handling if filenames - :issue:`874` Allow close to be called on file multiple times - :issue:`867`, :issue:`872` Warn on loaded vs complied hdf5 version issues - :issue:`902` Fix overflow computing size of dataset on windows - :issue:`912` Do not mangle capitalization of filenames in error messages - :issue:`842` Fix longdouble on ppc64le - :issue:`862`, :issue:`916` Fix compounds structs with variable-size members Fix h5py segfaulting on some Python 3 versions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Through an intersection of `Python Issue 30484`_ and :issue:`888`, it was possible for the Python Garbage Collector to activate when closing ``h5py`` objects, which due to how dictionaries were iterated over in Python could cause a segfault. :issue:`903` fixes the Garbage Collector activating whilst closing, whilst `Python Issue 30484`_ had been fixed upstream (and backported to Python 3.3 onwards). Avoid unaligned memory access in conversion functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some architectures (e.g. SPARC64) do not allow unaligned memory access, which can come up when copying packed structs. :issue:`904` (by James Clarke) uses ``memcpy`` to avoid said unaligned memory access. .. _`Python Issue 30484`: https://bugs.python.org/issue30484 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/whatsnew/2.7.rst0000644000175000017500000001065014701434742017221 0ustar00takluyvertakluyverWhat's new in h5py 2.7 ====================== Python 3.2 is no longer supported --------------------------------- ``h5py`` 2.7 drops Python 3.2 support, and testing is not longer performed on Python 3.2. The latest versions of ``pip``, ``virtualenv``, ``setuptools`` and ``numpy`` do not support Python 3.2, and dropping 3.2 allows both ``u`` and ``b`` prefixes to be used for strings. A clean up of some of the legacy code was done in :pr:`675` by Andrew Collette. Additionally, support for Python 2.6 is soon to be dropped for ``pip`` (See https://github.com/pypa/pip/issues/3955) and ``setuptools`` (See https://github.com/pypa/setuptools/issues/878), and ``numpy`` has dropped Python 2.6 also in the latest release. While ``h5py`` has not dropped Python 2.6 this release, users are strongly encouraged to move to Python 2.7 where possible. Improved testing support ------------------------ There has been a major increase in the number of configurations ``h5py`` is automatically tested in, with Windows CI support added via Appveyor (:pr:`795`, :pr:`798`, :pr:`799` and :pr:`801` by James Tocknell) and testing of minimum requirements to ensure we still satisfy them (:pr:`703` by James Tocknell). Additionally, ``tox`` was used to ensure that we don't run tests on Python versions which our dependencies have dropped or do not support (:pr:`662`, :pr:`700` and :pr:`733`). Thanks to to the Appveyor support, unicode tests were made more robust (:pr:`788`, :pr:`800` and :pr:`804` by James Tocknell). Finally, other tests were improved or added where needed (:pr:`724` by Matthew Brett, :pr:`789`, :pr:`794` and :pr:`802` by James Tocknell). 
Improved python compatibility ----------------------------- The ``ipython``/``jupyter`` completion support now has Python 3 support (:pr:`715` by Joseph Kleinhenz). ``h5py`` now supports ``pathlib`` filenames (:pr:`716` by James Tocknell). Documentation improvements -------------------------- An update to the installation instructions and some whitespace cleanup was done in :pr:`808` by Thomas A Caswell, and mistake in the quickstart was fixed by Joydeep Bhattacharjee in :pr:`708`. setup.py improvements --------------------- Support for detecting the version of HDF5 via ``pkgconfig`` was added by Axel Huebl in :pr:`734`, and support for specifying the path to MPI-supported HDF5 was added by Axel Huebl in :pr:`721`. ``h5py's`` classifiers were updated to include supported python version and interpreters in :pr:`811` by James Tocknell. Support for additional HDF5 features added ------------------------------------------ Low-level support for `HDF5 Direct Chunk Write`_ was added in :pr:`691` by Simon Gregor Ebner. Minimal support for `HDF5 File Image Operations`_ was added by Andrea Bedini in :pr:`680`. Ideas and opinions for further support for both `HDF5 Direct Chunk Write`_ and `HDF5 File Image Operations`_ are welcome. High-level support for reading and writing null dataspaces was added in :pr:`664` by James Tocknell. Improvements to type system --------------------------- Reading and writing of compound datatypes has improved, with support for different orderings and alignments (:pr:`701` by Jonah Bernhard, :pr:`702` by Caleb Morse :pr:`738` by @smutch, :pr:`765` by Nathan Goldbaum and :pr:`793` by James Tocknell). Support for reading extended precision and non-standard floating point numbers has also been added (:pr:`749`, :pr:`812` by Thomas A Caswell, :pr:`787` by James Tocknell and :pr:`781` by Martin Raspaud). Finally, compatibility improvements to ``Cython`` annotations of HDF5 types were added in :pr:`692` and :pr:`693` by Aleksandar Jelenak. Other changes ------------- * Fix deprecation of ``-`` for ``numpy`` boolean arrays (:pr:`683` by James Tocknell) * Check for duplicates in fancy index validation (:pr:`739` by Sam Toyer) * Avoid potential race condition (:pr:`754` by James Tocknell) * Fix inconsistency when slicing with ``numpy.array`` of shape ``(1,)`` (:pr:`772` by Artsiom) * Use ``size_t`` to store Python object id (:pr:`773` by Christoph Gohlke) * Avoid errors when the Python GC runs during ``nonlocal_close()`` (:pr:`776` by Antoine Pitrou) * Move from ``six.PY3`` to ``six.PY2`` (:pr:`686` by James Tocknell) .. _`HDF5 Direct Chunk Write` : https://support.hdfgroup.org/releases/hdf5/documentation/rfc/DECTRIS%20Integration%20RFC%202012-11-29.pdf .. _`HDF5 File Image Operations` : https://support.hdfgroup.org/documentation/hdf5/latest/_f_i_l_e_i_m_g_o_p_s.html Acknowledgements ---------------- ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.8.rst0000644000175000017500000000751414045746670017236 0ustar00takluyvertakluyverWhat's new in h5py 2.8 ====================== This is the first release of the h5py 2.8 series. Note that the 2.8 series is the last series of h5py to support Python 2.6 and 3.3. Users should look to moving to Python 2.7 or (preferably) Python 3.4 or higher, as earlier releases are now outside of security support. API changes ----------- * Deprecation of ``h5t.available_ftypes``. This is no longer used internally and will be removed in the future. There is no replacement public API. 
See :issue:`926` for how to add addition floating point types to h5py. * Do not sort fields in compound types (:issue:`970` by James Tocknell). This is to account for changes in numpy 1.14. * Minimum required version of Cython is now 0.23. Features -------- * Allow registration of new file drivers (:issue:`956` by Joe Jevnik). * Add option to track object creation order to ``Group.create_group`` (:issue:`968` by Chen Yufei) Bug fixes --------- * Support slices with stop < start as empty slices (:issue:`924` by Joe Jevnik) * Avoid crashing the IPython auto-completer when missing key (:issue:`885`, :issue:`958` by James Tocknell). The auto-completer currently only works on older versions of IPython, see :issue:`1022` for what's needed to support newer versions of IPython/jupyter (PRs welcome!) * Set libver default to 'earliest' (a.k.a LIBVER_EARLIEST) as previously documented (:issue:`933`, :issue:`936` by James Tocknell) * Fix conflict between fletcher32 and szip compression when using the float64 dtype (:issue:`953`, :issue:`989`, by Paul Müller). * Update floating point type handling as flagged by numpy deprecation warning (:issue:`985`, by Eric Larson) * Allow ExternalLinks to use non-ASCII hdf5 paths (:issue:`333`, :issue:`952` by James Tocknell) * Prefer custom HDF5 over pkg-config/fallback paths when building/installing (:issue:`946`, :issue:`947` by Lars Viklund) * Fix compatibility with Python 3 in document generation (:issue:`921` by Ghislain Antony Vaillant) * Fix spelling and grammar in documentation (:issue:`931` by Michael V. DePalatis, :issue:`950` by Christian Sachs, :issue:`1015` by Mikhail) * Add minor changes to documentation in order to improve clarity and warn about potential problems (:issue:`528`, :issue:`783`, :issue:`829`, :issue:`849`, :issue:`911`, :issue:`959`, by James Tocknell) * Add license field to ``setup.py`` metadata (:issue:`999` by Nils Werner). * Use system encoding for errors, not utf-8 (:issue:`1016`, :issue:`1025` by James Tocknell) * Add ``write_direct`` to the documentation (:issue:`1028` by Sajid Ali and Thomas A Caswell) Wheels HDF5 Version ------------------- * Wheels uploaded to PyPI will now be built against the HDF5 1.10 series as opposed to the 1.8 series (h5py 2.8 is built against HDF5 1.10.2). CI/Testing improvements and fixes --------------------------------- There were a number of improvements to testing and CI systems of h5py, including running the CI against multiple versions of HDF5, improving reliability and speed of the CIs, and simplifying the tox file. See :issue:`857`, :issue:`894`, :issue:`922`, :issue:`954` and :issue:`962` by Thomas A Caswell and James Tocknell for more details. Other changes ------------- * Emphasise reading from HDF5 files rather than writing to files in Quickguide (:issue:`609`, :issue:`610` by Yu Feng). Note these changes were in the 2.5 branch, but never got merged into master. The h5py 2.8 release now actually includes these changes. * Use lazy-loading of run_tests to avoid strong dependency on unittest2 (:issue:`1013`, :issue:`1014` by Thomas VINCENT) * Correctly handle with multiple float types of the same size (:issue:`926` by James Tocknell) Acknowledgements and Thanks --------------------------- The h5py developers thank Nathan Goldbaum, Matthew Brett, and Christoph Gohlke for building the wheels that appear on PyPI. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs/whatsnew/2.9.rst0000644000175000017500000000346314045746670017236 0ustar00takluyvertakluyverWhat's new in h5py 2.9 ====================== New features ------------ * A convenient high-level API for creating virtual datasets, HDF5 objects which act as a view over one or more real datasets (:issue:`1060`, :issue:`1126`). See :ref:`vds` for details. * :class:`File` can now be constructed with a Python file-like object, making it easy to create an HDF5 file in memory using :class:`io.BytesIO` (:issue:`1061`, :issue:`1105`, :issue:`1116`). See :ref:`file_fileobj` for details. * :class:`File` now accepts parameters to control the chunk cache (:issue:`1008`). See :ref:`file_cache` for details. * New options to record the order of insertion for attributes and group entries. Iterating over these collections now follows insertion order if it was recorded, or name order if not (:issue:`1098`). * A new method :meth:`Group.create_dataset_like` to create a new dataset with similar properties to an existing one (:issue:`1085`). * Datasets can now be created with storage backed by external non-HDF5 files (:issue:`1000`). * Lists or tuples of unicode strings can now be stored as HDF5 attributes (:issue:`1032`). * Inspecting the view returned by ``.keys()`` now shows the key names, for convenient interactive use (:issue:`1049`). Exposing HDF5 functions ----------------------- * ``H5LTopen_file_image`` as :func:`h5py.h5f.open_file_image` (:issue:`1075`). * External dataset storage functions ``H5Pset_external``, ``H5Pget_external`` and ``H5Pget_external_count`` as methods on :class:`h5py.h5p.PropDCID` (:issue:`1000`). Bugfixes -------- * Fix reading/writing of float128 data (:issue:`1114`). * Converting data to float16 when creating a dataset (:issue:`1115`). Support for old Python ---------------------- Support for Python 3.3 has been dropped. Support for Python 2.6 has been dropped. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0 h5py-3.13.0/docs/whatsnew/3.0.rst0000644000175000017500000002344514532312724017216 0ustar00takluyvertakluyverWhat's new in h5py 3.0 ====================== New features ------------ * The interface for storing & reading strings has changed - see :doc:`/strings`. The new rules are hopefully more consistent, but may well require some changes in coding using h5py. * Reading & writing data now releases the GIL, so another Python thread can continue while HDF5 accesses data. Where HDF5 can call back into Python, such as for data conversion, h5py re-acquires the GIL. However, HDF5 has its own global lock, so this won't speed up parallel data access using multithreading. * Numpy datetime and timedelta arrays can now be stored and read as HDF5 opaque data (:issue:`1339`), though other tools will not understand them. See :ref:`opaque_dtypes` for more information. * New :meth:`.Dataset.iter_chunks` method, to iterate over chunks within the given selection. * Compatibility with HDF5 1.12. * Methods which accept a shape tuple, e.g. to create a dataset, now also allow an integer for a 1D shape (:pr:`1340`). * Casting data to a specified type on reading (:meth:`.Dataset.astype`) can now be done without a with statement, like this:: data = dset.astype(np.int32)[:] * A new :meth:`.Dataset.fields` method lets you read only selected fields from a dataset with a compound datatype. 
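  As an illustration (``dset`` and the field name ``'temperature'`` are
  hypothetical names used only for this example), a single field of a
  compound dataset could be read like this::

      # 'temperature' is assumed to be a field in dset's compound dtype
      temps = dset.fields('temperature')[:]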
* Reading data has less overhead, as selection has been implemented in Cython. Making many small reads from the same dataset can be as much as 10 times faster, but there are many factors that can affect performance. * A new NumPy-style :attr:`.Dataset.nbytes` attribute to get the size of the dataset's data in bytes. This differs from the :attr:`~.Dataset.size` attribute, which gives the number of elements. * The ``external`` argument of :meth:`.Group.create_dataset`, which specifies any external storage for the dataset, accepts more types (:issue:`1260`), as follows: * The top-level container may be any iterable, not only a list. * The names of external files may be not only ``str`` but also ``bytes`` or ``os.PathLike`` objects. * The offsets and sizes may be NumPy integers as well as Python integers. See also the deprecation related to the ``external`` argument. * Support for setting file space strategy at file creation. Includes option to persist empty space tracking between sessions. See :class:`.File` for details. * More efficient writing when assigning a scalar to a chunked dataset, when the number of elements to write is no more than the size of one chunk. * Introduced support for the split :ref:`file driver ` (:pr:`1468`). * Allow making virtual datasets which can grow as the source data is resized - see :doc:`/vds`. * New `allow_unknown_filter` option to :meth:`.Group.create_dataset`. This should only be used if you will compress the data before writing it with the low-level :meth:`~h5py.h5d.DatasetID.write_direct_chunk` method. * The low-level chunk query API provides information about dataset chunks in an HDF5 file: :meth:`~h5py.h5d.DatasetID.get_num_chunks`, :meth:`~h5py.h5d.DatasetID.get_chunk_info` and :meth:`~h5py.h5d.DatasetID.get_chunk_info_by_coord`. * The low-level :meth:`h5py.h5f.FileID.get_vfd_handle` method now works for any file driver that supports it, not only the sec2 driver. Breaking changes & deprecations ------------------------------- * h5py now requires Python 3.6 or above; it is no longer compatible with Python 2.7. * The default mode for opening files is now 'r' (read-only). See :ref:`file_open` for other possible modes if you need to write to a file. * In previous versions, creating a dataset from a list of bytes objects would choose a fixed length string datatype to fit the biggest item. It will now use a variable length string datatype. To store fixed length strings, use a suitable dtype from :func:`h5py.string_dtype`. * Variable-length UTF-8 strings in datasets are now read as ``bytes`` objects instead of ``str`` by default, for consistency with other kinds of strings. See :doc:`/strings` for more details. * When making a virtual dataset, a dtype must be specified in :class:`.VirtualLayout`. There is no longer a default dtype, as this was surprising in some cases. * The ``external`` argument of :meth:`Group.create_dataset` no longer accepts the following forms (:issue:`1260`): * a list containing *name*, [*offset*, [*size*]]; * a list containing *name1*, *name2*, …; and * a list containing tuples such as ``(name,)`` and ``(name, offset)`` that lack the offset or size. Furthermore, each *name*–*offset*–*size* triplet now must be a tuple rather than an arbitrary iterable. See also the new feature related to the ``external`` argument. * The MPI mode no longer supports mpi4py 1.x. * The deprecated ``h5py.h5t.available_ftypes`` dictionary was removed. * The deprecated ``Dataset.value`` property was removed. Use ``ds[()]`` to read all data from any dataset. 
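  For example, where old code used ``ds.value``, the equivalent is now::

      data = ds[()]   # reads the entire dataset into memory

  Unlike ``ds[:]``, this also works for scalar (0-dimensional) datasets.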
* The deprecated functions ``new_vlen``, ``new_enum``, ``get_vlen`` and ``get_enum`` have been removed. See :doc:`/special` for the newer APIs. * Removed deprecated File.fid attribute. Use :attr:`.File.id` instead. * Remove the deprecated ``h5py.highlevel`` module. The high-level API is available directly in the ``h5py`` module. * The third argument of ``h5py._hl.selections.select()`` is now an optional high-level :class:`.Dataset` object, rather than a ``DatasetID``. This is not really a public API - it has to be imported through the private ``_hl`` module - but probably some people are using it anyway. Exposing HDF5 functions ----------------------- * H5Dget_num_chunks * H5Dget_chunk_info * H5Dget_chunk_info_by_coord * H5Oget_info1 * H5Oget_info_by_name1 * H5Oget_info_by_idx1 * H5Ovisit1 * H5Ovisit_by_name1 * H5Pset_attr_phase_change * H5Pset_fapl_split * H5Pget_file_space_strategy * H5Pset_file_space_strategy * H5Sencode1 * H5Tget_create_plist Bug fixes --------- * Fix segmentation fault when accessing vlen of strings (:issue:`1336`). * Fix the storage of non-contiguous arrays, such as numpy slices, as HDF5 vlen data (:issue:`1649`). * Fix pathologically slow reading/writing in certain conditions with integer indexing (:issue:`492`). * Fix bug when :meth:`.Group.copy` source is a high-level object and destination is a Group (:issue:`1005`). * Fix reading data for region references pointing to an empty selection. * Unregister converter functions at exit, preventing segfaults on exit in some situations with threads (:pr:`1440`). * As HDF5 1.10.6 and later support UTF-8 paths on Windows, h5py built against HDF5 1.10.6 will use UTF-8 for file names, allowing all filenames. * Fixed :meth:`h5py.h5d.DatasetID.get_storage_size` to report storage size of zero bytes without raising an exception (:issue:`1475`). * Attribute Managers (``obj.attrs``) can now work on HDF5 stored datatypes (:issue:`1476`). * Remove broken inherited ``ds.dims.values()`` and ``ds.dims.items()`` methods. The dimensions interface behaves as a sequence, not a mapping (:issue:`744`). * Fix creating attribute with :class:`.Empty` by converting its dtype to a numpy dtype object. * Fix getting :attr:`~.Dataset.maxshape` on empty/null datasets. * The :attr:`.File.swmr_mode` property is always available (:issue:`1580`). * The :attr:`.File.mode` property handles SWMR access modes in addition to plain RDONLY/RDWR modes * Importing an MPI build of h5py no longer initialises MPI immediately, which will hopefully avoid various strange behaviours. * Avoid launching a subprocess by using ``platform.machine()`` at import time. This could trigger a warning in MPI. * Removed an equality comparison with an empty array, which will cause problems with future versions of numpy. * Better error message if you try to use the mpio driver and h5py was not built with MPI support. * Improved error messages when requesting chunked storage for an empty dataset. * Data conversion functions should fail more gracefully if no memory is available. * Fix some errors for internal functions that were raising "TypeError: expected bytes, str found" instead of the correct error. * Use relative path for virtual data sources if the source dataset is in the same file as the virtual dataset. * Generic exception types used in tests' assertRaise (exception types changed in new HDF5 version) * Use ``dtype=object`` in tests with ragged arrays Building h5py ------------- * The ``setup.py configure`` command was removed. 
Configuration for the build can be specified with environment variables instead. See :ref:`custom_install` for details. * It is now possible to specify separate include and library directories for HDF5 via environment variables. See :ref:`custom_install` for more details. * The pkg-config name to use when looking up the HDF5 library can now be configured, this can assist with selecting the correct HDF5 library when using MPI. See :ref:`custom_install` for more details. * Using bare ``char*`` instead of ``array.array`` in h5d.read_direct_chunk since ``array.array`` is a private CPython C-API interface * Define ``NPY_NO_DEPRECATED_API`` to silence a warning. * Make the lzf filter build with HDF5 1.10 (:issue:`1219`). * If HDF5 is not loaded, an additional message is displayed to check HDF5 installation * Rely much more on the C-interface provided by Cython to call Python and NumPy. * Removed an old workaround which tried to run Cython in a subprocess if cythonize() didn't work. This shouldn't be necessary for any recent version of setuptools. * Migrate all Cython code base to Cython3 syntax * The only noticeable change is in exception raising from cython which use bytes * Massively use local imports everywhere as expected from Python3 * Explicitly mark several Cython functions as non-binding Development ----------- * Unregistering converter functions on exit (:pr:`1440`) should allow profiling and code coverage tools to work on Cython code. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.1.rst0000644000175000017500000000135614350630273017214 0ustar00takluyvertakluyverWhat's new in h5py 3.1 ====================== Bug fixes --------- * Fix reading numeric data which is not in the native endianness, e.g. big-endian data on a little-endian system (:issue:`1729`). * Fix using bytes as names for :meth:`.Group.create_dataset` and :meth:`.Group.create_virtual_dataset` (:issue:`1732`). * Fix writing data as a list to a dataset with a sub-array data type (:issue:`1735`). Building h5py ------------- * Allow building against system lzf library by setting ``H5PY_SYSTEM_LZF=1``. See :ref:`custom_install`. Development ----------- * If pytest is missing `pytest-mpi `_ it will now fail with a clear error. * Fix a test which was failing on big-endian systems. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696860990.0 h5py-3.13.0/docs/whatsnew/3.10.rst0000644000175000017500000000307214511005476017271 0ustar00takluyvertakluyverWhat's new in h5py 3.10 ======================= New features ------------ * h5py now has pre-built packages for Python 3.12. * Pre-built packages on Linux & Windows now bundle HDF5 version 1.14.2. Mac packages still contain HDF5 1.12.2 for now. You can still :ref:`build h5py from source ` against a wider range of HDF5 versions. * The read-only S3 file driver ('ros3') now accepts an AWS session token as part of the credentials (:pr:`2301`). Pass ``session_token`` when opening a :class:`.File` (along with the other S3 parameters). This requires HDF5 1.14.2 or later, with the ROS3 feature built. Deprecations & removals ----------------------- * Support for the HDF5 1.8 series was dropped, along with early 1.10 releases. The minimum required HDF5 version is now 1.10.4. 
Exposing HDF5 functions ----------------------- * ``H5Pget_fapl_ros3_token`` & ``H5Pset_fapl_ros3_token`` Bug fixes --------- * Various nasty bugs when using nested compound and vlen data types have been fixed (:pr:`2134`). * Fixed an ``OverflowError`` in some cases when registering a filter with :func:`h5z.register_filter`, especially on 32-bit architectures (:pr:`2318`). * Sequential slicing/indexing operations on a :class:`.VirtualSource` object (e.g. ``source[:10][::2]``) now raise an error, rather than giving incorrect results (:pr:`2280`). Building h5py ------------- * h5py now uses HDF5's 1.10 compatibility mode at compile time rather than the 1.8 compatibility mode (:pr:`2320`). This is normally transparent even if you're building h5py from source. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1712742811.0 h5py-3.13.0/docs/whatsnew/3.11.rst0000644000175000017500000000321514605460633017275 0ustar00takluyvertakluyverWhat's new in h5py 3.11 ======================= New features ------------ * h5py is now compatible with Numpy 2.0 (:pr:`2329`, :pr:`2401`). * New methods :meth:`.Group.visit_links` and :meth:`.Group.visititems_links` that include links when visiting groups (:pr:`2360`). Exposing HDF5 functions ----------------------- * Exposes remaining information from `H5O_info_t` struct such as access, modification, change, and birth time (:pr:`2358`). Also exposes field providing number of attributes attached to an object. Expands object header metadata struct `H5O_hdr_info_t`, `hdr` field of `H5O_info_t`, to provide number of chunks and flags set for object header. Lastly, adds `meta_size` field from `H5O_info_t` struct that provides two fields, `attr` which is the storage overhead of any attached attributes, and `obj` which is storage overhead required for chunk storage. The last two fields added can be useful for determining the storage overhead incurred from various data layout/chunked strategies, and for obtaining information such as that provided by `h5stat`. Bug fixes --------- * h5py's tests pass with HDF5 1.14.4, which is due to be released shortly after h5py 3.11 (:pr:`2406`). * :meth:`~.Dataset.iter_chunks()` now behaves correctly with a selection (:pr:`2381`). * HDF5 allows external datasets (with the data stored in a separate file) to be expandable along the first dimension. Such datasets can now be created through h5py by passing a ``maxshape=`` parameter (:pr:`2398`). Building h5py ------------- * h5py can now be built with Cython 3.x (:pr:`2345`). * Fixed some errors compiling with GCC 14 (:pr:`2380`, :pr:`2382`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727366465.0 h5py-3.13.0/docs/whatsnew/3.12.rst0000644000175000017500000000413514675302501017274 0ustar00takluyvertakluyverWhat's new in h5py 3.12 ======================= New features ------------ * h5py now has pre-built packages for Python 3.13. * However, h5py is not yet compatible with the new free-threading mode; we're tracking work on that in issue :issue:`2475`. Breaking changes ---------------- * Support for Python 3.8 was dropped (:pr:`2471`). Python 3.9 or newer is required to build or install h5py 3.12. * The minimum supported version of HDF5 was increased to 1.10.6 (:pr:`2486`). If you need h5py on HDF5 1.10.4 or .5, please use h5py 3.11. * The fill time for chunked storage was previously set to ``h5d.FILL_TIME_ALLOC``. 
Now this the default comes from HDF5, which uses ``h5d.FILL_TIME_IFSET`` (equivalent to ``fill_time='ifset'``) (:pr:`2463`). Please use ``fill_time='alloc'`` if the change is a problem for you. Exposing HDF5 functions ----------------------- * Expose fill time option in dataset creation property list via the ``fill_time`` parameter in :meth:`~.Group.create_dataset` (:pr:`2463`). Bug fixes --------- * Fix an error where native float16 support is not available (:pr:`2422`). * Fixed values of ``H5F_close_degree_t`` enum (:pr:`2433`). * External links are now accessed with libhdf5's default access properties (:pr:`2433`). * Fix the iteration order for the root group in a file with creation order tracked (:pr:`2410`). * Fixed some deprecation warnings from NumPy (:pr:`2416`). Building h5py ------------- * Require a newer version of mpi4py for Python 3.12 (:pr:`2418`). * The test suite is now configured to fail on unexpected warnings (:pr:`2428`). * The generated Cython wrapper code (``defs.*`` & ``_hdf5.pxd``) is now specific to the version of HDF5 it's building for. If the version of HDF5 has changed, ``api_gen.py`` should be run automatically to recreate this (:pr:`2479`, :pr:`2480`). * Various PRs modernising & cleaning up old Cython code, see the `3.12 milestone on Github `_ for details. 3.12.1 bug fix release ---------------------- * Fix bundling HDF5 DLLs in pre-built packages for Windows (:pr:`2507`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739891098.0 h5py-3.13.0/docs/whatsnew/3.13.rst0000644000175000017500000000231314755120632017273 0ustar00takluyvertakluyverWhat's new in h5py 3.13 ======================= New features ------------ * New :meth:`.File.in_memory` constructor to conveniently build an HDF5 file structure in memory (:pr:`2517`). * :class:`.Dataset` views returned by :meth:`~.Dataset.astype`, :meth:`~.Dataset.asstr` and :meth:`~.Dataset.fields` have gained the ``.dtype``, ``.ndim``, ``.shape``, and ``.size`` attributes (:pr:`2550`). * The bundled HDF5 library in the pre-built packages was updated to 1.14.6 (:pr:`2554`). * Opening an existing dataset in a file is faster since it now only loads the "dataset creation property list" when required (:pr:`2552`). Exposing HDF5 functions ----------------------- * ``H5Sselect_shape_same`` exposed as :meth:`h5py.h5s.SpaceID.select_shape_same` (:pr:`2529`). Bug fixes --------- * Fix various bugs when applying ``np.array`` or ``np.asarray`` to a :class:`.Dataset` view returned by :meth:`~.Dataset.astype`, :meth:`~.Dataset.asstr`, or :meth:`~.Dataset.fields`. Building h5py ------------- * Fixed building h5py with Numpy 2.3 (:pr:`2556`). * Bump the specified mpi4py version to fix building with MPI support on Python 3.13 (:pr:`2524`). * Fix for running ``api_gen.py`` directly (:pr:`2534`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.2.rst0000644000175000017500000000377314350630273017222 0ustar00takluyvertakluyverWhat's new in h5py 3.2 ====================== New features ------------ * Added support to use the HDF5 ROS3 driver to access HDF5 files on S3 (:pr:`1755`). This is not enabled in the pre-built packages on PyPI. To use it, ensure HDF5 is built with read-only S3 support enabled, and then :ref:`build h5py from source ` using that HDF5 library. Breaking changes & deprecations ------------------------------- * Python 3.7 is now the minimum supported version. 
It may still be possible to use this release with Python 3.6, but it isn't tested and wheels are not provided for Python 3.6. * Setting the config option ``default_file_mode`` to values other than ``'r'`` is deprecated. Pass the desired mode when opening a :class:`~.File` instead. Exposing HDF5 functions ----------------------- * ``H5Pset_fapl_ros3`` & ``H5Pget_fapl_ros3`` (where HDF5 is built with read-only S3 support). Bug fixes --------- * :exc:`OSError` exceptions raised by h5py should now have a useful ``.errno`` attribute, where HDF5 provides this information. Subclasses such as :exc:`FileNotFoundError` should also be raised where appropriate (:pr:`1815`). * Fix reading data with a datatype of variable-length arrays of fixed length strings (:issue:`1817`). * Fix :meth:`.Dataset.read_direct` and :meth:`.Dataset.write_direct` when the source and destination have different shapes (:pr:`1796`). * Fix selecting data using integer indices in :meth:`.Dataset.read_direct` and :meth:`.Dataset.write_direct` (:pr:`1818`). * Fix exception handling in :meth:`.Group.visititems` (:issue:`1740`). * Issue a warning when ``File(..., swmr=True)`` is specified with any mode other than ``'r'``, as the SWMR option is ignored in these cases (:pr:`1812`). * Fix NumPy 1.20 deprecation warnings concerning the use of None as shape, and the deprecated aliases np.float, np.int and np.bool (:pr:`1780`). 3.2.1 bug fix release --------------------- * Fix :attr:`.File.driver` when the read-only S3 driver is available (:pr:`1844`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.3.rst0000644000175000017500000000361014350630273017211 0ustar00takluyvertakluyverWhat's new in h5py 3.3 ====================== New features ------------ * Compatibility with the upcoming HDF5 1.12.1 and possibly 1.14 (:pr:`1875`). * H5T_BITFIELD types will now be cast to their ``numpy.uint`` equivalent by default (:issue:`1258`). This means that no knowledge of mixed type compound dataset schemas is required to read these types, and can simply be read as follows: .. code:: arr = dset[:] Alternatively, 8-bit bitfields can still be cast to booleans explicitly: .. code:: arr = dset.astype(numpy.bool_)[:] * Key types are validated when accessing groups, to give more helpful errors when a group is indexed like a dataset (:pr:`1856`). * A new :meth:`.Group.build_virtual_dataset` method acting as a context manager to assemble virtual datasets (:pr:`1905`). * If the source and target of a virtual dataset mapping have different numbers of points, an error should now be thrown when you make the mapping in the :class:`VirtualLayout`, rather than later when writing this into the file. This should make it easier to find the source of such errors. Deprecations ------------ * Linux wheels are now manylinux2010 rather than manylinux1 * The ``default_file_mode`` config option is deprecated, and setting it to values other than 'r' (for read-only mode) is no longer allowed. Pass the mode when creating a :class:`.File` object instead of setting a global default. Bug fixes --------- * Trying to open a file in append mode (``'a'``) should now give clearer error messages when the file exists but can't be opened (:pr:`1902`). * Protect :func:`h5py.h5f.get_obj_ids` against garbage collection invalidating HDF5 IDs while it is retrieving them (:issue:`1852`). 
* Make file closing more robust, including when closing files while the interpreter is shutting down, by using lower-level code to close HDF5 IDs of objects inside the file (:issue:`1495`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.4.rst0000644000175000017500000000117714350630273017220 0ustar00takluyvertakluyverWhat's new in h5py 3.4 ====================== New features ------------ * The pre-built wheels now bundle HDF5 1.12.1 (:pr:`1945`). * ``len()`` now works on ``dset.astype()``, ``.asstr()`` and ``.fields()`` wrappers (:pr:`1913`). Bug fixes --------- * Fix bug introduced in version 3.3 that did not allow the creation of files using the flag "a" for certain drivers (e.g. mpiio, core and stdio) (:pr:`1922`). * Dataset indexing will now use the optimized fast path, which was accidentally disabled in a previous version (:pr:`1944`). * Fix an error building with Cython 3.0 alpha 8 (``cpdef`` inside functions) (:pr:`1923`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.5.rst0000644000175000017500000000377014350630273017222 0ustar00takluyvertakluyverWhat's new in h5py 3.5 ====================== New features ------------ * Datasets are now created without timestamps by default, making it easier to create more consistent files. Pass ``track_times=True`` to :meth:`.Group.create_dataset` to add timestamps again. * Added ``locking`` :class:`.File` argument to select HDF5 file locking behavior. * Enable setting file space page size when creating new HDF5 files. A new named argument ``fs_page_size`` is added to the :class:`.File` class. * Enable HDF5 page buffering, a low-level caching feature that may improve overall I/O performance in some cases. Three new named arguments are added to the :class:`.File` class: ``page_buf_size``, ``min_meta_keep``, and ``min_raw_keep``. * Get and reset HDF5 page buffering statistics. Available as the low-level API of the :class:`.FileID` class. * The built-in ``reversed()`` function now works with various dictionary-like interfaces: :class:`.Group`, :class:`.GroupID`, :meth:`.Group.keys`, :meth:`.Group.values` and :meth:`.Group.items`. Exposing HDF5 functions ----------------------- * ``H5Pset_file_locking`` and ``H5Pget_file_locking`` (for HDF5 >= 1.12.1 or 1.10.x >= 1.10.7) * ``H5Freset_page_buffering_stats`` * ``H5Fget_page_buffering_stats`` * ``H5Pset_file_space_page_size`` * ``H5Pget_file_space_page_size`` * ``H5Pset_page_buffer_size`` * ``H5Pget_page_buffer_size`` Breaking changes & deprecations ------------------------------- * Dataset timestamps are no longer written by default for new datasets. Pass ``track_times=True`` to :meth:`.Group.create_dataset` if you need them. * The IPython completer code no longer tries to work with very old versions of IPython (before 1.0). Bug fixes --------- * Fix a memory leak when reading data. This particularly affected code making many small reads. * ``dataset == array`` now behaves the same way as ``array == dataset``: the HDF5 dataset is read and NumPy makes a boolean array. * The IPython completer code no longer imports the ``readline`` module. 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0 h5py-3.13.0/docs/whatsnew/3.6.rst0000644000175000017500000000204414350630273017214 0ustar00takluyvertakluyverWhat's new in h5py 3.6 ====================== New features ------------ * Pre-built packages are now available for Python 3.10. Deprecations ------------ * Using :meth:`.Dataset.astype` as a context manager (``with dset.astype(t):``) is deprecated. Slice the object returned by astype instead (``data = dset.astype(t)[:10]``). This works from h5py 3.0 onwards. * Getting the value of ``h5py.get_config().default_file_mode`` now issues a deprecation warning. This has been ``'r'`` by default from h5py 3.0, and cannot be changed since 3.3. Building h5py ------------- * h5py now requires the ``oldest-supported-numpy`` package at build time, instead of maintaining its own list of the oldest supported NumPy versions. The effect should be similar, but hopefully more reliable. Development ----------- * The custom ``setup.py test`` has been removed. `tox `_ should be used instead during development (see :ref:`contrib-run-tests`), and ``pytest --pyargs h5py`` can be used to test h5py after installation. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0 h5py-3.13.0/docs/whatsnew/3.7.rst0000644000175000017500000000524614532312724017224 0ustar00takluyvertakluyverWhat's new in h5py 3.7 ====================== New features ------------ * Both Apple Silicon (arm64) and Intel (x86_64) Mac wheels are now provided (:pr:`2065`). Apple Silicon wheels are not automatically tested, however, as we're not aware of any CI offerings that do this. * Provide the ability to use the Direct Virtual File Driver (VFD) from HDF5 (Linux only). If the Direct VFD driver is present at the time of compilation, users can use the Direct VFD by passing the keyword argument ``driver="direct"`` to the ``h5py.File`` constructor. To use the Direct VFD, HDF5 and h5py must have both been compiled with this enabled. Currently, pre-built h5py wheels on PyPI do not include the Direct VFD. Other packages such as the conda package on conda-forge might include it. Alternatively, you can :ref:`build h5py from source ` against an HDF5 build with the direct driver enabled. * The :class:`.File` constructor contains two new parameters ``alignment_threshold``, and ``alignment_interval`` controlling the data alignment within the HDF5 file (:pr:`2040`). * :meth:`~.Group.create_dataset` and :meth:`~.Group.require_dataset` now accept parameters ``efile_prefix`` and ``virtual_prefix`` to set a filesystem path prefix to use to find files for external datasets and for virtual dataset sources (:pr:`2092`). These only affect the current access; the prefix is not stored in the file. * h5py wheels on PyPI now bundle HDF5 version 1.12.2 (:pr:`2099`). * h5py Mac wheels on PyPI now bundle zlib version 1.2.12 (:pr:`2082`). * Pre-built wheels are now available for Python 3.10 on Linux ARM64 (:pr:`2094`). Bug fixes --------- * Fix a deadlock which was possible when the same dataset was accessed from multiple threads (:issue:`2064`). * New attributes are created directly, instead of via a temporary attribute with subsequent renaming. This fixes overwriting attributes with ``track_order=True``. * Fix for building with mpi4py on Python 3.10 (:pr:`2101`). * Fixed fancy indexing with a boolean array for a single dimension (:pr:`2079`). 
* Avoid returning uninitialised memory when reading from a chunked dataset with missing chunks and no fill value (:pr:`2076`). * Enable setting of fillvalue for datasets with variable length string dtype (:pr:`2044`). * Closing a file or calling ``get_obj_ids()`` no longer re-enables Python garbage collection if it was previously disabled (:pr:`2020`). Exposing HDF5 functions ----------------------- * ``H5Pset_efile_prefix`` and ``H5Pget_efile_prefix`` Building h5py ------------- * Fix for building h5py on Cygwin (:pr:`2038`). * More helpful error message when ``pkg-config`` is unavailable (:pr:`2053`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1728461282.0 h5py-3.13.0/docs/whatsnew/3.8.rst0000644000175000017500000000604414701434742017225 0ustar00takluyvertakluyverWhat's new in h5py 3.8 ====================== New features ------------ * h5py now has pre-built packages for Python 3.11. * h5py is compatible with HDF5 1.14 (:pr:`2187`). Pre-built packages on PyPI still include HDF5 1.12 for now. * :ref:`dataset_fancy` now accepts tuples, or any other sequence type, rather than only lists and NumPy arrays. This also includes ``range`` objects, but this will normally be less efficient than the equivalent slice. * New property :attr:`.Dataset.is_scale` for checking if the dataset is a dimension scale (:pr:`2168`). * :meth:`.Group.require_dataset` now validates ``maxshape`` for resizable datasets (:pr:`2116`). * :class:`.File` now has a ``meta_block_size`` argument and property. This influences how the space for metadata, including the initial header, is allocated. * Chunk cache can be configured per individual HDF5 dataset (:pr:`2127`). Use :meth:`.Group.create_dataset` for new datasets or :meth:`.Group.require_dataset` for already existing datasets. Any combination of the ``rdcc_nbytes``, ``rdcc_w0``, and ``rdcc_nslots`` arguments is allowed. The file defaults apply to those omitted. * HDF5 file names for ros3 driver can now also be ``s3://`` resource locations (:pr:`2140`). h5py will translate them into AWS path-style URLs for use by the driver. * When using the ros3 driver, AWS authentication will be activated only if all three driver arguments are provided. Previously AWS authentication was active if any one of the arguments was set causing an error from the HDF5 library. * :meth:`.Dataset.fields` now implements the ``__array__()`` method (:pr:`2151`). This speeds up accessing fields with functions that expect this, like ``np.asarray()``. * Low-level :meth:`h5py.h5d.DatasetID.chunk_iter` method that invokes a user-supplied callable object on every written chunk of one dataset (:pr:`2202`). It provides much better performance when iterating over a large number of chunks. Exposing HDF5 functions ----------------------- * ``H5Dchunk_iter`` as :meth:`h5py.h5d.DatasetID.chunk_iter`. * `H5Pset_meta_block_size `_ and `H5Pget_meta_block_size `_ (:pr:`2106`). Bug fixes --------- * Fixed getting the default fill value (an empty string) for variable-length string data (:pr:`2132`). * Complex float16 data could cause a ``TypeError`` when trying to coerce to the currently unavailable numpy.dtype('c4'). Now a compound type is used instead (:pr:`2157`). * h5py 3.7 contained a performance regression when using a boolean mask array to index a 1D dataset, which is now fixed (:pr:`2193`). Building h5py ------------- * Parallel HDF5 can be built with Microsoft MS-MPI (:pr:`2147`). See :ref:`build_mpi` for details. 
* Some 'incompatible function pointer type' compile time warnings were fixed (:pr:`2142`). * Fix for finding HDF5 DLL in mingw (:pr:`2105`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1687251406.0 h5py-3.13.0/docs/whatsnew/3.9.rst0000644000175000017500000000443314444264716017234 0ustar00takluyvertakluyverWhat's new in h5py 3.9 ====================== This version of h5py requires Python 3.8 or above. New features ------------ * New ``out`` argument to :meth:`~h5py.h5d.DatasetID.read_direct_chunk` to allow passing the output buffer (:pr:`2232`). * The objects from :meth:`.Dataset.asstr` and :meth:`.Dataset.astype` now implement the ``__array__()`` method (:pr:`2269`). This speeds up access for functions that support it, such as ``np.asarray()``. * Validate key types when creating groups and attributes, giving better error messages when invalid types are used (:pr:`2266`). Deprecations & removals ----------------------- * Using :meth:`.Dataset.astype` as a context manager has been removed, after being deprecated in h5py 3.6. Read data by slicing the returned object instead: ``dset.astype('f4')[:]``. Exposing HDF5 functions ----------------------- * ``H5Pget_elink_acc_flags`` & ``H5Pset_elink_acc_flags`` as :meth:`h5py.h5p.PropLAID.get_elink_acc_flags` & :meth:`h5py.h5p.PropLAID.set_elink_acc_flags`: access the external link file access traversal flags in a link access property list (:pr:`2244`). * ``H5Zregister`` as :func:`h5py.h5z.register_filter`: register an HDF5 filter (:pr:`2229`). Bug fixes --------- * ``Group.__contains__`` and ``Group.get`` now use the default link access property list systematically (:pr:`2244`). * Removed various calls to the deprecated ``numpy.product`` function (:pr:`2242` & :pr:`2273`). * Fix the IPython tab-completion integration in IPython 8.12 (:pr:2256`). * Replacing attributes with :meth:`.AttributeManager.create` now deletes the old attributes before creating the new one, rather than using a temporary name and renaming the new attribute (:pr:`2274`). This should avoid some confusing bugs affecting attributes. However, failures creating an attribute are less likely to leave an existing attribute of the same name in place. To change an attribute value without changing its shape or dtype, use :meth:`~.AttributeManager.modify` instead. Building h5py ------------- * When building with :ref:`parallel` support, the version of mpi4py used on various Python versions is increased to 3.1.1, fixing building with a newer setuptools (:pr:`2225`). * Some fixes towards compatibility with the upcoming Cython 3 (:pr:`2247`). ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739891098.0 h5py-3.13.0/docs/whatsnew/index.rst0000644000175000017500000000060114755120632020014 0ustar00takluyvertakluyver.. _whatsnew: ********************** "What's new" documents ********************** These document the changes between minor (or major) versions of h5py. .. 
toctree:: 3.13 3.12 3.11 3.10 3.9 3.8 3.7 3.6 3.5 3.4 3.3 3.2 3.1 3.0 2.10 2.9 2.8 2.7.1 2.7 2.6 2.5 2.4 2.3 2.2 2.1 2.0 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5528822 h5py-3.13.0/docs_api/0000755000175000017500000000000014755127217015155 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs_api/Makefile0000644000175000017500000001523614045746670016626 0ustar00takluyvertakluyver# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." 
json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Low-levelAPIforh5py.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Low-levelAPIforh5py.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Low-levelAPIforh5py" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Low-levelAPIforh5py" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. 
The pseudo-XML files are in $(BUILDDIR)/pseudoxml." ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5528822 h5py-3.13.0/docs_api/_static/0000755000175000017500000000000014755127217016603 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs_api/_static/.keep0000644000175000017500000000000014045746670017520 0ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0 h5py-3.13.0/docs_api/automod.py0000644000175000017500000001661314045746670017200 0ustar00takluyvertakluyver
"""
Requires patched version of autodoc.py
http://bugs.python.org/issue3422
"""

import re
from functools import partial

# === Regexp replacement machinery ============================================

role_expr = re.compile(r"(:.+:(?:`.+`)?)")

def safe_replace(istr, expr, rpl):
    """ Perform a role-safe replacement of all occurrences of "expr", using
    the callable "rpl".
    """
    outparts = []
    for part in role_expr.split(istr):
        if not role_expr.search(part):
            part = expr.sub(rpl, part)
        outparts.append(part)
    return "".join(outparts)

# === Replace literal class names =============================================

class_base = r"""
(?P<pre>
  \W+
)
(?P<name>%s)
(?P<post>
  \W+
)
"""

class_exprs = { "ObjectID": "h5py.h5.ObjectID",
                "GroupID": "h5py.h5g.GroupID",
                "FileID": "h5py.h5f.FileID",
                "DatasetID": "h5py.h5d.DatasetID",
                "TypeID": "h5py.h5t.TypeID",
                "[Dd]ataset creation property list": "h5py.h5p.PropDCID",
                "[Dd]ataset transfer property list": "h5py.h5p.PropDXID",
                "[Ff]ile creation property list": "h5py.h5p.PropFCID",
                "[Ff]ile access property list": "h5py.h5p.PropFAID",
                "[Ll]ink access property list": "h5py.h5p.PropLAID",
                "[Ll]ink creation property list": "h5py.h5p.PropLCID",
                "[Gg]roup creation property list": "h5py.h5p.PropGCID"}


class_exprs = dict(
    (re.compile(class_base % x.replace(" ",r"\s"), re.VERBOSE), y) \
    for x, y in class_exprs.items() )


def replace_class(istr):

    def rpl(target, match):
        pre, name, post = match.group('pre', 'name', 'post')
        return '%s:class:`%s <%s>`%s' % (pre, name, target, post)

    for expr, target in class_exprs.items():
        rpl2 = partial(rpl, target)
        istr = safe_replace(istr, expr, rpl2)

    return istr

# === Replace constant and category expressions ===============================

# e.g. h5f.OBJ_ALL -> :data:`h5f.OBJ_ALL <h5py.h5f.OBJ_ALL>`
# and  h5f.OBJ*    -> :ref:`h5f.OBJ* <ref.h5f.OBJ>`

const_exclude = ['HDF5', 'API', 'H5', 'H5A', 'H5D', 'H5F', 'H5P', 'H5Z', 'INT',
                 'UINT', 'STRING', 'LONG', 'PHIL', 'GIL', 'TUPLE', 'LIST',
                 'FORTRAN', 'BOOL', 'NULL', 'NOT', 'SZIP']
const_exclude = ["%s(?:\W|$)" % x for x in const_exclude]
const_exclude = "|".join(const_exclude)

const_expr = re.compile(r"""
(?P<pre>
  (?:^|\s+)                   # Must be preceded by whitespace or string start
  \W?                         # May have punctuation ( (CONST) or "CONST" )
  (?!%s)                      # Exclude known list of non-constant objects
)
(?P<module>h5[a-z]{0,2}\.)?   # Optional h5xx. prefix
(?P<name>[A-Z_][A-Z0-9_]+)    # The constant name itself
(?P<wild>\*)?                 # Wildcard indicates this is a category
(?P<post>
  \W?                         # May have trailing punctuation
  (?:$|\s+)                   # Must be followed by whitespace or end of string
)
""" % const_exclude, re.VERBOSE)

def replace_constant(istr, current_module):

    def rpl(match):
        mod, name, wild = match.group('module', 'name', 'wild')
        pre, post = match.group('pre', 'post')

        if mod is None:
            mod = current_module+'.'
            displayname = name
        else:
            displayname = mod+name

        if wild:
            target = 'ref.'+mod+name
            role = ':ref:'
            displayname += '*'
        else:
            target = 'h5py.'+mod+name
            role = ':data:'

        return '%s%s`%s <%s>`%s' % (pre, role, displayname, target, post)

    return safe_replace(istr, const_expr, rpl)


# === Replace literal references to modules ===================================

mod_expr = re.compile(r"""
(?P<pre>
  (?:^|\s+)                 # Must be preceded by whitespace
  \W?                       # Optional opening paren/quote/whatever
)
(?!h5py)                    # Don't match the package name
(?P<name>h5[a-z]{0,2})      # Names of the form h5, h5a, h5fd
(?P<post>
  \W?                       # Optional closing paren/quote/whatever
  (?:$|\s+)                 # Must be followed by whitespace
)
""", re.VERBOSE)

def replace_module(istr):

    def rpl(match):
        pre, name, post = match.group('pre', 'name', 'post')
        return '%s:mod:`%s <%s>`%s' % (pre, name, name, post)

    return safe_replace(istr, mod_expr, rpl)


# === Replace parameter lists =================================================

# e.g. "    + STRING path ('/default')" -> ":param STRING path: ('/default')"

param_expr = re.compile(r"""
^
\s*
\+
\s+
(?P<desc>
  [^\s(]
  .*
  [^\s)]
)
(?:
  \s+
  \(
  (?P<default>
    [^\s(]
    .*
    [^\s)]
  )
  \)
)?
$
""", re.VERBOSE)

def replace_param(istr):
    """ Replace parameter lists.  Not role-safe. """

    def rpl(match):
        desc, default = match.group('desc', 'default')
        default = ' (%s) ' % default if default is not None else ''
        return ':param %s:%s' % (desc, default)

    return param_expr.sub(rpl, istr)



# === Begin Sphinx extension code =============================================

def is_callable(docstring):
    return str(docstring).strip().startswith('(')

def setup(spx):

    def proc_doc(app, what, name, obj, options, lines):
        """ Process docstrings for modules and routines """

        final_lines = lines[:]

        # Remove the signature lines from the docstring
        if is_callable(obj.__doc__):
            doclines = []
            arglines = []
            final_lines = arglines
            for line in lines:
                if len(line.strip()) == 0:
                    final_lines = doclines
                final_lines.append(line)

        # Resolve class names, constants and modules
        if hasattr(obj, 'im_class'):
            mod = obj.im_class.__module__
        elif hasattr(obj, '__module__'):
            mod = obj.__module__
        else:
            mod = ".".join(name.split('.')[0:2])  # i.e. "h5py.h5z"
        mod = mod.split('.')[1]  # i.e. 'h5z'

        del lines[:]
        for line in final_lines:
            #line = replace_param(line)
            line = replace_constant(line, mod)
            line = replace_module(line)
            line = replace_class(line)
            line = line.replace('**kwds', r'\*\*kwds').replace('*args', r'\*args')
            lines.append(line)




    def proc_sig(app, what, name, obj, options, signature, return_annotation):
        """ Auto-generate function signatures from docstrings """

        def getsig(docstring):
            """ Get (sig, return) from a docstring, or None. """
            if not is_callable(docstring):
                return None

            lines = []
            for line in docstring.split("\n"):
                if len(line.strip()) == 0:
                    break
                lines.append(line)
            rawsig = " ".join(x.strip() for x in lines)

            if '=>' in rawsig:
                sig, ret = tuple(x.strip() for x in rawsig.split('=>'))
            elif '->' in rawsig:
                sig, ret = tuple(x.strip() for x in rawsig.split('->'))
            else:
                sig = rawsig
                ret = None

            if sig == "()":
                sig = "( )" # Why? Ask autodoc.

            return (sig, ret)

        sigtuple = getsig(obj.__doc__)

        return sigtuple

    spx.connect('autodoc-process-signature', proc_sig)
    spx.connect('autodoc-process-docstring', proc_doc)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/conf.py0000644000175000017500000002043714045746670016464 0ustar00takluyvertakluyver# -*- coding: utf-8 -*-
#
# Low-level API for h5py documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 31 22:42:08 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
import h5py

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc', 'automod']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = 'Low-level API for h5py'
copyright = '2014, Andrew Collette and contributors'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = "%d.%d" % h5py.version.version_tuple[0:2]

# The full version, including alpha/beta/rc tags.
release = h5py.version.version

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
html_theme = 'nature'

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents.  If None, it defaults to
# " v documentation".
html_title = "Low-level API for h5py (v{})".format(release)

# A shorter title for the navigation bar.  Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it.  The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'Low-levelAPIforh5pydoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',

# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',

# Additional stuff for the LaTeX preamble.
#'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
  ('index', 'Low-levelAPIforh5py.tex', 'Low-level API for h5py Documentation',
   'Andrew Collette and contributors', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'low-levelapiforh5py', 'Low-level API for h5py Documentation',
     ['Andrew Collette and contributors'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
  ('index', 'Low-levelAPIforh5py', 'Low-level API for h5py Documentation',
   'Andrew Collette and contributors', 'Low-levelAPIforh5py', 'One line description of project.',
   'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5.rst0000644000175000017500000000117714045746670016233 0ustar00takluyvertakluyverModule H5
=========

.. automodule:: h5py.h5

Library API
-----------

.. autofunction:: get_config
.. autofunction:: get_libversion


Configuration class
-------------------

.. autoclass:: H5PYConfig


Module constants
----------------

.. data:: INDEX_NAME

    Resolve indices in alphanumeric order

.. data:: INDEX_CRT_ORDER

    Resolve indices in order of object creation.  Not always available.

.. data:: ITER_NATIVE

    Traverse index in the fastest possible order.  No particular pattern is
    guaranteed.

.. data:: ITER_INC

    Traverse index in increasing order

.. data:: ITER_DEC

    Traverse index in decreasing order
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5a.rst0000644000175000017500000000065414045746670016373 0ustar00takluyvertakluyverModule H5A
==========

.. automodule:: h5py.h5a

Functional API
--------------

.. autofunction:: create
.. autofunction:: open
.. autofunction:: exists
.. autofunction:: rename
.. autofunction:: delete
.. autofunction:: get_num_attrs
.. autofunction:: get_info
.. autofunction:: iterate

Info objects
------------

.. autoclass:: AttrInfo
    :members:

Attribute objects
-----------------

.. autoclass:: AttrID
    :members:
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5ac.rst0000644000175000017500000000016214045746670016530 0ustar00takluyvertakluyverModule H5AC
===========

.. automodule:: h5py.h5ac


.. autoclass:: CacheConfig
    :members:
    :undoc-members:
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/docs_api/h5d.rst0000644000175000017500000000154314350630273016362 0ustar00takluyvertakluyverModule H5D
==========

.. automodule:: h5py.h5d

Functional API
--------------

.. autofunction:: open
.. autofunction:: create

Dataset Objects
---------------

.. autoclass:: DatasetID
   :members:

Module constants
----------------

Storage strategies
~~~~~~~~~~~~~~~~~~

.. data:: COMPACT
.. data:: CONTIGUOUS
.. data:: CHUNKED
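
The strategy in force for an existing dataset can be read back from its
creation property list; a minimal sketch, where ``dsid`` stands for an open
``DatasetID`` (illustrative name)::

    plist = dsid.get_create_plist()
    plist.get_layout()   # one of COMPACT, CONTIGUOUS or CHUNKED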

.. _ref.h5d.ALLOC_TIME:

Allocation times
~~~~~~~~~~~~~~~~

.. data:: ALLOC_TIME_DEFAULT
.. data:: ALLOC_TIME_LATE
.. data:: ALLOC_TIME_EARLY
.. data:: ALLOC_TIME_INCR

Allocation status
~~~~~~~~~~~~~~~~~

.. data:: SPACE_STATUS_NOT_ALLOCATED
.. data:: SPACE_STATUS_PART_ALLOCATED
.. data:: SPACE_STATUS_ALLOCATED

Fill time
~~~~~~~~~

.. data:: FILL_TIME_ALLOC
.. data:: FILL_TIME_NEVER
.. data:: FILL_TIME_IFSET

Fill values
~~~~~~~~~~~

.. data:: FILL_VALUE_UNDEFINED
.. data:: FILL_VALUE_DEFAULT
.. data:: FILL_VALUE_USER_DEFINED
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/docs_api/h5ds.rst0000644000175000017500000000010114350630273016532 0ustar00takluyvertakluyverModule H5DS
===========

.. automodule:: h5py.h5ds
    :members:
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/docs_api/h5f.rst0000644000175000017500000000242214675110407016363 0ustar00takluyvertakluyverModule H5F
==========

.. automodule:: h5py.h5f

Functional API
--------------

.. autofunction:: open
.. autofunction:: create
.. autofunction:: flush
.. autofunction:: is_hdf5
.. autofunction:: mount
.. autofunction:: unmount
.. autofunction:: open_file_image
.. autofunction:: get_name
.. autofunction:: get_obj_count
.. autofunction:: get_obj_ids


File objects
------------

.. autoclass:: FileID
    :members:

Module constants
----------------

.. _ref.h5f.ACC:

File access flags
~~~~~~~~~~~~~~~~~

.. data:: ACC_TRUNC

    Create/truncate file

.. data:: ACC_EXCL

    Create file if it doesn't exist; fail otherwise

.. data:: ACC_RDWR

    Open in read/write mode

.. data:: ACC_RDONLY

    Open in read-only mode
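
As a rough sketch of how these flags are used with the functional API (the
file name is illustrative)::

    from h5py import h5f

    fid = h5f.create(b"demo.h5", h5f.ACC_EXCL)   # fail if demo.h5 already exists
    fid.close()
    fid = h5f.open(b"demo.h5", h5f.ACC_RDONLY)   # reopen read-only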


.. _ref.h5f.CLOSE:

File close strength
~~~~~~~~~~~~~~~~~~~

.. data:: CLOSE_DEFAULT
.. data:: CLOSE_WEAK
.. data:: CLOSE_SEMI
.. data:: CLOSE_STRONG

.. _ref.h5f.SCOPE:

File scope
~~~~~~~~~~

.. data:: SCOPE_LOCAL
.. data:: SCOPE_GLOBAL

.. _ref.h5f.OBJ:

Object types
~~~~~~~~~~~~

.. data:: OBJ_FILE
.. data:: OBJ_DATASET
.. data:: OBJ_GROUP
.. data:: OBJ_DATATYPE
.. data:: OBJ_ATTR
.. data:: OBJ_ALL
.. data:: OBJ_LOCAL

Library version bounding
~~~~~~~~~~~~~~~~~~~~~~~~

.. data:: LIBVER_EARLIEST
.. data:: LIBVER_V18
.. data:: LIBVER_V110
.. data:: LIBVER_LATEST
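
These constants are applied to a file access property list before the file is
created or opened; a minimal sketch (the file name is illustrative)::

    from h5py import h5f, h5p

    fapl = h5p.create(h5p.FILE_ACCESS)
    fapl.set_libver_bounds(h5f.LIBVER_V18, h5f.LIBVER_LATEST)
    fid = h5f.create(b"bounded.h5", h5f.ACC_TRUNC, fapl=fapl)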
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/docs_api/h5fd.rst0000644000175000017500000000226414350630273016531 0ustar00takluyvertakluyverModule H5FD
===========

.. automodule:: h5py.h5fd

Module constants
----------------

Memory usage types for MULTI file driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. data:: MEM_DEFAULT
.. data:: MEM_SUPER
.. data:: MEM_BTREE
.. data:: MEM_DRAW
.. data:: MEM_GHEAP
.. data:: MEM_LHEAP
.. data:: MEM_OHDR
.. data:: MEM_NTYPES


Data transfer modes for MPIO driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. data:: MPIO_INDEPENDENT
.. data:: MPIO_COLLECTIVE

File drivers types
~~~~~~~~~~~~~~~~~~

.. data:: CORE
.. data:: FAMILY
.. data:: LOG
.. data:: MPIO
.. data:: MULTI
.. data:: SEC2
.. data:: STDIO
.. data:: WINDOWS

Logging driver settings
~~~~~~~~~~~~~~~~~~~~~~~

.. note:: Not all logging flags are currently implemented by HDF5.

.. data:: LOG_LOC_READ
.. data:: LOG_LOC_WRITE
.. data:: LOG_LOC_SEEK
.. data:: LOG_LOC_IO

.. data:: LOG_FILE_READ
.. data:: LOG_FILE_WRITE
.. data:: LOG_FILE_IO

.. data:: LOG_FLAVOR

.. data:: LOG_NUM_READ
.. data:: LOG_NUM_WRITE
.. data:: LOG_NUM_SEEK
.. data:: LOG_NUM_IO

.. data:: LOG_TIME_OPEN
.. data:: LOG_TIME_READ
.. data:: LOG_TIME_WRITE
.. data:: LOG_TIME_SEEK
.. data:: LOG_TIME_CLOSE
.. data:: LOG_TIME_IO

.. data:: LOG_ALLOC
.. data:: LOG_ALL
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5g.rst0000644000175000017500000000115514045746670016376 0ustar00takluyvertakluyverModule H5G
==========

.. automodule:: h5py.h5g

Functional API
--------------

.. autofunction:: open
.. autofunction:: create
.. autofunction:: iterate
.. autofunction:: get_objinfo

Info objects
------------

.. autoclass:: GroupStat
    :members:

Group objects
-------------

.. autoclass:: GroupID
    :members:

Module constants
----------------

Object type codes
~~~~~~~~~~~~~~~~~

.. data:: LINK

    Symbolic link

.. data:: GROUP

    HDF5 group

.. data:: DATASET

    HDF5 dataset

.. data:: TYPE

    Named (file-resident) datatype

Link type codes
~~~~~~~~~~~~~~~

.. data:: LINK_HARD
.. data:: LINK_SOFT
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5i.rst0000644000175000017500000000053714045746670016403 0ustar00takluyvertakluyverModule H5I
==========

Functional API
--------------

.. automodule:: h5py.h5i
    :members:

Module constants
----------------

Identifier classes
~~~~~~~~~~~~~~~~~~

.. data:: BADID
.. data:: FILE
.. data:: GROUP
.. data:: DATASPACE
.. data:: DATASET
.. data:: ATTR
.. data:: REFERENCE
.. data:: GENPROP_CLS
.. data:: GENPROP_LST
.. data:: DATATYPE
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5l.rst0000644000175000017500000000033614045746670016403 0ustar00takluyvertakluyverModule H5L
==========

Linkproxy objects
-----------------

.. automodule:: h5py.h5l
    :members:

Module constants
----------------

Link types
~~~~~~~~~~

.. data:: TYPE_HARD
.. data:: TYPE_SOFT
.. data:: TYPE_EXTERNAL
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5o.rst0000644000175000017500000000162614045746670016411 0ustar00takluyvertakluyverModule H5O
==========

.. automodule:: h5py.h5o

Functional API
--------------

.. autofunction:: open
.. autofunction:: link
.. autofunction:: copy
.. autofunction:: set_comment
.. autofunction:: get_comment
.. autofunction:: visit
.. autofunction:: get_info

Info classes
------------

.. autoclass:: ObjInfo
    :members:

Module constants
----------------

Object types
~~~~~~~~~~~~

.. data:: TYPE_GROUP
.. data:: TYPE_DATASET
.. data:: TYPE_NAMED_DATATYPE

.. _ref.h5o.COPY:

Copy flags
~~~~~~~~~~

.. data:: COPY_SHALLOW_HIERARCHY_FLAG

    Copy only immediate members of a group.

.. data:: COPY_EXPAND_SOFT_LINK_FLAG

    Expand soft links into new objects.

.. data:: COPY_EXPAND_EXT_LINK_FLAG

    Expand external links into new objects.

.. data:: COPY_EXPAND_REFERENCE_FLAG

    Copy objects that are pointed to by references.

.. data:: COPY_WITHOUT_ATTR_FLAG

    Copy object without copying attributes.
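
These flags are combined into an object-copy property list which is then
passed to :func:`copy <h5py.h5o.copy>`; a minimal sketch, where ``src`` and
``dst`` stand for open file or group identifiers (illustrative names)::

    from h5py import h5o, h5p

    ocpypl = h5p.create(h5p.OBJECT_COPY)
    ocpypl.set_copy_object(h5o.COPY_SHALLOW_HIERARCHY_FLAG)
    h5o.copy(src, b"group1", dst, b"group1_copy", copypl=ocpypl)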
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/docs_api/h5p.rst0000644000175000017500000000336114350630273016376 0ustar00takluyvertakluyverModule H5P
==========

.. automodule:: h5py.h5p

Functional API
--------------

.. autofunction:: create

Base classes
------------

.. autoclass:: PropID
    :show-inheritance:
    :members:

.. autoclass:: PropClassID
    :show-inheritance:
    :members:

.. autoclass:: PropInstanceID
    :show-inheritance:
    :members:

.. autoclass:: PropCreateID
    :show-inheritance:
    :members:

.. autoclass:: PropOCID
    :show-inheritance:
    :members:

.. autoclass:: PropCopyID
    :show-inheritance:
    :members:

File creation
-------------

.. autoclass:: PropFCID
    :show-inheritance:
    :members:

File access
-----------

.. autoclass:: PropFAID
    :show-inheritance:
    :members:

Dataset creation
----------------

.. autoclass:: PropDCID
    :show-inheritance:
    :members:

Dataset access
--------------

.. autoclass:: PropDAID
    :show-inheritance:
    :members:

Dataset transfer
----------------

.. autoclass:: PropDXID
    :show-inheritance:
    :members:

Link creation
-------------

.. autoclass:: PropLCID
    :show-inheritance:
    :members:


Link access
-----------

.. autoclass:: PropLAID
    :show-inheritance:
    :members:


Group creation
--------------

.. autoclass:: PropGCID
    :show-inheritance:
    :members:

Datatype creation
-----------------

.. autoclass:: PropTCID
    :show-inheritance:
    :members:

Module constants
----------------

Predefined classes
~~~~~~~~~~~~~~~~~~

.. data:: DEFAULT
.. data:: FILE_CREATE
.. data:: FILE_ACCESS
.. data:: DATASET_CREATE
.. data:: DATASET_XFER
.. data:: DATASET_ACCESS
.. data:: OBJECT_COPY
.. data:: LINK_CREATE
.. data:: LINK_ACCESS
.. data:: GROUP_CREATE
.. data:: OBJECT_CREATE
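
New property list instances are created from these class constants; a minimal
sketch of configuring chunked storage on a dataset creation property list::

    from h5py import h5d, h5p

    dcpl = h5p.create(h5p.DATASET_CREATE)   # PropDCID instance
    dcpl.set_chunk((64, 64))
    dcpl.get_layout() == h5d.CHUNKED        # True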

Order tracking flags
~~~~~~~~~~~~~~~~~~~~

.. data:: CRT_ORDER_TRACKED
.. data:: CRT_ORDER_INDEXED
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5pl.rst0000644000175000017500000000010114045746670016551 0ustar00takluyvertakluyverModule H5PL
===========

.. automodule:: h5py.h5pl
    :members:
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5r.rst0000644000175000017500000000101114045746670016400 0ustar00takluyvertakluyverModule H5R
==========

.. automodule:: h5py.h5r

Functional API
--------------

.. autofunction:: create
.. autofunction:: dereference
.. autofunction:: get_region
.. autofunction:: get_obj_type
.. autofunction:: get_name

Reference classes
-----------------

.. autoclass:: Reference
    :members:

.. autoclass:: RegionReference
    :show-inheritance:
    :members:

API constants
-------------

.. data:: OBJECT

    Typecode for object references

.. data:: DATASET_REGION

    Typecode for dataset region references
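
A minimal sketch of creating and dereferencing an object reference, where
``fid`` stands for an open file identifier and ``"dset"`` for an existing
dataset (both names are illustrative)::

    from h5py import h5r

    ref = h5r.create(fid, b"dset", h5r.OBJECT)
    obj = h5r.dereference(ref, fid)    # identifier of the referenced object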
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5s.rst0000644000175000017500000000164514045746670016416 0ustar00takluyvertakluyverModule H5S
==========

.. automodule:: h5py.h5s

Functional API
--------------

.. autofunction:: create
.. autofunction:: create_simple
.. autofunction:: decode

Dataspace objects
-----------------

.. autoclass:: SpaceID
    :show-inheritance:
    :members:

Module constants
----------------

.. data:: ALL

    Accepted in place of an actual dataspace; means "every point"

.. data:: UNLIMITED

    Indicates an unlimited maximum dimension
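
A minimal sketch of how UNLIMITED enters a simple dataspace::

    from h5py import h5s

    space = h5s.create_simple((100,), (h5s.UNLIMITED,))
    space.get_simple_extent_dims()              # current dims: (100,)
    space.get_simple_extent_dims(maxdims=True)  # maximum dims; unlimited dimension
                                                # reported as h5s.UNLIMITED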

Dataspace class codes
~~~~~~~~~~~~~~~~~~~~~

.. data:: NO_CLASS
.. data:: SCALAR
.. data:: SIMPLE

Selection codes
~~~~~~~~~~~~~~~

.. data:: SELECT_NOOP
.. data:: SELECT_SET
.. data:: SELECT_OR
.. data:: SELECT_AND
.. data:: SELECT_XOR
.. data:: SELECT_NOTB
.. data:: SELECT_NOTA
.. data:: SELECT_APPEND
.. data:: SELECT_PREPEND
.. data:: SELECT_INVALID

Existing selection type
~~~~~~~~~~~~~~~~~~~~~~~

.. data:: SEL_NONE
.. data:: SEL_POINTS
.. data:: SEL_HYPERSLABS
.. data:: SEL_ALL
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5t.rst0000644000175000017500000000732414045746670016417 0ustar00takluyvertakluyverModule H5T
==========

.. automodule:: h5py.h5t

Functions specific to h5py
--------------------------

.. autofunction:: py_create
.. autofunction:: string_dtype
.. autofunction:: check_string_dtype
.. autofunction:: vlen_dtype
.. autofunction:: check_vlen_dtype
.. autofunction:: enum_dtype
.. autofunction:: check_enum_dtype
.. autofunction:: special_dtype
.. autofunction:: check_dtype

Functional API
--------------
.. autofunction:: create
.. autofunction:: open
.. autofunction:: array_create
.. autofunction:: enum_create
.. autofunction:: vlen_create
.. autofunction:: decode
.. autofunction:: convert
.. autofunction:: find

Type classes
------------

.. autoclass:: TypeID
    :members:

Atomic classes
~~~~~~~~~~~~~~

Atomic types are integers and floats.  Much of the functionality for each is
inherited from the base class :class:`TypeAtomicID`.
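
A minimal sketch of obtaining and inspecting an atomic type from a NumPy
dtype::

    import numpy as np
    from h5py import h5t

    tid = h5t.py_create(np.dtype('<f4'))   # TypeFloatID
    tid.get_size()                         # 4 (bytes)
    tid.get_order()                        # h5t.ORDER_LE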

.. autoclass:: TypeAtomicID
    :show-inheritance:
    :members:

.. autoclass:: TypeIntegerID
    :show-inheritance:
    :members:

.. autoclass:: TypeFloatID
    :show-inheritance:
    :members:

Strings
~~~~~~~

.. autoclass:: TypeStringID
    :show-inheritance:
    :members:

Compound Types
~~~~~~~~~~~~~~

Traditional compound type (like NumPy record type) and enumerated types share
a base class, :class:`TypeCompositeID`.
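
A minimal sketch of building an enumerated type from the high-level helper and
inspecting it through the low-level interface::

    import h5py
    from h5py import h5t

    dt = h5py.enum_dtype({"RED": 0, "GREEN": 1}, basetype="i4")
    tid = h5t.py_create(dt, logical=True)   # TypeEnumID
    tid.get_nmembers()                      # 2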

.. autoclass:: TypeCompositeID
    :show-inheritance:
    :members:

.. autoclass:: TypeCompoundID
    :show-inheritance:
    :members:

.. autoclass:: TypeEnumID
    :show-inheritance:
    :members:

Other types
~~~~~~~~~~~

.. autoclass:: TypeArrayID
    :show-inheritance:
    :members:

.. autoclass:: TypeOpaqueID
    :show-inheritance:
    :members:

.. autoclass:: TypeVlenID
    :show-inheritance:
    :members:

.. autoclass:: TypeBitfieldID
    :show-inheritance:
    :members:

.. autoclass:: TypeReferenceID
    :show-inheritance:
    :members:

Predefined Datatypes
--------------------

These locked types are pre-allocated by the library.

Floating-point
~~~~~~~~~~~~~~

.. data:: IEEE_F32LE
.. data:: IEEE_F32BE
.. data:: IEEE_F64LE
.. data:: IEEE_F64BE

Integer types
~~~~~~~~~~~~~

.. data:: STD_I8LE
.. data:: STD_I16LE
.. data:: STD_I32LE
.. data:: STD_I64LE

.. data:: STD_I8BE
.. data:: STD_I16BE
.. data:: STD_I32BE
.. data:: STD_I64BE

.. data:: STD_U8LE
.. data:: STD_U16LE
.. data:: STD_U32LE
.. data:: STD_U64LE

.. data:: STD_U8BE
.. data:: STD_U16BE
.. data:: STD_U32BE
.. data:: STD_U64BE

.. data:: NATIVE_INT8
.. data:: NATIVE_UINT8
.. data:: NATIVE_INT16
.. data:: NATIVE_UINT16
.. data:: NATIVE_INT32
.. data:: NATIVE_UINT32
.. data:: NATIVE_INT64
.. data:: NATIVE_UINT64
.. data:: NATIVE_FLOAT
.. data:: NATIVE_DOUBLE

Reference types
~~~~~~~~~~~~~~~

.. data:: STD_REF_OBJ
.. data:: STD_REF_DSETREG

String types
~~~~~~~~~~~~

.. data:: C_S1

    Null-terminated fixed-length string

.. data:: FORTRAN_S1

    Zero-padded fixed-length string

.. data:: VARIABLE

    Variable-length string

Python object type
~~~~~~~~~~~~~~~~~~

.. data:: PYTHON_OBJECT

Module constants
----------------

Datatype class codes
~~~~~~~~~~~~~~~~~~~~

.. data:: NO_CLASS
.. data:: INTEGER
.. data:: FLOAT
.. data:: TIME
.. data:: STRING
.. data:: BITFIELD
.. data:: OPAQUE
.. data:: COMPOUND
.. data:: REFERENCE
.. data:: ENUM
.. data:: VLEN
.. data:: ARRAY

API Constants
~~~~~~~~~~~~~

.. data:: SGN_NONE
.. data:: SGN_2

.. data:: ORDER_LE
.. data:: ORDER_BE
.. data:: ORDER_VAX
.. data:: ORDER_NONE
.. data:: ORDER_NATIVE

.. data:: DIR_DEFAULT
.. data:: DIR_ASCEND
.. data:: DIR_DESCEND

.. data:: STR_NULLTERM
.. data:: STR_NULLPAD
.. data:: STR_SPACEPAD

.. data:: NORM_IMPLIED
.. data:: NORM_MSBSET
.. data:: NORM_NONE

.. data:: CSET_ASCII
.. data:: CSET_UTF8

.. data:: PAD_ZERO
.. data:: PAD_ONE
.. data:: PAD_BACKGROUND

.. data:: BKG_NO
.. data:: BKG_TEMP
.. data:: BKG_YES
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/h5z.rst0000644000175000017500000000211314045746670016414 0ustar00takluyvertakluyverModule H5Z
==========

.. automodule:: h5py.h5z
    :members:

Module constants
----------------

.. _ref.h5z.FILTER:

Predefined filters
~~~~~~~~~~~~~~~~~~

.. data:: FILTER_NONE
.. data:: FILTER_ALL
.. data:: FILTER_DEFLATE
.. data:: FILTER_SHUFFLE
.. data:: FILTER_FLETCHER32
.. data:: FILTER_SZIP
.. data:: FILTER_SCALEOFFSET
.. data:: FILTER_LZF
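
Whether a particular filter is present in the running HDF5 library can be
checked before relying on it; a minimal sketch::

    from h5py import h5z

    h5z.filter_avail(h5z.FILTER_DEFLATE)   # True if the zlib filter is present
    h5z.filter_avail(h5z.FILTER_SZIP)      # depends on how HDF5 was built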

.. _ref.h5z.FLAG:

Filter flags
~~~~~~~~~~~~

.. data:: FLAG_DEFMASK
.. data:: FLAG_MANDATORY
.. data:: FLAG_OPTIONAL
.. data:: FLAG_INVMASK
.. data:: FLAG_REVERSE
.. data:: FLAG_SKIP_EDC

.. _ref.h5z.SZIP:

SZIP-specific options
~~~~~~~~~~~~~~~~~~~~~

.. data:: SZIP_ALLOW_K13_OPTION_MASK
.. data:: SZIP_CHIP_OPTION_MASK
.. data:: SZIP_EC_OPTION_MASK
.. data:: SZIP_NN_OPTION_MASK
.. data:: SZIP_MAX_PIXELS_PER_BLOCK

Scale/offset-specific options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. data:: SO_FLOAT_DSCALE
.. data:: SO_FLOAT_ESCALE
.. data:: SO_INT
.. data:: SO_INT_MINBITS_DEFAULT

Other flags
~~~~~~~~~~~

.. data:: FILTER_CONFIG_ENCODE_ENABLED
.. data:: FILTER_CONFIG_DECODE_ENABLED

.. data:: DISABLE_EDC
.. data:: ENABLE_EDC
.. data:: NO_EDC
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/docs_api/index.rst0000644000175000017500000000122714350630273017010 0ustar00takluyvertakluyver
h5py Low-Level API Reference
============================


This documentation contains the auto-generated API information for the
h5py |release| "low-level" interface, a collection of Cython modules
which form the interface to the HDF5 C library.  It's hosted separately from
our main documentation as it requires autodoc.

.. note::

   The main docs for h5py are at https://docs.h5py.org.
   The docs here are specifically for the h5py low-level interface.

Contents
--------

.. toctree::
    :maxdepth: 2

    objects
    h5
    h5a
    h5ac
    h5d
    h5ds
    h5f
    h5fd
    h5g
    h5i
    h5l
    h5o
    h5p
    h5pl
    h5r
    h5s
    h5t
    h5z
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/docs_api/objects.rst0000644000175000017500000000014014045746670017335 0ustar00takluyvertakluyverBase object classes
===================

.. automodule:: h5py._objects

.. autoclass:: ObjectID
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.555882
h5py-3.13.0/examples/0000755000175000017500000000000014755127217015212 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/bytesio.py0000644000175000017500000000057414350630273017240 0ustar00takluyvertakluyver"""Create an HDF5 file in memory and retrieve the raw bytes

This could be used, for instance, in a server producing small HDF5
files on demand.
"""
import io
import h5py

bio = io.BytesIO()
with h5py.File(bio, 'w') as f:
    f['dataset'] = range(10)

data = bio.getvalue() # data is a regular Python bytes object.
print("Total size:", len(data))
print("First bytes:", data[:10])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/collective_io.py0000644000175000017500000000344714045746670020416 0ustar00takluyvertakluyver# This file is to test collective io in h5py

"""
Author:  Jialin Liu, jalnliu@lbl.gov
Date:    Nov 17, 2015
Prerequisites: mpi4py and numpy
Source code: this 'collective io' branch has already been submitted to h5py master; meanwhile, it can be downloaded from https://github.com/valiantljk/h5py.git
Note: h5py must be built with parallel HDF5
"""

from mpi4py import MPI
import numpy as np
import h5py
import sys

#"run as "mpirun -np 64 python-mpi collective_io.py 1 file.h5"
#(1 is for collective write, other numbers for non-collective write)"

colw=1 #default is collective write
filename="parallel_test.hdf5"
if len(sys.argv)>2:
    colw = int(sys.argv[1])
    filename=str(sys.argv[2])
comm =MPI.COMM_WORLD
nproc = comm.Get_size()
f = h5py.File(filename, 'w', driver='mpio', comm=MPI.COMM_WORLD)
rank = comm.Get_rank()
length_x = 6400*1024
length_y = 1024
dset = f.create_dataset('test', (length_x,length_y), dtype='f8')
#data type should be consistent in numpy and h5py, e.g., 64 bits
#otherwise, hdf5 layer will fall back to independent io.
f.atomic = False
length_rank = length_x // nproc  # integer division so the slice bounds stay ints
length_last_rank = length_x - length_rank*(nproc-1)
comm.Barrier()
timestart=MPI.Wtime()
start=rank*length_rank
end=start+length_rank
if rank==nproc-1: #last rank
    end=start+length_last_rank
temp=np.random.random((end-start,length_y))
if colw==1:
    with dset.collective:
        dset[start:end,:] = temp
else:
    dset[start:end,:] = temp
comm.Barrier()
timeend=MPI.Wtime()
if rank==0:
    if colw==1:
        print("collective write time %f" %(timeend-timestart))
    else:
        print("independent write time %f" %(timeend-timestart))
    print("data size x: %d y: %d" %(length_x, length_y))
    print("file size ~%d GB" % (length_x*length_y/1024.0/1024.0/1024.0*8.0))
    print("number of processes %d" %nproc)
f.close()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/dataset_concatenation.py0000644000175000017500000000262714350630273022115 0ustar00takluyvertakluyver'''Concatenate multiple files into a single virtual dataset
'''
import h5py
import numpy as np
import sys
import os


def concatenate(file_names_to_concatenate):
    entry_key = 'data'  # where the data is inside of the source files.
    with h5py.File(file_names_to_concatenate[0], 'r') as f:
        sh = f[entry_key].shape  # get the shape of the first file's dataset
    layout = h5py.VirtualLayout(shape=(len(file_names_to_concatenate),) + sh,
                                dtype=np.float64)
    with h5py.File("VDS.h5", 'w', libver='latest') as f:
        for i, filename in enumerate(file_names_to_concatenate):
            vsource = h5py.VirtualSource(filename, entry_key, shape=sh)
            layout[i, :, :, :] = vsource

        f.create_virtual_dataset(entry_key, layout, fillvalue=0)


def create_random_file(folder, index):
    """create one random file"""
    name = os.path.join(folder, 'myfile_' + str(index))
    with h5py.File(name=name, mode='w') as f:
        d = f.create_dataset('data', (5, 10, 20), 'i4')
        data = np.random.randint(low=0, high=100, size=(5*10*20))
        data = data.reshape(5, 10, 20)
        d[:] = data
    return name


def main(argv):
    files = argv[1:]
    if len(files) == 0:
        import tempfile
        tmp_dir = tempfile.mkdtemp()
        for i_file in range(5):
            files.append(create_random_file(tmp_dir, index=i_file))
    concatenate(files)


if __name__ == '__main__':
    main(sys.argv)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/dual_pco_edge.py0000644000175000017500000000160714045746670020344 0ustar00takluyvertakluyver'''Virtual datasets: The 'Dual PCO Edge' use case

https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''

import h5py

with h5py.File('raw_file_1.h5', 'r') as f:
    in_sh = f['data'].shape # get the input shape
    dtype = f['data'].dtype # get the datatype

gap = 10

# Sources represent the input datasets
vsource1 = h5py.VirtualSource('raw_file_1.h5', 'data', shape=in_sh)
vsource2 = h5py.VirtualSource('raw_file_2.h5', 'data', shape=in_sh)
# The layout is where we lay out the virtual dataset: the two sensor halves
# sit side by side along the second axis, separated by `gap` pixels
layout = h5py.VirtualLayout((in_sh[0], 2 * in_sh[1] + gap, in_sh[2]),
                            dtype=dtype)
layout[:, 0:in_sh[1], :] = vsource1
layout[:, (in_sh[1] + gap):(2 * in_sh[1] + gap), :] = vsource2

# Create an output file
with h5py.File('outfile.h5', 'w', libver='latest') as f:
    f.create_virtual_dataset('data', layout, fillvalue=0x1)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/eiger_use_case.py0000644000175000017500000000171314350630273020520 0ustar00takluyvertakluyver'''Virtual datasets: The 'Eiger' use case

https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''

import h5py
import numpy as np

# create files: 7 images each, (120,130) pixels
files = ['1.h5', '2.h5', '3.h5', '4.h5', '5.h5']

for filename in files:
    data = np.random.randint(0, 1<<16, (7,120,130))
    with h5py.File(filename, "w") as h:
        h["/data"] = data

entry_key = 'data' # where the data is inside of the source files.
with h5py.File(files[0], 'r') as f:
    sh = f[entry_key].shape # get the first one's shape.

layout = h5py.VirtualLayout(shape=(len(files) * sh[0], ) + sh[1:], dtype=np.float64)
M_start = 0
for i, filename in enumerate(files):
    M_end = M_start + sh[0]
    vsource = h5py.VirtualSource(filename, entry_key, shape=sh)
    layout[M_start:M_end:1, :, :] = vsource
    M_start = M_end

with h5py.File("eiger_vds.h5", 'w', libver='latest') as f:
    f.create_virtual_dataset('data', layout, fillvalue=0)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/excalibur_detector_modules.py0000644000175000017500000000243414045746670023170 0ustar00takluyvertakluyver'''Virtual datasets: The 'Excalibur' use case

https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''

import h5py

raw_files = ["stripe_%d.h5" % stripe for stripe in range(1,7)]# get these names
in_key = 'data' # where is the data at the input?
outfile = 'full_detector.h5'
out_key = 'full_frame'

with h5py.File(raw_files[0], 'r') as f:
    in_sh = f[in_key].shape # get the input shape
    dtype = f[in_key].dtype # get the datatype

# now generate the output shape
vertical_gap = 10 # pixels spacing in the vertical
nfiles = len(raw_files)
nframes = in_sh[0]
width = in_sh[2]
height = (in_sh[1]*nfiles) + (vertical_gap*(nfiles-1))
out_sh = (nframes, height, width)

# Virtual target is a representation of the output dataset
layout = h5py.VirtualLayout(shape=out_sh, dtype=dtype)
offset = 0 # initial offset
for i in range(nfiles):
    print("frame_number is: %s" % str(i)) # for feedback
    vsource = h5py.VirtualSource(raw_files[i], in_key, shape=in_sh) #a representation of the input dataset
    layout[:, offset:(offset + in_sh[1]), :] = vsource
    offset += in_sh[1]+vertical_gap # increment the offset

# Create an output file.
with h5py.File(outfile, 'w', libver='latest') as f:
    f.create_virtual_dataset(out_key, layout, fillvalue=0x1)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/multiblockslice_interleave.py0000644000175000017500000000250014045746670023166 0ustar00takluyvertakluyver"""
An example of using MultiBlockSlice to interleave frames in a virtual dataset.
"""
import h5py


# These files were written round-robin in blocks of 1000 frames from a single source
# 1.h5 has 0-999, 4000-4999, ...; 2.h4 has 1000-1999, 5000-5999, ...; etc
# The frames from each block are contiguous within each file
# e.g. 1.h5 = [..., 998, 999, 4000, 4001, ... ]
files = ["1.h5", "2.h5", "3.h5", "4.h5"]
dataset_name = "data"
dtype = "float"
source_shape = (25000, 256, 512)
target_shape = (100000, 256, 512)
block_size = 1000

v_layout = h5py.VirtualLayout(shape=target_shape, dtype=dtype)

for file_idx, file_path in enumerate(files):
    v_source = h5py.VirtualSource(
        file_path, name=dataset_name, shape=source_shape, dtype=dtype
    )
    dataset_frames = v_source.shape[0]

    # A single raw file maps to every len(files)th block of frames in the VDS
    start = file_idx * block_size  # 0, 1000, 2000, 3000
    stride = len(files) * block_size  # 4000
    count = dataset_frames // block_size  # 25
    block = block_size  # 1000

    # MultiBlockSlice for frame dimension and full extent for height and width
    v_layout[h5py.MultiBlockSlice(start, stride, count, block), :, :] = v_source

with h5py.File("interleave_vds.h5", "w", libver="latest") as f:
    f.create_virtual_dataset(dataset_name, v_layout, fillvalue=0)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/multiprocessing_example.py0000644000175000017500000000672314045746670022540 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Demonstrates how to use h5py with the multiprocessing module.

    This module implements a simple multi-process program to generate
    Mandelbrot set images.  It uses a process pool to do the computations,
    and a single process to save the results to file.

    Importantly, only one process actually reads/writes the HDF5 file.
    Remember that when a process is fork()ed, the child inherits the HDF5
    state from its parent, which can be dangerous if you already have a file
    open.  Trying to interact with the same file on disk from multiple
    processes results in undefined behavior.

    If matplotlib is available, the program will read from the HDF5 file and
    display an image of the fractal in a window.  To re-run the calculation,
    delete the file "mandelbrot.hdf5".

"""

import numpy as np
import multiprocessing as mp
import h5py

# === Parameters for Mandelbrot calculation ===================================

NX = 512
NY = 512
ESCAPE = 1000

XSTART = -0.16070135 - 5e-8
YSTART =  1.0375665 -5e-8
XEXTENT = 1.0E-7
YEXTENT = 1.0E-7

xincr = XEXTENT*1.0/NX
yincr = YEXTENT*1.0/NY

# === Functions to compute set ================================================

def compute_escape(pos):
    """ Compute the number of steps required to escape from a point on the
    complex plane """
    z = 0 + 0j
    for i in range(ESCAPE):
        z = z**2 + pos
        if abs(z) > 2:
            break
    return i

def compute_row(xpos):
    """ Compute a 1-D array containing escape step counts for each y-position.
    """
    a = np.ndarray((NY,), dtype='i')
    for y in range(NY):
        pos = complex(XSTART,YSTART) + complex(xpos, y*yincr)
        a[y] = compute_escape(pos)
    return a

# === Functions to run process pool & visualize ===============================

def run_calculation():
    """ Begin multi-process calculation, and save to file """

    print("Creating %d-process pool" % mp.cpu_count())

    pool = mp.Pool(mp.cpu_count())

    f = h5py.File('mandelbrot.hdf5','w')

    print("Creating output dataset with shape %s x %s" % (NX, NY))

    dset = f.create_dataset('mandelbrot', (NX,NY), 'i')
    dset.attrs['XSTART'] = XSTART
    dset.attrs['YSTART'] = YSTART
    dset.attrs['XEXTENT'] = XEXTENT
    dset.attrs['YEXTENT'] = YEXTENT

    result = pool.imap(compute_row, (x*xincr for x in range(NX)))

    for idx, arr in enumerate(result):
        if idx%25 == 0: print("Recording row %s" % idx)
        dset[idx] = arr

    print("Closing HDF5 file")

    f.close()

    print("Shutting down process pool")

    pool.close()
    pool.join()

def visualize_file():
    """ Open the HDF5 file and display the result """
    try:
        import pylab as p
    except ImportError:
        print("Whoops! Matplotlib is required to view the fractal.")
        raise

    f = h5py.File('mandelbrot.hdf5','r')
    dset = f['mandelbrot']
    a = dset[...]
    p.imshow(a.transpose())

    print("Displaying fractal. Close window to exit program.")
    try:
        p.show()
    finally:
        f.close()

if __name__ == '__main__':
    if not h5py.is_hdf5('mandelbrot.hdf5'):
        run_calculation()
    else:
        print('Fractal found in "mandelbrot.hdf5". Delete file to re-run calculation.')
    visualize_file()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/percival_use_case.py0000644000175000017500000000212314045746670021240 0ustar00takluyvertakluyver'''Virtual datasets: the 'Percival Frame Builder' use case.

https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''
import h5py

in_key = 'data' # where is the data at the input?
with h5py.File('raw_file_1.h5', 'r') as f:
    dtype = f['data'].dtype
outshape = (799,2000,2000)

# Virtual target is a representation of the output dataset
layout = h5py.VirtualLayout(shape=outshape, dtype=dtype)

# Sources represent the input datasets
vsource1 = h5py.VirtualSource('raw_file_1.h5', 'data', shape=(200, 2000, 2000))
vsource2 = h5py.VirtualSource('raw_file_2.h5', 'data', shape=(200, 2000, 2000))
vsource3 = h5py.VirtualSource('raw_file_3.h5', 'data', shape=(200, 2000, 2000))
vsource4 = h5py.VirtualSource('raw_file_4.h5', 'data', shape=(199, 2000, 2000))

# Map the inputs into the virtual dataset
layout[0:799:4, :, :] = vsource1
layout[1:799:4, :, :] = vsource2
layout[2:799:4, :, :] = vsource3
layout[3:799:4, :, :] = vsource4

# Create an output file
with h5py.File('full_time_series.h5', 'w', libver='latest') as f:
    f.create_virtual_dataset(in_key, layout, fillvalue=0x1)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/store_and_retrieve_units_example.py0000644000175000017500000000247314045746670024414 0ustar00takluyvertakluyver"""
Author: Daniel Berke, berke.daniel@gmail.com
Date: October 27, 2019
Requirements: h5py>=2.10.0, unyt>=v2.4.0
Notes: This short example script shows how to save unit information attached
to a `unyt_array` using `attrs` in HDF5, and recover it upon reading the file.
It uses the Unyt package (https://github.com/yt-project/unyt) because that's
what I'm familiar with, but presumably similar options exist for Pint and
astropy.units.
"""

import h5py
import tempfile
import unyt as u

# Set up a temporary file for this example.
tf = tempfile.TemporaryFile()
f = h5py.File(tf, 'a')

# Create some mock data with moderately complicated units (this is the
# dimensional representation of Joules of energy).
test_data = [1, 2, 3, 4, 5] * u.kg * ( u.m / u.s ) ** 2
print(test_data.units)
# kg*m**2/s**2

# Create a data set to hold the numerical information:
f.create_dataset('stored data', data=test_data)

# Save the units information as a string in `attrs`.
f['stored data'].attrs['units'] = str(test_data.units)

# Now recover the data, using the saved units information to reconstruct the
# original quantities.
reconstituted_data = u.unyt_array(f['stored data'],
                                  units=f['stored data'].attrs['units'])

print(reconstituted_data.units)
# kg*m**2/s**2

assert reconstituted_data.units == test_data.units
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/store_datetimes.py0000644000175000017500000000036414045746670020764 0ustar00takluyvertakluyverimport h5py
import numpy as np

arr = np.array([np.datetime64('2019-09-22T17:38:30')])

with h5py.File('datetimes.h5', 'w') as f:
    # Create dataset
    f['data'] = arr.astype(h5py.opaque_dtype(arr.dtype))

    # Read
    print(f['data'][:])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/swmr_inotify_example.py0000644000175000017500000000516114045746670022035 0ustar00takluyvertakluyver
"""
    Demonstrate the use of h5py in SWMR mode to monitor the growth of a dataset
    on notification of file modifications.

    This demo uses pyinotify as a wrapper of Linux inotify.
    https://pypi.python.org/pypi/pyinotify

    Usage:
            swmr_inotify_example.py [FILENAME [DATASETNAME]]

              FILENAME:    name of file to monitor. Default: swmr.h5
              DATASETNAME: name of dataset to monitor in DATAFILE. Default: data

    This script will open the file in SWMR mode and monitor the shape of the
    dataset on every write event (from inotify). If another application is
    concurrently writing data to the file, the writer must have switched
    the file into SWMR mode before this script can open the file.
"""
import asyncore
import pyinotify
import sys
import h5py
import logging

#assert h5py.version.hdf5_version_tuple >= (1,9,178), "SWMR requires HDF5 version >= 1.9.178"

class EventHandler(pyinotify.ProcessEvent):

    def monitor_dataset(self, filename, datasetname):
        logging.info("Opening file %s", filename)
        self.f = h5py.File(filename, 'r', libver='latest', swmr=True)
        logging.debug("Looking up dataset %s"%datasetname)
        self.dset = self.f[datasetname]

        self.get_dset_shape()

    def get_dset_shape(self):
        logging.debug("Refreshing dataset")
        self.dset.refresh()

        logging.debug("Getting shape")
        shape = self.dset.shape
        logging.info("Read data shape: %s"%str(shape))
        return shape

    def read_dataset(self, latest):
        logging.info("Reading out dataset [%d]"%latest)
        self.dset[latest:]

    def process_IN_MODIFY(self, event):
        logging.debug("File modified!")
        shape = self.get_dset_shape()
        self.read_dataset(shape[0])

    def process_IN_CLOSE_WRITE(self, event):
        logging.info("File writer closed file")
        self.get_dset_shape()
        logging.debug("Good bye!")
        sys.exit(0)


if __name__ == "__main__":
    logging.basicConfig(format='%(asctime)s  %(levelname)s\t%(message)s',level=logging.INFO)

    file_name = "swmr.h5"
    if len(sys.argv) > 1:
        file_name = sys.argv[1]
    dataset_name = "data"
    if len(sys.argv) > 2:
        dataset_name = sys.argv[2]


    wm = pyinotify.WatchManager()  # Watch Manager
    mask = pyinotify.IN_MODIFY | pyinotify.IN_CLOSE_WRITE
    evh = EventHandler()
    evh.monitor_dataset( file_name, dataset_name )

    notifier = pyinotify.AsyncNotifier(wm, evh)
    wdd = wm.add_watch(file_name, mask, rec=False)

    # Sit in this loop() until the file writer closes the file
    # or the user hits ctrl-c
    asyncore.loop()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/swmr_multiprocess.py0000644000175000017500000000743114350630273021362 0ustar00takluyvertakluyver"""
    Demonstrate the use of h5py in SWMR mode to write to a dataset (appending)
    from one process while monitoring the growing dataset from another process.

    Usage:
            swmr_multiprocess.py [FILENAME [DATASETNAME]]

              FILENAME:    name of file to monitor. Default: swmrmp.h5
              DATASETNAME: name of dataset to monitor in DATAFILE. Default: data

    This script will start up two processes: a writer and a reader. The writer
    will open/create the file (FILENAME) in SWMR mode, create a dataset and start
    appending data to it. After each append the dataset is flushed and an event
    sent to the reader process. Meanwhile the reader process will wait for events
    from the writer and when triggered it will refresh the dataset and read the
    current shape of it.
"""

import sys
import h5py
import numpy as np
import logging
from multiprocessing import Process, Event

class SwmrReader(Process):
    def __init__(self, event, fname, dsetname, timeout = 2.0):
        super().__init__()
        self._event = event
        self._fname = fname
        self._dsetname = dsetname
        self._timeout = timeout

    def run(self):
        self.log = logging.getLogger('reader')
        self.log.info("Waiting for initial event")
        assert self._event.wait( self._timeout )
        self._event.clear()

        self.log.info("Opening file %s", self._fname)
        f = h5py.File(self._fname, 'r', libver='latest', swmr=True)
        assert f.swmr_mode
        dset = f[self._dsetname]
        try:
            # monitor and read loop
            while self._event.wait( self._timeout ):
                self._event.clear()
                self.log.debug("Refreshing dataset")
                dset.refresh()

                shape = dset.shape
                self.log.info("Read dset shape: %s"%str(shape))
        finally:
            f.close()

class SwmrWriter(Process):
    def __init__(self, event, fname, dsetname):
        super().__init__()
        self._event = event
        self._fname = fname
        self._dsetname = dsetname

    def run(self):
        self.log = logging.getLogger('writer')
        self.log.info("Creating file %s", self._fname)
        f = h5py.File(self._fname, 'w', libver='latest')
        try:
            arr = np.array([1,2,3,4])
            dset = f.create_dataset(self._dsetname, chunks=(2,), maxshape=(None,), data=arr)
            assert not f.swmr_mode

            self.log.info("SWMR mode")
            f.swmr_mode = True
            assert f.swmr_mode
            self.log.debug("Sending initial event")
            self._event.set()

            # Write loop
            for i in range(5):
                new_shape = ((i+1) * len(arr), )
                self.log.info("Resizing dset shape: %s"%str(new_shape))
                dset.resize( new_shape )
                self.log.debug("Writing data")
                dset[i*len(arr):] = arr
                #dset.write_direct( arr, np.s_[:], np.s_[i*len(arr):] )
                self.log.debug("Flushing data")
                dset.flush()
                self.log.info("Sending event")
                self._event.set()
        finally:
            f.close()


if __name__ == "__main__":
    logging.basicConfig(format='%(levelname)10s  %(asctime)s  %(name)10s  %(message)s',level=logging.INFO)
    fname = 'swmrmp.h5'
    dsetname = 'data'
    if len(sys.argv) > 1:
        fname = sys.argv[1]
    if len(sys.argv) > 2:
        dsetname = sys.argv[2]

    event = Event()
    reader = SwmrReader(event, fname, dsetname)
    writer = SwmrWriter(event, fname, dsetname)

    logging.info("Starting reader")
    reader.start()
    logging.info("Starting reader")
    writer.start()

    logging.info("Waiting for writer to finish")
    writer.join()
    logging.info("Waiting for reader to finish")
    reader.join()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/threading_example.py0000644000175000017500000002505314350630273021241 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Demonstrates use of h5py in a multi-threaded GUI program.

    In a perfect world, multi-threaded programs would practice strict
    separation of tasks, with separate threads for HDF5, user interface,
    processing, etc, communicating via queues.  In the real world, shared
    state is frequently encountered, especially in the world of GUIs.  It's
    quite common to initialize a shared resource (in this case an HDF5 file),
    and pass it around between threads.  One must then be careful to regulate
    access using locks, to ensure that each thread sees the file in a
    consistent fashion.

    This program demonstrates how to use h5py in a medium-sized
    "shared-state" threading application.  Two threads exist: a GUI thread
    (Tkinter) which takes user input and displays results, and a calculation
    thread which is used to perform computation in the background, leaving
    the GUI responsive to user input.

    The computation thread calculates portions of the Mandelbrot set and
    stores them in an HDF5 file.  The visualization/control thread reads
    datasets from the same file and displays them using matplotlib.
"""

import tkinter as tk
import threading

import numpy as np
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure

import h5py


file_lock = threading.RLock()  # Protects the file from concurrent access

t = None  # We'll use this to store the active computation thread

class ComputeThread(threading.Thread):

    """
        Computes a slice of the Mandelbrot set, and saves it to the HDF5 file.
    """

    def __init__(self, f, shape, escape, startcoords, extent, eventcall):
        """ Set up a computation thread.

        f: HDF5 File object
        shape: 2-tuple (NX, NY)
        escape: Integer giving max iterations to escape
        startcoords: Complex number giving initial location on the plane
        extent: Complex number giving calculation extent on the plane
        """
        self.f = f
        self.shape = shape
        self.escape = escape
        self.startcoords = startcoords
        self.extent = extent
        self.eventcall = eventcall

        threading.Thread.__init__(self)

    def run(self):
        """ Perform computations and record the result to file """

        nx, ny = self.shape

        arr = np.ndarray((nx,ny), dtype='i')

        xincr = self.extent.real/nx
        yincr = self.extent.imag/ny

        def compute_escape(pos, escape):
            """ Compute the number of steps required to escape """
            z = 0+0j
            for i in range(escape):
                z = z**2 + pos
                if abs(z) > 2:
                    break
            return i

        for x in range(nx):
            if x%25 == 0: print("Computing row %d" % x)
            for y in range(ny):
                pos = self.startcoords + complex(x*xincr, y*yincr)
                arr[x,y] = compute_escape(pos, self.escape)

        with file_lock:
            dsname = "slice%03d" % len(self.f)
            dset = self.f.create_dataset(dsname, (nx, ny), 'i')
            dset.attrs['shape'] = self.shape
            dset.attrs['start'] = self.startcoords
            dset.attrs['extent'] = self.extent
            dset.attrs['escape'] = self.escape
            dset[...] = arr

        print("Calculation for %s done" % dsname)

        self.eventcall()

class ComputeWidget:

    """
        Responsible for input widgets, and starting new computation threads.
    """

    def __init__(self, f, master, eventcall):

        self.f = f

        self.eventcall = eventcall

        self.mainframe = tk.Frame(master=master)

        entryframe = tk.Frame(master=self.mainframe)

        nxlabel = tk.Label(entryframe, text="NX")
        nylabel = tk.Label(entryframe, text="NY")
        escapelabel = tk.Label(entryframe, text="Escape")
        startxlabel = tk.Label(entryframe, text="Start X")
        startylabel = tk.Label(entryframe, text="Start Y")
        extentxlabel = tk.Label(entryframe, text="Extent X")
        extentylabel = tk.Label(entryframe, text="Extent Y")

        self.nxfield = tk.Entry(entryframe)
        self.nyfield = tk.Entry(entryframe)
        self.escapefield = tk.Entry(entryframe)
        self.startxfield = tk.Entry(entryframe)
        self.startyfield = tk.Entry(entryframe)
        self.extentxfield = tk.Entry(entryframe)
        self.extentyfield = tk.Entry(entryframe)

        nxlabel.grid(row=0, column=0, sticky=tk.E)
        nylabel.grid(row=1, column=0, sticky=tk.E)
        escapelabel.grid(row=2, column=0, sticky=tk.E)
        startxlabel.grid(row=3, column=0, sticky=tk.E)
        startylabel.grid(row=4, column=0, sticky=tk.E)
        extentxlabel.grid(row=5, column=0, sticky=tk.E)
        extentylabel.grid(row=6, column=0, sticky=tk.E)

        self.nxfield.grid(row=0, column=1)
        self.nyfield.grid(row=1, column=1)
        self.escapefield.grid(row=2, column=1)
        self.startxfield.grid(row=3, column=1)
        self.startyfield.grid(row=4, column=1)
        self.extentxfield.grid(row=5, column=1)
        self.extentyfield.grid(row=6, column=1)

        entryframe.grid(row=0, rowspan=2, column=0)

        self.suggestbutton = tk.Button(master=self.mainframe, text="Suggest", command=self.suggest)
        self.computebutton = tk.Button(master=self.mainframe, text="Compute", command=self.compute)

        self.suggestbutton.grid(row=0, column=1)
        self.computebutton.grid(row=1, column=1)

        self.suggest = 0

    def compute(self, *args):
        """ Validate input and start calculation thread.

        We use a global variable "t" to store the current thread, to make
        sure old threads are properly joined before they are discarded.
        """
        global t

        try:
            nx = int(self.nxfield.get())
            ny = int(self.nyfield.get())
            escape = int(self.escapefield.get())
            start = complex(float(self.startxfield.get()), float(self.startyfield.get()))
            extent = complex(float(self.extentxfield.get()), float(self.extentyfield.get()))
            if (nx<=0) or (ny<=0) or (escape<=0):
                raise ValueError("NX, NY and ESCAPE must be positive")
            if abs(extent)==0:
                raise ValueError("Extent must be finite")
        except (ValueError, TypeError) as e:
            print(e)
            return

        if t is not None:
            t.join()

        t = ComputeThread(self.f, (nx,ny), escape, start, extent, self.eventcall)
        t.start()

    def suggest(self, *args):
        """ Populate the input fields with interesting locations """

        suggestions = [(200,200,50, -2, -1, 3, 2),
                       (500, 500, 200, 0.110, -0.680, 0.05, 0.05),
                       (200, 200, 1000, -0.16070135-5e-8, 1.0375665-5e-8, 1e-7, 1e-7),
                       (500, 500, 100, -1, 0, 0.5, 0.5)]

        for entry, val in zip((self.nxfield, self.nyfield, self.escapefield,
                self.startxfield, self.startyfield, self.extentxfield,
                self.extentyfield), suggestions[self.suggest]):
            entry.delete(0, 999)
            entry.insert(0, repr(val))

        self.suggest = (self.suggest+1)%len(suggestions)


class ViewWidget:

    """
        Draws images using the datasets recorded in the HDF5 file.  Also
        provides widgets to pick which dataset is displayed.
    """

    def __init__(self, f, master):

        self.f = f

        self.mainframe = tk.Frame(master=master)
        self.lbutton = tk.Button(self.mainframe, text="<= Back", command=self.back)
        self.rbutton = tk.Button(self.mainframe, text="Next =>", command=self.forward)
        self.loclabel = tk.Label(self.mainframe, text='To start, enter values and click "compute"')
        self.infolabel = tk.Label(self.mainframe, text='Or, click the "suggest" button for interesting locations')

        self.fig = Figure(figsize=(5, 5), dpi=100)
        self.plot = self.fig.add_subplot(111)
        self.canvas = FigureCanvasTkAgg(self.fig, master=self.mainframe)
        self.canvas.draw_idle()

        self.loclabel.grid(row=0, column=1)
        self.infolabel.grid(row=1, column=1)
        self.lbutton.grid(row=2, column=0)
        self.canvas.get_tk_widget().grid(row=2, column=1)
        self.rbutton.grid(row=2, column=2)

        self.index = 0

        self.jumptolast()

    def draw_fractal(self):
        """ Read a dataset from the HDF5 file and display it """

        with file_lock:
            name = list(self.f.keys())[self.index]
            dset = self.f[name]
            arr = dset[...]
            start = dset.attrs['start']
            extent = dset.attrs['extent']
            self.loclabel["text"] = 'Displaying dataset "%s" (%d of %d)' % (dset.name, self.index+1, len(self.f))
            self.infolabel["text"] = "%(shape)s pixels, starts at %(start)s, extent %(extent)s" % dset.attrs

        self.plot.clear()
        self.plot.imshow(arr.transpose(), cmap='jet', aspect='auto', origin='lower',
                         extent=(start.real, (start.real+extent.real),
                                 start.imag, (start.imag+extent.imag)))
        self.canvas.draw_idle()

    def back(self):
        """ Go to the previous dataset (in ASCII order) """
        if self.index == 0:
            print("Can't go back")
            return
        self.index -= 1
        self.draw_fractal()

    def forward(self):
        """ Go to the next dataset (in ASCII order) """
        if self.index == (len(self.f)-1):
            print("Can't go forward")
            return
        self.index += 1
        self.draw_fractal()

    def jumptolast(self,*args):
        """ Jump to the last (ASCII order) dataset and display it """
        with file_lock:
            if len(self.f) == 0:
                print("can't jump to last (no datasets)")
                return
            index = len(self.f)-1
        self.index = index
        self.draw_fractal()


if __name__ == '__main__':

    f = h5py.File('mandelbrot_gui.hdf5', 'a')

    root = tk.Tk()

    display = ViewWidget(f, root)

    root.bind("<>", display.jumptolast)
    def callback():
        root.event_generate("<>")
    compute = ComputeWidget(f, root, callback)

    display.mainframe.grid(row=0, column=0)
    compute.mainframe.grid(row=1, column=0)

    try:
        root.mainloop()
    finally:
        if t is not None:
            t.join()
        f.close()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/examples/vds_simple.py0000644000175000017500000000222714045746670017736 0ustar00takluyvertakluyver"""A simple example of building a virtual dataset.

This makes four 'source' HDF5 files, each with a 1D dataset of 100 numbers.
Then it makes a single 4x100 virtual dataset in a separate file, exposing
the four sources as one dataset.
"""

import h5py
import numpy as np

# create some sample data
data = np.arange(0, 100).reshape(1, 100) + np.arange(1, 5).reshape(4, 1)

# Create source files (0.h5 to 3.h5)
for n in range(4):
    with h5py.File(f"{n}.h5", "w") as f:
        d = f.create_dataset("data", (100,), "i4", data[n])

# Assemble virtual dataset
layout = h5py.VirtualLayout(shape=(4, 100), dtype="i4")
for n in range(4):
    filename = "{}.h5".format(n)
    vsource = h5py.VirtualSource(filename, "data", shape=(100,))
    layout[n] = vsource

# Add virtual dataset to output file
with h5py.File("VDS.h5", "w", libver="latest") as f:
    f.create_virtual_dataset("vdata", layout, fillvalue=-5)
    f.create_dataset("data", data=data, dtype="i4")


# read data back
# virtual dataset is transparent for reader!
with h5py.File("VDS.h5", "r") as f:
    print("Virtual dataset:")
    print(f["vdata"][:, :10])
    print("Normal dataset:")
    print(f["data"][:, :10])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/examples/write-direct-compressed.py0000644000175000017500000000235414350630273022324 0ustar00takluyvertakluyver"""Write compressed chunks directly, bypassing HDF5's filters
"""
import h5py
import numpy as np
import zlib

f = h5py.File("direct_chunk.h5", "w")

block_size = 2048
dataset = f.create_dataset(
    "data", (256, 1024, 1024), dtype="uint16", chunks=(64, 128, 128),
    compression="gzip", compression_opts=4,
)
# h5py's compression='gzip' is a bit of a misnomer: the filter stores plain
# zlib (DEFLATE) data, whereas Python's gzip module applies the same compression
# but wraps it in extra metadata before & after the compressed data.  So the
# chunk below must be compressed with zlib.compress(); gzip.compress() won't work.

# Random numbers with only a few possibilities, so some compression is possible.
array = np.random.randint(0, 10, size=(64, 128, 128), dtype=np.uint16)

# Compress the data, and write it into the dataset. (0, 0, 128) are coordinates
# for the start of a chunk. Equivalent to:
#   dataset[0:64, 0:128, 128:256] = array
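# Note that the offset passed to write_direct_chunk() must be the start of a
# chunk, i.e. a multiple of the chunk shape (64, 128, 128) in each dimension.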
compressed = zlib.compress(array, level=4)
dataset.id.write_direct_chunk((0, 0, 128), compressed)
print(f"Written {len(compressed)} bytes compressed data")

# Read the chunk back (HDF5 will decompress it) and check the data is the same.
read_data = dataset[:64, :128, 128:256]
np.testing.assert_array_equal(read_data, array)
print(f"Verified array of {read_data.size} elements ({read_data.nbytes} bytes)")

f.close()
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.567882
h5py-3.13.0/h5py/0000755000175000017500000000000014755127217014261 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/__init__.py0000644000175000017500000000711614507227212016366 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    This is the h5py package, a Python interface to the HDF5
    scientific data format.
"""

from warnings import warn as _warn
import atexit


# --- Library setup -----------------------------------------------------------

# When importing from the root of the unpacked tarball or git checkout,
# Python sees the "h5py" source directory and tries to load it, which fails.
# We tried working around this by using "package_dir" but that breaks Cython.
try:
    from . import _errors
except ImportError:
    import os.path as _op
    if _op.exists(_op.join(_op.dirname(__file__), '..', 'setup.py')):
        raise ImportError("You cannot import h5py from inside the install directory.\nChange to another directory first.")
    else:
        raise

from . import version

if version.hdf5_version_tuple != version.hdf5_built_version_tuple:
    _warn(("h5py is running against HDF5 {0} when it was built against {1}, "
           "this may cause problems").format(
            '{0}.{1}.{2}'.format(*version.hdf5_version_tuple),
            '{0}.{1}.{2}'.format(*version.hdf5_built_version_tuple)
    ))


_errors.silence_errors()

from ._conv import register_converters as _register_converters, \
                   unregister_converters as _unregister_converters
_register_converters()
atexit.register(_unregister_converters)

from .h5z import _register_lzf
_register_lzf()


# --- Public API --------------------------------------------------------------

from . import h5a, h5d, h5ds, h5f, h5fd, h5g, h5r, h5s, h5t, h5p, h5z, h5pl

from ._hl import filters
from ._hl.base import is_hdf5, HLObject, Empty
from ._hl.files import (
    File,
    register_driver,
    unregister_driver,
    registered_drivers,
)
from ._hl.group import Group, SoftLink, ExternalLink, HardLink
from ._hl.dataset import Dataset
from ._hl.datatype import Datatype
from ._hl.attrs import AttributeManager
from ._hl.vds import VirtualSource, VirtualLayout

from ._selector import MultiBlockSlice
from .h5 import get_config
from .h5r import Reference, RegionReference
from .h5t import (special_dtype, check_dtype,
    vlen_dtype, string_dtype, enum_dtype, ref_dtype, regionref_dtype,
    opaque_dtype,
    check_vlen_dtype, check_string_dtype, check_enum_dtype, check_ref_dtype,
    check_opaque_dtype,
)
from .h5s import UNLIMITED

from .version import version as __version__


def run_tests(args=''):
    """Run tests with pytest and returns the exit status as an int.
    """
    # Lazy-loading of tests package to avoid strong dependency on test
    # requirements, e.g. pytest
    from .tests import run_tests
    return run_tests(args)


def enable_ipython_completer():
    """ Call this from an interactive IPython session to enable tab-completion
    of group and attribute names.
    """
    import sys
    if 'IPython' in sys.modules:
        ip_running = False
        try:
            from IPython.core.interactiveshell import InteractiveShell
            ip_running = InteractiveShell.initialized()
        except ImportError:
            # support <ipython-0.11

cdef extern from "numpy/arrayobject.h":
    void PyArray_ENABLEFLAGS(cnp.ndarray arr, int flags)


ctypedef int (*conv_operator_t)(void* ipt, void* opt, void* bkg, void* priv) except -1
ctypedef herr_t (*init_operator_t)(hid_t src, hid_t dst, void** priv) except -1

# Generic conversion callback
#
# The actual conversion routines are one-liners which plug the appropriate
# operator callback into this function.  This prevents us from having to
# repeat all the conversion boilerplate for every single callback.
#
# While this is somewhat slower than a custom function, the added overhead is
# likely small compared to the cost of the Python-side API calls required to
# implement the conversions.
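#
# For example, the vlen2str wrapper defined further down simply does:
#
#   return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
#            buf_i, bkg_i, dxpl, conv_vlen2str, init_vlen2str, H5T_BKG_YES)
#
# so each conversion only has to supply its operator and init callbacks.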
cdef herr_t generic_converter(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl, conv_operator_t op,
                    init_operator_t initop, H5T_bkg_t need_bkg)  except -1:
    cdef:
        int command
        conv_size_t *sizes
        int i
        char* buf = <char*>buf_i
        char* bkg = <char*>bkg_i

    command = cdata[0].command
    if command == H5T_CONV_INIT:
        cdata[0].need_bkg = need_bkg
        return initop(src_id, dst_id, &(cdata[0].priv))

    elif command == H5T_CONV_FREE:
        efree(cdata[0].priv)
        cdata[0].priv = NULL

    elif command == H5T_CONV_CONV:
        sizes = <conv_size_t*>cdata[0].priv
        if H5Tis_variable_str(src_id):
            sizes.cset = H5Tget_cset(src_id)
        elif H5Tis_variable_str(dst_id):
            sizes.cset = H5Tget_cset(dst_id)
        if bkg_stride==0:
            bkg_stride = sizes[0].dst_size
        if buf_stride == 0:
            # No explicit stride seems to mean that the elements are packed
            # contiguously in the buffer.  In this case we must be careful
            # not to "stomp on" input elements if the output elements are
            # of a larger size.
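            #
            # For example, when the output elements are wider than the input
            # elements, converting element 0 first would overwrite the still
            # unread input of element 1; iterating from the last element
            # backwards avoids that.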

            if sizes[0].src_size >= sizes[0].dst_size:
                for i in range(nl):
                    op( buf + (i*sizes[0].src_size),    # input pointer
                        buf + (i*sizes[0].dst_size),    # output pointer
                        bkg + (i*bkg_stride),           # backing buffer
                        cdata[0].priv)                  # conversion context
            else:
                for i in range(nl-1, -1, -1):
                    op( buf + (i*sizes[0].src_size),
                        buf + (i*sizes[0].dst_size),
                        bkg + (i*bkg_stride),
                        cdata[0].priv)
        else:
            # With explicit strides, we assume that the library knows the
            # alignment better than us.  Therefore we use the given stride
            # offsets exclusively.
            for i in range(nl):
                op( buf + (i*buf_stride),
                    buf + (i*buf_stride),   # note this is the same!
                    bkg + (i*bkg_stride),
                    cdata[0].priv)
    else:
        return -2   # Unrecognized command.  Note this is NOT an exception.
    return 0

# =============================================================================
# Helper functions

cdef void log_convert_registered(hid_t src, hid_t dst):
    logger.debug("Creating converter from %s to %s", H5Tget_class(src), H5Tget_class(dst))


# =============================================================================
# Generic conversion

ctypedef struct conv_size_t:
    size_t src_size
    size_t dst_size
    int cset

cdef herr_t init_generic(hid_t src, hid_t dst, void** priv) except -1:

    cdef conv_size_t *sizes
    sizes = <conv_size_t*>emalloc(sizeof(conv_size_t))
    priv[0] = sizes
    sizes[0].src_size = H5Tget_size(src)
    sizes[0].dst_size = H5Tget_size(dst)
    log_convert_registered(src, dst)

    return 0

# =============================================================================
# Vlen string conversion

cdef bint _is_pyobject_opaque(hid_t obj):
    # This complexity is needed to ensure:
    #   1) that ctag is freed
    #   2) that we don't segfault (for some reason a try-finally statement is
    #      needed, even if we do what appear to be the right steps in copying
    #      and freeing)
    cdef char* ctag = NULL
    try:
        if H5Tget_class(obj) == H5T_OPAQUE:
            ctag = H5Tget_tag(obj)
            if ctag != NULL:
                if strcmp(ctag, H5PY_PYTHON_OPAQUE_TAG) == 0:
                    return True
        return False
    finally:
        H5free_memory(ctag)

cdef herr_t init_vlen2str(hid_t src_vlen, hid_t dst_str, void** priv) except -1:
    # /!\ Untested
    cdef conv_size_t *sizes

    if not H5Tis_variable_str(src_vlen):
        return -2

    if not _is_pyobject_opaque(dst_str):
        return -2

    log_convert_registered(src_vlen, dst_str)

    sizes = <conv_size_t*>emalloc(sizeof(conv_size_t))
    priv[0] = sizes

    sizes[0].src_size = H5Tget_size(src_vlen)
    sizes[0].dst_size = H5Tget_size(dst_str)
    return 0

cdef herr_t init_str2vlen(hid_t src_str, hid_t dst_vlen, void** priv) except -1:
    # /!\ untested !
    cdef conv_size_t *sizes

    if not H5Tis_variable_str(dst_vlen):
        return -2

    if not _is_pyobject_opaque(src_str):
        return -2

    log_convert_registered(src_str, dst_vlen)

    sizes = <conv_size_t*>emalloc(sizeof(conv_size_t))
    priv[0] = sizes
    sizes[0].src_size = H5Tget_size(src_str)
    sizes[0].dst_size = H5Tget_size(dst_vlen)

    return 0

cdef int conv_vlen2str(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>opt
        char** buf_cstring = <char**>ipt
        PyObject* tmp_object
        bytes tmp_bytes
        conv_size_t *sizes = <conv_size_t*>priv
        char* buf_cstring0

    buf_cstring0 = buf_cstring[0]

    if buf_cstring0 == NULL:
        tmp_bytes =  b""
    else:
        tmp_bytes = buf_cstring0  # Let Cython convert char* -> bytes for us
    tmp_object = <PyObject*>tmp_bytes

    # Since all data conversions are by definition in-place, it
    # is our responsibility to free the memory used by the vlens.
    efree(buf_cstring0)

    # Write the new unicode object to the buffer in-place and ensure it is not destroyed
    buf_obj[0] = tmp_object
    Py_XINCREF(tmp_object)
    return 0

cdef int conv_str2vlen(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>ipt
        char** buf_cstring = <char**>opt
        conv_size_t* sizes = <conv_size_t*>priv
        char* temp_string = NULL
        size_t temp_string_len = 0  # Not including null term
        PyObject* buf_obj0
        char* buf_cstring0
        object temp_object

    buf_obj0 = buf_obj[0]
    temp_object = <object>buf_obj0

    if isinstance(temp_object, unicode):
        enc = 'utf-8' if (sizes[0].cset == H5T_CSET_UTF8) else 'ascii'
        temp_object = temp_object.encode(enc)

    elif not isinstance(temp_object, bytes):
        raise TypeError("Can't implicitly convert non-string objects to strings")

    # temp_object is bytes
    temp_string = temp_object  # cython cast it as char *
    temp_string_len = len(temp_object)

    if strlen(temp_string) != temp_string_len:
        raise ValueError("VLEN strings do not support embedded NULLs")
    buf_cstring0 = <char*>emalloc(temp_string_len+1)
    memcpy(buf_cstring0, temp_string, temp_string_len+1)
    buf_cstring[0] = buf_cstring0

    return 0

# =============================================================================
# VLEN to fixed-width strings

cdef herr_t init_vlen2fixed(hid_t src, hid_t dst, void** priv) except -1:
    cdef conv_size_t *sizes

    # /!\ Untested

    if not (H5Tis_variable_str(src) and (not H5Tis_variable_str(dst))):
        return -2
    log_convert_registered(src, dst)

    sizes = <conv_size_t*>emalloc(sizeof(conv_size_t))
    priv[0] = sizes

    sizes[0].src_size = H5Tget_size(src)
    sizes[0].dst_size = H5Tget_size(dst)
    return 0

cdef herr_t init_fixed2vlen(hid_t src, hid_t dst, void** priv) except -1:

    cdef conv_size_t *sizes
    if not (H5Tis_variable_str(dst) and (not H5Tis_variable_str(src))):
        return -2
    log_convert_registered(src, dst)

    # /!\ untested !

    sizes = <conv_size_t*>emalloc(sizeof(conv_size_t))
    priv[0] = sizes
    sizes[0].src_size = H5Tget_size(src)
    sizes[0].dst_size = H5Tget_size(dst)

    return 0

cdef int conv_vlen2fixed(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        char** buf_vlen = <char**>ipt
        char* buf_fixed = <char*>opt
        char* temp_string = NULL
        size_t temp_string_len = 0  # Without null term
        conv_size_t *sizes = <conv_size_t*>priv
        char* buf_vlen0

    # /!\ untested !

    buf_vlen0 = buf_vlen[0]

    if buf_vlen0 != NULL:
        temp_string = buf_vlen0
        temp_string_len = strlen(temp_string)

        if temp_string_len <= sizes[0].dst_size:
            # Pad with zeros
            memcpy(buf_fixed, temp_string, temp_string_len)
            memset(buf_fixed + temp_string_len, c'\0', sizes[0].dst_size - temp_string_len)
        else:
            # Simply truncate the string
            memcpy(buf_fixed, temp_string, sizes[0].dst_size)
    else:
        memset(buf_fixed, c'\0', sizes[0].dst_size)

    return 0

cdef int conv_fixed2vlen(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        char** buf_vlen = <char**>opt
        char* buf_fixed = <char*>ipt
        char* temp_string = NULL
        conv_size_t *sizes = <conv_size_t*>priv

    # /!\ untested !

    temp_string = <char*>emalloc(sizes[0].src_size+1)
    memcpy(temp_string, buf_fixed, sizes[0].src_size)
    temp_string[sizes[0].src_size] = c'\0'

    memcpy(buf_vlen, &temp_string, sizeof(temp_string))

    return 0

# =============================================================================
# HDF5 references to Python instances of h5r.Reference

cdef inline int conv_objref2pyref(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>opt
        hobj_ref_t* buf_ref = <hobj_ref_t*>ipt
        Reference ref
        PyObject* ref_ptr = NULL

    ref = Reference()
    ref.ref.obj_ref = buf_ref[0]
    ref.typecode = H5R_OBJECT

    ref_ptr = <PyObject*>ref
    Py_INCREF(ref)  # prevent ref from garbage collection
    buf_obj[0] = ref_ptr

    return 0

cdef inline int conv_pyref2objref(void* ipt, void* opt, void* bkg, void* priv)  except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>ipt
        hobj_ref_t* buf_ref = <hobj_ref_t*>opt
        object obj
        Reference ref
        PyObject* buf_obj0

    buf_obj0 = buf_obj[0]

    if buf_obj0 != NULL and buf_obj0 != Py_None:
        obj = <object>(buf_obj0)
        if not isinstance(obj, Reference):
            raise TypeError("Can't convert incompatible object to HDF5 object reference")
        ref = <Reference>(buf_obj0)
        buf_ref[0] = ref.ref.obj_ref
    else:
        memset(buf_ref, c'\0', sizeof(hobj_ref_t))

    return 0

cdef inline int conv_regref2pyref(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>opt
        PyObject** bkg_obj = <PyObject**>bkg
        hdset_reg_ref_t* buf_ref = <hdset_reg_ref_t*>ipt
        RegionReference ref
        PyObject* ref_ptr = NULL
        PyObject* bkg_obj0

    bkg_obj0 = bkg_obj[0]
    ref = RegionReference()
    ref.ref.reg_ref = buf_ref[0]
    ref.typecode = H5R_DATASET_REGION
    ref_ptr = <PyObject*>ref
    Py_INCREF(ref)  # because Cython discards its reference when the
                    # function exits

    Py_XDECREF(bkg_obj0)
    buf_obj[0] = ref_ptr

    return 0

cdef inline int conv_pyref2regref(void* ipt, void* opt, void* bkg, void* priv) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>ipt
        hdset_reg_ref_t* buf_ref = <hdset_reg_ref_t*>opt
        object obj
        RegionReference ref
        PyObject* buf_obj0

    buf_obj0 = buf_obj[0]

    if buf_obj0 != NULL and buf_obj0 != Py_None:
        obj = <object>(buf_obj0)
        if not isinstance(obj, RegionReference):
            raise TypeError("Can't convert incompatible object to HDF5 region reference")
        ref = <RegionReference>(buf_obj0)
        IF HDF5_VERSION >= (1, 12, 0):
            memcpy(buf_ref, ref.ref.reg_ref.data, sizeof(hdset_reg_ref_t))
        ELSE:
            memcpy(buf_ref, ref.ref.reg_ref, sizeof(hdset_reg_ref_t))
    else:
        memset(buf_ref, c'\0', sizeof(hdset_reg_ref_t))

    return 0

# =============================================================================
# Conversion functions


cdef inline herr_t vlen2str(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl,  conv_vlen2str, init_vlen2str, H5T_BKG_YES)

cdef inline herr_t str2vlen(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl)except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_str2vlen, init_str2vlen, H5T_BKG_NO)

cdef inline herr_t vlen2fixed(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_vlen2fixed, init_vlen2fixed, H5T_BKG_NO)

cdef inline herr_t fixed2vlen(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_fixed2vlen, init_fixed2vlen, H5T_BKG_NO)

cdef inline herr_t objref2pyref(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_objref2pyref, init_generic, H5T_BKG_NO)

cdef inline herr_t pyref2objref(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_pyref2objref, init_generic, H5T_BKG_NO)

cdef inline herr_t regref2pyref(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_regref2pyref, init_generic, H5T_BKG_YES)

cdef inline herr_t pyref2regref(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return generic_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, conv_pyref2regref, init_generic, H5T_BKG_NO)

# =============================================================================
# Enum to integer converter

cdef struct conv_enum_t:
    size_t src_size
    size_t dst_size

cdef int enum_int_converter_init(hid_t src, hid_t dst,
                                 H5T_cdata_t *cdata, int forward) except -1:
    cdef conv_enum_t *info

    cdata[0].need_bkg = H5T_BKG_NO
    cdata[0].priv = info = <conv_enum_t*>emalloc(sizeof(conv_enum_t))
    info[0].src_size = H5Tget_size(src)
    info[0].dst_size = H5Tget_size(dst)

cdef void enum_int_converter_free(H5T_cdata_t *cdata):
    cdef conv_enum_t *info

    info = <conv_enum_t*>cdata[0].priv
    efree(info)
    cdata[0].priv = NULL


cdef int enum_int_converter_conv(hid_t src, hid_t dst, H5T_cdata_t *cdata,
                                  size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                                 void *bkg_i, hid_t dxpl, int forward) except -1:
    cdef:
        conv_enum_t *info
        size_t nalloc
        int i
        char* cbuf = NULL
        char* buf = <char*>buf_i
        int identical
        hid_t supertype = -1

    info = <conv_enum_t*>cdata[0].priv

    try:
        if forward:
            supertype = H5Tget_super(src)
            identical = H5Tequal(supertype, dst)
        else:
            supertype = H5Tget_super(dst)
            identical = H5Tequal(supertype, src)

        # Short-circuit success
        if identical:
            return 0

        if buf_stride == 0:
            # Contiguous case: call H5Tconvert directly
            if forward:
                H5Tconvert(supertype, dst, nl, buf, NULL, dxpl)
            else:
                H5Tconvert(src, supertype, nl, buf, NULL, dxpl)
        else:
            # Non-contiguous: gather, convert and then scatter
            if info[0].src_size > info[0].dst_size:
                nalloc = info[0].src_size*nl
            else:
                nalloc = info[0].dst_size*nl

            cbuf = <char*>emalloc(nalloc)
            if cbuf == NULL:
                raise MemoryError()

            for i in range(nl):
                memcpy(cbuf + (i*info[0].src_size), buf + (i*buf_stride),
                        info[0].src_size)

            if forward:
                H5Tconvert(supertype, dst, nl, cbuf, NULL, dxpl)
            else:
                H5Tconvert(src, supertype, nl, cbuf, NULL, dxpl)

            for i in range(nl):
                memcpy(buf + (i*buf_stride), cbuf + (i*info[0].dst_size),
                        info[0].dst_size)

    finally:
        efree(cbuf)
        cbuf = NULL
        if supertype > 0:
            H5Tclose(supertype)

    return 0


# Direction ("forward"): 1 = enum to int, 0 = int to enum
cdef herr_t enum_int_converter(hid_t src, hid_t dst, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                               void *bkg_i, hid_t dxpl, int forward) except -1:

    cdef int command = cdata[0].command

    if command == H5T_CONV_INIT:
        enum_int_converter_init(src, dst, cdata, forward)
    elif command == H5T_CONV_FREE:
        enum_int_converter_free(cdata)
    elif command == H5T_CONV_CONV:
        return enum_int_converter_conv(src, dst, cdata, nl, buf_stride,
                                       bkg_stride, buf_i, bkg_i, dxpl, forward)
    else:
        return -2

    return 0


cdef herr_t enum2int(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return enum_int_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, 1)

cdef herr_t int2enum(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                    size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                    void *bkg_i, hid_t dxpl) except -1 with gil:
    return enum_int_converter(src_id, dst_id, cdata, nl, buf_stride, bkg_stride,
             buf_i, bkg_i, dxpl, 0)

# =============================================================================
# ndarray to VLEN routines

cdef herr_t vlen2ndarray(hid_t src_id,
                         hid_t dst_id,
                         H5T_cdata_t *cdata,
                         size_t nl,
                         size_t buf_stride,
                         size_t bkg_stride,
                         void *buf_i,
                         void *bkg_i,
                         hid_t dxpl) except -1 with gil:
    """Convert variable length object to numpy array, typically a list of strings

    :param src_id: Identifier for the source datatype.
    :param dst_id: Identifier for the destination datatype.
    :param nl: number of element
    :param buf_stride: Array containing pre- and post-conversion values.
    :param bkg_stride: Optional background buffer
    :param dxpl: Dataset transfer property list identifier.
    :return: error-code
    """
    cdef:
        int command = cdata[0].command
        size_t src_size, dst_size
        TypeID supertype
        TypeID outtype
        cnp.dtype dt
        int i
        char* buf = <char*>buf_i

    if command == H5T_CONV_INIT:
        cdata[0].need_bkg = H5T_BKG_NO
        if H5Tget_class(src_id) != H5T_VLEN or H5Tget_class(dst_id) != H5T_OPAQUE:
            return -2

    elif command == H5T_CONV_FREE:
        pass

    elif command == H5T_CONV_CONV:
        # need to pass element dtype to converter
        supertype = typewrap(H5Tget_super(src_id))
        dt = supertype.dtype
        outtype = py_create(dt)

        if buf_stride == 0:
            # No explicit stride seems to mean that the elements are packed
            # contiguously in the buffer.  In this case we must be careful
            # not to "stomp on" input elements if the output elements are
            # of a larger size.

            src_size = H5Tget_size(src_id)
            dst_size = H5Tget_size(dst_id)

            if src_size >= dst_size:
                for i in range(nl):
                    conv_vlen2ndarray(buf + (i*src_size), buf + (i*dst_size),
                                      dt, supertype, outtype)
            else:
                for i in range(nl-1, -1, -1):
                    conv_vlen2ndarray(buf + (i*src_size), buf + (i*dst_size),
                                      dt, supertype, outtype)
        else:
            # With explicit strides, we assume that the library knows the
            # alignment better than us.  Therefore we use the given stride
            # offsets exclusively.
            for i in range(nl):
                conv_vlen2ndarray(buf + (i*buf_stride), buf + (i*buf_stride),
                                  dt, supertype, outtype)

    else:
        return -2   # Unrecognized command.  Note this is NOT an exception.

    return 0


cdef struct vlen_t:
    size_t len
    void* ptr

cdef int conv_vlen2ndarray(void* ipt,
                           void* opt,
                           cnp.dtype elem_dtype,
                           TypeID intype,
                           TypeID outtype) except -1:
    """Convert variable length strings to numpy array

    :param ipt: input pointer: Point to the input data
    :param opt: output pointer: will contains the numpy array after exit
    :param elem_dtype: dtype of the element
    :param intype: ?
    :param outtype: ?
    """
    cdef:
        PyObject** buf_obj = <PyObject**>opt
        vlen_t* in_vlen = <vlen_t*>ipt
        int flags = NPY_ARRAY_WRITEABLE | NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_OWNDATA
        npy_intp dims[1]
        void* data
        char[:] buf
        void* back_buf = NULL
        cnp.ndarray ndarray
        PyObject* ndarray_obj
        vlen_t in_vlen0
        size_t size, itemsize

    #Replaces the memcpy
    size = in_vlen0.len = in_vlen[0].len
    data = in_vlen0.ptr = in_vlen[0].ptr

    dims[0] = size
    itemsize = H5Tget_size(outtype.id)
    if itemsize > H5Tget_size(intype.id):
        data = realloc(data, itemsize * size)

    if needs_bkg_buffer(intype.id, outtype.id):
        back_buf = emalloc(H5Tget_size(outtype.id)*size)

    try:
        H5Tconvert(intype.id, outtype.id, size, data, back_buf, H5P_DEFAULT)
    finally:
        free(back_buf)

    # We need to use different approaches to creating the ndarray with the converted
    # data depending on the destination dtype.
    # For simple dtypes, we can use SimpleNewFromData, but types like
    # string & void need a size specified, so this function can't be used.
    # Additionally, Cython doesn't expose NumPy C-API functions like NewFromDescr,
    # so we fall back on copying directly to the underlying buffer
    # of a new ndarray for other types.

    if elem_dtype.kind in b"biufcmMO":
        # type_num is enough to create an array for these dtypes
        ndarray = cnp.PyArray_SimpleNewFromData(1, dims, elem_dtype.type_num, data)
    elif not elem_dtype.hasobject:
        # This covers things like string dtypes and simple compound dtypes,
        # which can't be used with SimpleNewFromData.
        # Cython doesn't expose NumPy C-API functions
        # like NewFromDescr, so we'll construct this with a Python function.
        buf = <char[:itemsize * size]>data
        ndarray = np.frombuffer(buf, dtype=elem_dtype)
    else:
        # Compound dtypes containing object fields: frombuffer() refuses these,
        # so we'll fall back to allocating a new array and copying the data in.
        ndarray = np.empty(size, dtype=elem_dtype)
        memcpy(PyArray_DATA(ndarray), data, itemsize * size)

        # In this code path, `data`, allocated by hdf5 to hold the v-len data,
        # will no longer be used since we have copied its contents to the ndarray.
        efree(data)

    PyArray_ENABLEFLAGS(ndarray, flags)
    ndarray_obj = <PyObject*>ndarray

    in_vlen0.ptr = NULL

    # Write the new ndarray object to the buffer in-place and ensure it is not destroyed
    buf_obj[0] = ndarray_obj
    Py_INCREF(ndarray)
    Py_INCREF(elem_dtype)
    return 0

cdef herr_t ndarray2vlen(hid_t src_id,
                         hid_t dst_id,
                         H5T_cdata_t *cdata,
                         size_t nl,
                         size_t buf_stride,
                         size_t bkg_stride,
                         void *buf_i,
                         void *bkg_i,
                         hid_t dxpl) except -1 with gil:
    cdef:
        int command = cdata[0].command
        size_t src_size, dst_size
        TypeID supertype
        TypeID outtype
        int i
        PyObject **pdata = <PyObject **>buf_i
        PyObject *pdata_elem
        char* buf = <char*>buf_i

    if command == H5T_CONV_INIT:
        cdata[0].need_bkg = H5T_BKG_NO
        if not H5Tequal(src_id, H5PY_OBJ) or H5Tget_class(dst_id) != H5T_VLEN:
            return -2
        supertype = typewrap(H5Tget_super(dst_id))
        for i in range(nl):
            # smells a lot
            memcpy(&pdata_elem, pdata+i, sizeof(pdata_elem))
            if supertype != py_create((<cnp.ndarray> pdata_elem).dtype, 1):
                return -2
            if (<cnp.ndarray> pdata_elem).ndim != 1:
                return -2
        log_convert_registered(src_id, dst_id)

    elif command == H5T_CONV_FREE:
        pass

    elif command == H5T_CONV_CONV:
        # If there are no elements to convert, pdata will not point to
        # a valid PyObject*, so bail here to prevent accessing the dtype below
        if nl == 0:
            return 0

        # need to pass element dtype to converter
        pdata_elem = pdata[0]
        supertype = py_create((<cnp.ndarray> pdata_elem).dtype)
        outtype = typewrap(H5Tget_super(dst_id))

        if buf_stride == 0:
            # No explicit stride seems to mean that the elements are packed
            # contiguously in the buffer.  In this case we must be careful
            # not to "stomp on" input elements if the output elements are
            # of a larger size.

            src_size = H5Tget_size(src_id)
            dst_size = H5Tget_size(dst_id)

            if src_size >= dst_size:
                for i in range(nl):
                    conv_ndarray2vlen(buf + (i*src_size), buf + (i*dst_size),
                                      supertype, outtype)
            else:
                for i in range(nl-1, -1, -1):
                    conv_ndarray2vlen(buf + (i*src_size), buf + (i*dst_size),
                                      supertype, outtype)
        else:
            # With explicit strides, we assume that the library knows the
            # alignment better than us.  Therefore we use the given stride
            # offsets exclusively.
            for i in range(nl):
                conv_ndarray2vlen(buf + (i*buf_stride), buf + (i*buf_stride),
                                  supertype, outtype)

    else:
        return -2   # Unrecognized command.  Note this is NOT an exception.

    return 0


cdef int conv_ndarray2vlen(void* ipt,
                           void* opt,
                           TypeID intype,
                           TypeID outtype) except -1:
    cdef:
        PyObject** buf_obj = <PyObject**>ipt
        vlen_t* in_vlen = <vlen_t*>opt
        void* data
        cnp.ndarray ndarray
        size_t len, nbytes
        PyObject* buf_obj0
        Py_buffer view
        void* back_buf = NULL
    try:
        buf_obj0 = buf_obj[0]
        ndarray = <cnp.ndarray> buf_obj0
        len = ndarray.shape[0]
        nbytes = len * max(H5Tget_size(outtype.id), H5Tget_size(intype.id))

        data = emalloc(nbytes)

        PyObject_GetBuffer(ndarray, &view, PyBUF_INDIRECT)
        PyBuffer_ToContiguous(data, &view, view.len, b'C')
        PyBuffer_Release(&view)

        if needs_bkg_buffer(intype.id, outtype.id):
            back_buf = emalloc(H5Tget_size(outtype.id)*len)

        H5Tconvert(intype.id, outtype.id, len, data, back_buf, H5P_DEFAULT)

        in_vlen[0].len = len
        in_vlen[0].ptr = data

    finally:
        free(back_buf)

    return 0

# =============================================================================
# B8 to enum bool routines

cdef herr_t b82boolenum(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                        size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                        void *bkg_i, hid_t dxpl) except -1:
    return 0

cdef herr_t boolenum2b8(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                        size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                        void *bkg_i, hid_t dxpl) except -1:
    return 0

# =============================================================================
# BITFIELD to UINT routines

cdef herr_t bitfield2uint(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                     size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                     void *bkg_i, hid_t dxpl) except -1:
    return 0

cdef herr_t uint2bitfield(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
                     size_t nl, size_t buf_stride, size_t bkg_stride, void *buf_i,
                     void *bkg_i, hid_t dxpl) except -1:
    return 0

# =============================================================================

cpdef int register_converters() except -1:
    cdef:
        hid_t vlstring
        hid_t vlentype
        hid_t pyobj
        hid_t enum
        hid_t boolenum = -1
        int8_t f_value = 0
        int8_t t_value = 1

    vlstring = H5Tcopy(H5T_C_S1)
    H5Tset_size(vlstring, H5T_VARIABLE)

    enum = H5Tenum_create(H5T_STD_I32LE)

    vlentype = H5Tvlen_create(H5T_STD_I32LE)

    pyobj = H5PY_OBJ

    boolenum = H5Tenum_create(H5T_NATIVE_INT8)
    H5Tenum_insert(boolenum, cfg._f_name, &f_value)
    H5Tenum_insert(boolenum, cfg._t_name, &t_value)

    H5Tregister(H5T_PERS_SOFT, "vlen2fixed", vlstring, H5T_C_S1, vlen2fixed)
    H5Tregister(H5T_PERS_SOFT, "fixed2vlen", H5T_C_S1, vlstring, fixed2vlen)

    H5Tregister(H5T_PERS_HARD, "objref2pyref", H5T_STD_REF_OBJ, pyobj, objref2pyref)
    H5Tregister(H5T_PERS_HARD, "pyref2objref", pyobj, H5T_STD_REF_OBJ, pyref2objref)

    H5Tregister(H5T_PERS_HARD, "regref2pyref", H5T_STD_REF_DSETREG, pyobj, regref2pyref)
    H5Tregister(H5T_PERS_HARD, "pyref2regref", pyobj, H5T_STD_REF_DSETREG, pyref2regref)

    H5Tregister(H5T_PERS_SOFT, "enum2int", enum, H5T_STD_I32LE, enum2int)
    H5Tregister(H5T_PERS_SOFT, "int2enum", H5T_STD_I32LE, enum, int2enum)

    H5Tregister(H5T_PERS_SOFT, "vlen2ndarray", vlentype, pyobj, vlen2ndarray)
    H5Tregister(H5T_PERS_SOFT, "ndarray2vlen", pyobj, vlentype, ndarray2vlen)

    H5Tregister(H5T_PERS_HARD, "boolenum2b8", boolenum, H5T_NATIVE_B8, boolenum2b8)
    H5Tregister(H5T_PERS_HARD, "b82boolenum", H5T_NATIVE_B8, boolenum, b82boolenum)

    H5Tregister(H5T_PERS_HARD, "uint82b8", H5T_STD_U8BE, H5T_STD_B8BE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b82uint8", H5T_STD_B8BE, H5T_STD_U8BE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint82b8", H5T_STD_U8LE, H5T_STD_B8LE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b82uint8", H5T_STD_B8LE, H5T_STD_U8LE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint162b16", H5T_STD_U16BE, H5T_STD_B16BE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b162uint16", H5T_STD_B16BE, H5T_STD_U16BE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint162b16", H5T_STD_U16LE, H5T_STD_B16LE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b162uint16", H5T_STD_B16LE, H5T_STD_U16LE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint322b32", H5T_STD_U32BE, H5T_STD_B32BE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b322uint32", H5T_STD_B32BE, H5T_STD_U32BE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint322b32", H5T_STD_U32LE, H5T_STD_B32LE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b322uint32", H5T_STD_B32LE, H5T_STD_U32LE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint642b64", H5T_STD_U64BE, H5T_STD_B64BE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b642uint64", H5T_STD_B64BE, H5T_STD_U64BE, bitfield2uint)

    H5Tregister(H5T_PERS_HARD, "uint642b64", H5T_STD_U64LE, H5T_STD_B64LE, uint2bitfield)
    H5Tregister(H5T_PERS_HARD, "b642uint64", H5T_STD_B64LE, H5T_STD_U64LE, bitfield2uint)

    H5Tregister(H5T_PERS_SOFT, "vlen2str", vlstring, pyobj, vlen2str)
    H5Tregister(H5T_PERS_SOFT, "str2vlen", pyobj, vlstring, str2vlen)

    H5Tclose(vlstring)
    H5Tclose(vlentype)
    H5Tclose(enum)
    H5Tclose(boolenum)

    return 0

cpdef int unregister_converters() except -1:

    H5Tunregister(H5T_PERS_SOFT, "vlen2str", -1, -1, vlen2str)
    H5Tunregister(H5T_PERS_SOFT, "str2vlen", -1, -1, str2vlen)

    H5Tunregister(H5T_PERS_SOFT, "vlen2fixed", -1, -1, vlen2fixed)
    H5Tunregister(H5T_PERS_SOFT, "fixed2vlen", -1, -1, fixed2vlen)

    H5Tunregister(H5T_PERS_HARD, "objref2pyref", -1, -1, objref2pyref)
    H5Tunregister(H5T_PERS_HARD, "pyref2objref", -1, -1, pyref2objref)

    H5Tunregister(H5T_PERS_HARD, "regref2pyref", -1, -1, regref2pyref)
    H5Tunregister(H5T_PERS_HARD, "pyref2regref", -1, -1, pyref2regref)

    H5Tunregister(H5T_PERS_SOFT, "enum2int", -1, -1, enum2int)
    H5Tunregister(H5T_PERS_SOFT, "int2enum", -1, -1, int2enum)

    H5Tunregister(H5T_PERS_SOFT, "vlen2ndarray", -1, -1, vlen2ndarray)
    H5Tunregister(H5T_PERS_SOFT, "ndarray2vlen", -1, -1, ndarray2vlen)

    H5Tunregister(H5T_PERS_HARD, "boolenum2b8", -1, -1, boolenum2b8)
    H5Tunregister(H5T_PERS_HARD, "b82boolenum", -1, -1, b82boolenum)

    # Pass an empty string to unregister all methods that use these functions
    H5Tunregister(H5T_PERS_HARD, "", -1, -1, uint2bitfield)
    H5Tunregister(H5T_PERS_HARD, "", -1, -1, bitfield2uint)

    return 0
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1705055674.0
h5py-3.13.0/h5py/_errors.pxd0000644000175000017500000004736714550212672016463 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

include "config.pxi"

from .api_types_hdf5 cimport *

# Auto-set exception.  Returns 1 if exception set, 0 if no HDF5 error found.
cdef int set_exception() except -1

cdef extern from "hdf5.h":

    IF HDF5_VERSION < (1, 13, 0):
        ctypedef enum H5E_major_t:
            H5E_NONE_MAJOR       = 0,   # special zero, no error
            H5E_ARGS,                   # invalid arguments to routine
            H5E_RESOURCE,               # resource unavailable
            H5E_INTERNAL,               # Internal error (too specific to document)
            H5E_FILE,                   # file Accessibility
            H5E_IO,                     # Low-level I/O
            H5E_FUNC,                   # function Entry/Exit
            H5E_ATOM,                   # object Atom
            H5E_CACHE,                  # object Cache
            H5E_BTREE,                  # B-Tree Node
            H5E_SYM,                    # symbol Table
            H5E_HEAP,                   # Heap
            H5E_OHDR,                   # object Header
            H5E_DATATYPE,               # Datatype
            H5E_DATASPACE,              # Dataspace
            H5E_DATASET,                # Dataset
            H5E_STORAGE,                # data storage
            H5E_PLIST,                  # Property lists
            H5E_ATTR,                   # Attribute
            H5E_PLINE,                  # Data filters
            H5E_EFL,                    # External file list
            H5E_REFERENCE,              # References
            H5E_VFL,                    # Virtual File Layer
            H5E_TST,                    # Ternary Search Trees
            H5E_RS,                     # Reference Counted Strings
            H5E_ERROR,                  # Error API
            H5E_SLIST                   # Skip Lists

        ctypedef enum H5E_minor_t:
            # Generic low-level file I/O errors
            H5E_SEEKERROR      # Seek failed
            H5E_READERROR      # Read failed
            H5E_WRITEERROR     # Write failed
            H5E_CLOSEERROR     # Close failed
            H5E_OVERFLOW       # Address overflowed
            H5E_FCNTL          # File control (fcntl) failed

            # Resource errors
            H5E_NOSPACE        # No space available for allocation
            H5E_CANTALLOC      # Can't allocate space
            H5E_CANTCOPY       # Unable to copy object
            H5E_CANTFREE       # Unable to free object
            H5E_ALREADYEXISTS  # Object already exists
            H5E_CANTLOCK       # Unable to lock object
            H5E_CANTUNLOCK     # Unable to unlock object
            H5E_CANTGC         # Unable to garbage collect
            H5E_CANTGETSIZE    # Unable to compute size
            H5E_OBJOPEN        # Object is already open

            # Heap errors
            H5E_CANTRESTORE    # Can't restore condition
            H5E_CANTCOMPUTE    # Can't compute value
            H5E_CANTEXTEND     # Can't extend heap's space
            H5E_CANTATTACH     # Can't attach object
            H5E_CANTUPDATE     # Can't update object
            H5E_CANTOPERATE    # Can't operate on object

            # Function entry/exit interface errors
            H5E_CANTINIT       # Unable to initialize object
            H5E_ALREADYINIT    # Object already initialized
            H5E_CANTRELEASE    # Unable to release object

            # Property list errors
            H5E_CANTGET        # Can't get value
            H5E_CANTSET        # Can't set value
            H5E_DUPCLASS       # Duplicate class name in parent class

            # Free space errors
            H5E_CANTMERGE      # Can't merge objects
            H5E_CANTREVIVE     # Can't revive object
            H5E_CANTSHRINK     # Can't shrink container

            # Object header related errors
            H5E_LINKCOUNT      # Bad object header link
            H5E_VERSION        # Wrong version number
            H5E_ALIGNMENT      # Alignment error
            H5E_BADMESG        # Unrecognized message
            H5E_CANTDELETE     # Can't delete message
            H5E_BADITER        # Iteration failed
            H5E_CANTPACK       # Can't pack messages
            H5E_CANTRESET      # Can't reset object count

            # System level errors
            H5E_SYSERRSTR      # System error message

            # I/O pipeline errors
            H5E_NOFILTER       # Requested filter is not available
            H5E_CALLBACK       # Callback failed
            H5E_CANAPPLY       # Error from filter 'can apply' callback
            H5E_SETLOCAL       # Error from filter 'set local' callback
            H5E_NOENCODER      # Filter present but encoding disabled
            H5E_CANTFILTER     # Filter operation failed

            # Group related errors
            H5E_CANTOPENOBJ    # Can't open object
            H5E_CANTCLOSEOBJ   # Can't close object
            H5E_COMPLEN        # Name component is too long
            H5E_PATH           # Problem with path to object

            # No error
            H5E_NONE_MINOR     # No error

            # File accessibility errors
            H5E_FILEEXISTS     # File already exists
            H5E_FILEOPEN       # File already open
            H5E_CANTCREATE     # Unable to create file
            H5E_CANTOPENFILE   # Unable to open file
            H5E_CANTCLOSEFILE  # Unable to close file
            H5E_NOTHDF5        # Not an HDF5 file
            H5E_BADFILE        # Bad file ID accessed
            H5E_TRUNCATED      # File has been truncated
            H5E_MOUNT          # File mount error

            # Object atom related errors
            H5E_BADATOM        # Unable to find atom information (already closed?)
            H5E_BADGROUP       # Unable to find ID group information
            H5E_CANTREGISTER   # Unable to register new atom
            H5E_CANTINC        # Unable to increment reference count
            H5E_CANTDEC        # Unable to decrement reference count
            H5E_NOIDS          # Out of IDs for group

            # Cache related errors
            H5E_CANTFLUSH      # Unable to flush data from cache
            H5E_CANTSERIALIZE  # Unable to serialize data from cache
            H5E_CANTLOAD       # Unable to load metadata into cache
            H5E_PROTECT        # Protected metadata error
            H5E_NOTCACHED      # Metadata not currently cached
            H5E_SYSTEM         # Internal error detected
            H5E_CANTINS        # Unable to insert metadata into cache
            H5E_CANTRENAME     # Unable to rename metadata
            H5E_CANTPROTECT    # Unable to protect metadata
            H5E_CANTUNPROTECT  # Unable to unprotect metadata
            H5E_CANTPIN        # Unable to pin cache entry
            H5E_CANTUNPIN      # Unable to un-pin cache entry
            H5E_CANTMARKDIRTY  # Unable to mark a pinned entry as dirty
            H5E_CANTDIRTY      # Unable to mark metadata as dirty
            H5E_CANTEXPUNGE    # Unable to expunge a metadata cache entry
            H5E_CANTRESIZE     # Unable to resize a metadata cache entry

            # Link related errors
            H5E_TRAVERSE       # Link traversal failure
            H5E_NLINKS         # Too many soft links in path
            H5E_NOTREGISTERED  # Link class not registered
            H5E_CANTMOVE       # Move callback returned error
            H5E_CANTSORT       # Can't sort objects

            # Parallel MPI errors
            H5E_MPI           # Some MPI function failed
            H5E_MPIERRSTR     # MPI Error String
            H5E_CANTRECV      # Can't receive data

            # Dataspace errors
            H5E_CANTCLIP      # Can't clip hyperslab region
            H5E_CANTCOUNT     # Can't count elements
            H5E_CANTSELECT    # Can't select hyperslab
            H5E_CANTNEXT      # Can't move to next iterator location
            H5E_BADSELECT     # Invalid selection
            H5E_CANTCOMPARE   # Can't compare objects

            # Argument errors
            H5E_UNINITIALIZED  # Information is uninitialized
            H5E_UNSUPPORTED    # Feature is unsupported
            H5E_BADTYPE        # Inappropriate type
            H5E_BADRANGE       # Out of range
            H5E_BADVALUE       # Bad value

            # B-tree related errors
            H5E_NOTFOUND            # Object not found
            H5E_EXISTS              # Object already exists
            H5E_CANTENCODE          # Unable to encode value
            H5E_CANTDECODE          # Unable to decode value
            H5E_CANTSPLIT           # Unable to split node
            H5E_CANTREDISTRIBUTE    # Unable to redistribute records
            H5E_CANTSWAP            # Unable to swap records
            H5E_CANTINSERT          # Unable to insert object
            H5E_CANTLIST            # Unable to list node
            H5E_CANTMODIFY          # Unable to modify record
            H5E_CANTREMOVE          # Unable to remove object

            # Datatype conversion errors
            H5E_CANTCONVERT         # Can't convert datatypes
            H5E_BADSIZE             # Bad size for object
    ELSE:
        ctypedef enum H5E_major_t:
            H5E_NONE_MAJOR       = 0,   # special zero, no error
            H5E_ARGS,                   # invalid arguments to routine
            H5E_RESOURCE,               # resource unavailable
            H5E_INTERNAL,               # Internal error (too specific to document)
            H5E_FILE,                   # file Accessibility
            H5E_IO,                     # Low-level I/O
            H5E_FUNC,                   # function Entry/Exit
            H5E_ID,                     # object ID
            H5E_CACHE,                  # object Cache
            H5E_BTREE,                  # B-Tree Node
            H5E_SYM,                    # symbol Table
            H5E_HEAP,                   # Heap
            H5E_OHDR,                   # object Header
            H5E_DATATYPE,               # Datatype
            H5E_DATASPACE,              # Dataspace
            H5E_DATASET,                # Dataset
            H5E_STORAGE,                # data storage
            H5E_PLIST,                  # Property lists
            H5E_ATTR,                   # Attribute
            H5E_PLINE,                  # Data filters
            H5E_EFL,                    # External file list
            H5E_REFERENCE,              # References
            H5E_VFL,                    # Virtual File Layer
            H5E_TST,                    # Ternary Search Trees
            H5E_RS,                     # Reference Counted Strings
            H5E_ERROR,                  # Error API
            H5E_SLIST                   # Skip Lists

        ctypedef enum H5E_minor_t:
            # Generic low-level file I/O errors
            H5E_SEEKERROR      # Seek failed
            H5E_READERROR      # Read failed
            H5E_WRITEERROR     # Write failed
            H5E_CLOSEERROR     # Close failed
            H5E_OVERFLOW       # Address overflowed
            H5E_FCNTL          # File control (fcntl) failed

            # Resource errors
            H5E_NOSPACE        # No space available for allocation
            H5E_CANTALLOC      # Can't allocate space
            H5E_CANTCOPY       # Unable to copy object
            H5E_CANTFREE       # Unable to free object
            H5E_ALREADYEXISTS  # Object already exists
            H5E_CANTLOCK       # Unable to lock object
            H5E_CANTUNLOCK     # Unable to unlock object
            H5E_CANTGC         # Unable to garbage collect
            H5E_CANTGETSIZE    # Unable to compute size
            H5E_OBJOPEN        # Object is already open

            # Heap errors
            H5E_CANTRESTORE    # Can't restore condition
            H5E_CANTCOMPUTE    # Can't compute value
            H5E_CANTEXTEND     # Can't extend heap's space
            H5E_CANTATTACH     # Can't attach object
            H5E_CANTUPDATE     # Can't update object
            H5E_CANTOPERATE    # Can't operate on object

            # Function entry/exit interface errors
            H5E_CANTINIT       # Unable to initialize object
            H5E_ALREADYINIT    # Object already initialized
            H5E_CANTRELEASE    # Unable to release object

            # Property list errors
            H5E_CANTGET        # Can't get value
            H5E_CANTSET        # Can't set value
            H5E_DUPCLASS       # Duplicate class name in parent class

            # Free space errors
            H5E_CANTMERGE      # Can't merge objects
            H5E_CANTREVIVE     # Can't revive object
            H5E_CANTSHRINK     # Can't shrink container

            # Object header related errors
            H5E_LINKCOUNT      # Bad object header link
            H5E_VERSION        # Wrong version number
            H5E_ALIGNMENT      # Alignment error
            H5E_BADMESG        # Unrecognized message
            H5E_CANTDELETE     # Can't delete message
            H5E_BADITER        # Iteration failed
            H5E_CANTPACK       # Can't pack messages
            H5E_CANTRESET      # Can't reset object count

            # System level errors
            H5E_SYSERRSTR      # System error message

            # I/O pipeline errors
            H5E_NOFILTER       # Requested filter is not available
            H5E_CALLBACK       # Callback failed
            H5E_CANAPPLY       # Error from filter 'can apply' callback
            H5E_SETLOCAL       # Error from filter 'set local' callback
            H5E_NOENCODER      # Filter present but encoding disabled
            H5E_CANTFILTER     # Filter operation failed

            # Group related errors
            H5E_CANTOPENOBJ    # Can't open object
            H5E_CANTCLOSEOBJ   # Can't close object
            H5E_COMPLEN        # Name component is too long
            H5E_PATH           # Problem with path to object

            # No error
            H5E_NONE_MINOR     # No error

            # File accessibility errors
            H5E_FILEEXISTS     # File already exists
            H5E_FILEOPEN       # File already open
            H5E_CANTCREATE     # Unable to create file
            H5E_CANTOPENFILE   # Unable to open file
            H5E_CANTCLOSEFILE  # Unable to close file
            H5E_NOTHDF5        # Not an HDF5 file
            H5E_BADFILE        # Bad file ID accessed
            H5E_TRUNCATED      # File has been truncated
            H5E_MOUNT          # File mount error

            # Object ID related errors
            H5E_BADID          # Unable to find ID information (already closed?)
            H5E_BADGROUP       # Unable to find ID group information
            H5E_CANTREGISTER   # Unable to register new ID
            H5E_CANTINC        # Unable to increment reference count
            H5E_CANTDEC        # Unable to decrement reference count
            H5E_NOIDS          # Out of IDs for group

            # Cache related errors
            H5E_CANTFLUSH      # Unable to flush data from cache
            H5E_CANTSERIALIZE  # Unable to serialize data from cache
            H5E_CANTLOAD       # Unable to load metadata into cache
            H5E_PROTECT        # Protected metadata error
            H5E_NOTCACHED      # Metadata not currently cached
            H5E_SYSTEM         # Internal error detected
            H5E_CANTINS        # Unable to insert metadata into cache
            H5E_CANTRENAME     # Unable to rename metadata
            H5E_CANTPROTECT    # Unable to protect metadata
            H5E_CANTUNPROTECT  # Unable to unprotect metadata
            H5E_CANTPIN        # Unable to pin cache entry
            H5E_CANTUNPIN      # Unable to un-pin cache entry
            H5E_CANTMARKDIRTY  # Unable to mark a pinned entry as dirty
            H5E_CANTDIRTY      # Unable to mark metadata as dirty
            H5E_CANTEXPUNGE    # Unable to expunge a metadata cache entry
            H5E_CANTRESIZE     # Unable to resize a metadata cache entry

            # Link related errors
            H5E_TRAVERSE       # Link traversal failure
            H5E_NLINKS         # Too many soft links in path
            H5E_NOTREGISTERED  # Link class not registered
            H5E_CANTMOVE       # Move callback returned error
            H5E_CANTSORT       # Can't sort objects

            # Parallel MPI errors
            H5E_MPI           # Some MPI function failed
            H5E_MPIERRSTR     # MPI Error String
            H5E_CANTRECV      # Can't receive data

            # Dataspace errors
            H5E_CANTCLIP      # Can't clip hyperslab region
            H5E_CANTCOUNT     # Can't count elements
            H5E_CANTSELECT    # Can't select hyperslab
            H5E_CANTNEXT      # Can't move to next iterator location
            H5E_BADSELECT     # Invalid selection
            H5E_CANTCOMPARE   # Can't compare objects

            # Argument errors
            H5E_UNINITIALIZED  # Information is uninitialized
            H5E_UNSUPPORTED    # Feature is unsupported
            H5E_BADTYPE        # Inappropriate type
            H5E_BADRANGE       # Out of range
            H5E_BADVALUE       # Bad value

            # B-tree related errors
            H5E_NOTFOUND            # Object not found
            H5E_EXISTS              # Object already exists
            H5E_CANTENCODE          # Unable to encode value
            H5E_CANTDECODE          # Unable to decode value
            H5E_CANTSPLIT           # Unable to split node
            H5E_CANTREDISTRIBUTE    # Unable to redistribute records
            H5E_CANTSWAP            # Unable to swap records
            H5E_CANTINSERT          # Unable to insert object
            H5E_CANTLIST            # Unable to list node
            H5E_CANTMODIFY          # Unable to modify record
            H5E_CANTREMOVE          # Unable to remove object

            # Datatype conversion errors
            H5E_CANTCONVERT         # Can't convert datatypes
            H5E_BADSIZE             # Bad size for object

    cdef enum H5E_direction_t:
        H5E_WALK_UPWARD    = 0      # begin deep, end at API function
        H5E_WALK_DOWNWARD = 1       # begin at API function, end deep

    ctypedef struct H5E_error_t:
        H5E_major_t     maj_num     #  major error number
        H5E_minor_t     min_num     #  minor error number
        char    *func_name          #  function in which error occurred
        char    *file_name          #  file in which error occurred
        unsigned    line            #  line in file where error occurs
        char    *desc               #  optional supplied description

    int H5E_DEFAULT  # ID for default error stack

    char      *H5Eget_major(H5E_major_t n)
    char      *H5Eget_minor(H5E_minor_t n)
    herr_t    H5Eclear(hid_t estack_id) except *

    ctypedef herr_t (*H5E_auto_t)(void *client_data)
    herr_t    H5Eset_auto(hid_t estack_id, H5E_auto_t func, void *client_data) nogil
    herr_t    H5Eget_auto(hid_t estack_id, H5E_auto_t *func, void** client_data)

    herr_t    H5Eprint(hid_t estack_id, void *stream)

    ctypedef herr_t (*H5E_walk_t)(unsigned int n, const H5E_error_t *err_desc, void* client_data)
    herr_t    H5Ewalk(hid_t estack_id, H5E_direction_t direction, H5E_walk_t func, void* client_data)

# --- Functions for managing the HDF5 error callback mechanism ---

ctypedef struct err_cookie:
    # Defines the error handler state (callback and callback data)
    H5E_auto_t func
    void *data

# Set (via H5Eset_auto) the HDF5 error handler for this thread.  Returns
# the old (presently installed) handler.
cdef err_cookie set_error_handler(err_cookie handler)

# Set the default error handler set by silence_errors/unsilence_errors
cdef void set_default_error_handler() noexcept nogil
h5py-3.13.0/h5py/_errors.pyx
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

# Python-style minor error classes.  If the minor error code matches an entry
# in this dict, the generated exception will be used.

include "config.pxi"

from cpython cimport PyErr_Occurred, PyErr_SetObject, PyErr_SetString
import re


_minor_table = {
    H5E_SEEKERROR:      OSError,    # Seek failed
    H5E_READERROR:      OSError,    # Read failed
    H5E_WRITEERROR:     OSError,    # Write failed
    H5E_CLOSEERROR:     OSError,    # Close failed
    H5E_OVERFLOW:       OSError,    # Address overflowed
    H5E_FCNTL:          OSError,    # File control (fcntl) failed

    H5E_FILEEXISTS:     OSError,    # File already exists
    H5E_FILEOPEN:       OSError,    # File already open
    H5E_CANTCREATE:     OSError,    # Unable to create file
    H5E_CANTOPENFILE:   OSError,    # Unable to open file
    H5E_CANTCLOSEFILE:  OSError,    # Unable to close file
    H5E_NOTHDF5:        OSError,    # Not an HDF5 file
    H5E_BADFILE:        ValueError, # Bad file ID accessed
    H5E_TRUNCATED:      OSError,    # File has been truncated
    H5E_MOUNT:          OSError,    # File mount error

    H5E_NOFILTER:       OSError,    # Requested filter is not available
    H5E_CALLBACK:       OSError,    # Callback failed
    H5E_CANAPPLY:       OSError,    # Error from filter 'can apply' callback
    H5E_SETLOCAL:       OSError,    # Error from filter 'set local' callback
    H5E_NOENCODER:      OSError,    # Filter present but encoding disabled

    H5E_BADGROUP:       ValueError,  # Unable to find ID group information
    H5E_BADSELECT:      ValueError,  # Invalid selection (hyperslabs)
    H5E_UNINITIALIZED:  ValueError,  # Information is uninitialized
    H5E_UNSUPPORTED:    NotImplementedError,    # Feature is unsupported

    H5E_NOTFOUND:       KeyError,    # Object not found
    H5E_CANTINSERT:     ValueError,  # Unable to insert object

    H5E_BADTYPE:        TypeError,   # Inappropriate type
    H5E_BADRANGE:       ValueError,  # Out of range
    H5E_BADVALUE:       ValueError,  # Bad value

    H5E_EXISTS:         ValueError,  # Object already exists
    H5E_ALREADYEXISTS:  ValueError,  # Object already exists, part II
    H5E_CANTCONVERT:    TypeError,   # Can't convert datatypes

    H5E_CANTDELETE:     KeyError,    # Can't delete message

    H5E_CANTOPENOBJ:    KeyError,

    H5E_CANTMOVE:       ValueError,  # Can't move a link
}

# "Fudge" table to accommodate annoying inconsistencies in HDF5's use
# of the minor error codes.  If a (major, minor) entry appears here,
# it will override any entry in the minor error table.
_exact_table = {
    (H5E_CACHE, H5E_BADVALUE):      OSError,    # obj create w/o write intent
    (H5E_RESOURCE, H5E_CANTINIT):   OSError,    # obj create w/o write intent
    (H5E_INTERNAL, H5E_SYSERRSTR):  OSError,    # e.g. wrong file permissions
    (H5E_DATATYPE, H5E_CANTINIT):   TypeError,  # No conversion path
    (H5E_DATASET, H5E_CANTINIT):    ValueError, # bad param for dataset setup
    (H5E_ARGS, H5E_CANTINIT):       TypeError,  # Illegal operation on object
    (H5E_ARGS, H5E_BADTYPE):        ValueError, # Invalid location in file
    (H5E_REFERENCE, H5E_CANTINIT):  ValueError, # Dereferencing invalid ref

    # due to changes to H5F.c:H5Fstart_swmr_write
    (H5E_FILE, H5E_CANTCONVERT):    ValueError, # Invalid file format
}

IF HDF5_VERSION > (1, 12, 0):
    _exact_table[(H5E_DATASET, H5E_CANTCREATE)] = ValueError  # bad param for dataset setup

IF HDF5_VERSION < (1, 13, 0):
    _minor_table[H5E_BADATOM] = ValueError  # Unable to find atom information (already closed?)
    _exact_table[(H5E_SYM, H5E_CANTINIT)] = ValueError  # Object already exists/1.8
ELSE:
    _minor_table[H5E_BADID] = ValueError  # Unable to find ID information
    _exact_table[(H5E_SYM, H5E_CANTCREATE)] = ValueError  # Object already exists
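
# Illustration of the lookup order used by set_exception() below: the exact
# (major, minor) entry, when present, overrides the minor-only default.  For
# example, H5E_BADTYPE alone maps to TypeError, but the pair
# (H5E_ARGS, H5E_BADTYPE) maps to ValueError, so ValueError wins there.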

cdef struct err_data_t:
    H5E_error_t err
    int n

cdef herr_t walk_cb(unsigned int n, const H5E_error_t *desc, void *e) noexcept nogil:

    cdef err_data_t *ee = e

    ee[0].err.maj_num = desc[0].maj_num
    ee[0].err.min_num = desc[0].min_num
    ee[0].err.desc = desc[0].desc
    ee[0].n = n

cdef int set_exception() except -1:

    cdef err_data_t err
    cdef const char *desc = NULL          # Note: HDF5 forbids freeing these
    cdef const char *desc_bottom = NULL

    if PyErr_Occurred():
        # An exception was already set, e.g. by a Python callback within the
        # HDF5 call. Skip translating the HDF5 error, and let the Python
        # exception propagate.
        return 1

    # First, extract the major & minor error codes from the top of the
    # stack, along with the top-level error description

    err.n = -1

    if H5Ewalk(H5E_DEFAULT, H5E_WALK_UPWARD, walk_cb, &err) < 0:
        raise RuntimeError("Failed to walk error stack")

    if err.n < 0:   # No HDF5 exception information found
        return 0

    eclass = _minor_table.get(err.err.min_num, RuntimeError)
    eclass = _exact_table.get((err.err.maj_num, err.err.min_num), eclass)

    desc = err.err.desc
    if desc is NULL:
        raise RuntimeError("Failed to extract top-level error description")

    # Second, retrieve the bottom-most error description for additional info

    err.n = -1

    if H5Ewalk(H5E_DEFAULT, H5E_WALK_DOWNWARD, walk_cb, &err) < 0:
        raise RuntimeError("Failed to walk error stack")

    desc_bottom = err.err.desc
    if desc_bottom is NULL:
        raise RuntimeError("Failed to extract bottom-level error description")

    msg = b"%b (%b)" % (bytes(desc).capitalize(), bytes(desc_bottom))

    # Finally, set the exception.  We do this with a Python C function
    # so that the traceback doesn't point here.

    m = re.search(rb'errno\s*=\s*(\d+)', desc_bottom)
    if m and eclass is OSError:
        # Python can automatically create an appropriate OSError subclass
        # (e.g. FileNotFoundError) given the POSIX errno (e.g. ENOENT)
        errno = int(m.group(1))
        PyErr_SetObject(OSError, (errno, msg.decode('utf-8', 'replace')))
    else:
        PyErr_SetString(eclass, msg)

    return 1
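
# Illustrative note on the errno branch above (not part of this module's API):
# when the bottom-level HDF5 message contains e.g. "errno = 2", Python builds
# the matching OSError subclass.  Roughly:
#
#   >>> import h5py                              # doctest: +SKIP
#   >>> h5py.File('/no/such/file.h5', 'r')       # doctest: +SKIP
#   Traceback (most recent call last):
#       ...
#   FileNotFoundError: [Errno 2] Unable to ... (message abridged)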


cdef extern from "stdio.h":
    void *stderr

cdef err_cookie _error_handler  # Store error handler used by h5py
_error_handler.func = NULL
_error_handler.data = NULL

cdef void set_default_error_handler() noexcept nogil:
    """Set h5py's current default error handler"""
    H5Eset_auto(H5E_DEFAULT, _error_handler.func, _error_handler.data)

def silence_errors():
    """ Disable HDF5's automatic error printing in this thread """
    _error_handler.func = NULL
    _error_handler.data = NULL
    set_default_error_handler()

def unsilence_errors():
    """ Re-enable HDF5's automatic error printing in this thread """
    _error_handler.func =  H5Eprint
    _error_handler.data = stderr
    set_default_error_handler()
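
# Hedged usage sketch for the two helpers above (exposed to Python code as
# h5py._errors.silence_errors / unsilence_errors):
#
#   >>> from h5py import _errors                 # doctest: +SKIP
#   >>> _errors.silence_errors()    # stop HDF5 printing its error stack
#   >>> _errors.unsilence_errors()  # restore printing to stderr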


cdef err_cookie set_error_handler(err_cookie handler):
    # Note: exceptions here will be printed instead of raised.

    cdef err_cookie old_handler

    if H5Eget_auto(H5E_DEFAULT, &old_handler.func, &old_handler.data) < 0:
        raise RuntimeError("Failed to retrieve old handler")

    if H5Eset_auto(H5E_DEFAULT, handler.func, handler.data) < 0:
        raise RuntimeError("Failed to install new handler")

    return old_handler
h5py-3.13.0/h5py/_hl/
h5py-3.13.0/h5py/_hl/__init__.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    This subpackage implements the high-level interface for h5py.

    Don't manually import things from here; the public API lives directly
    in the top-level package namespace.
"""
h5py-3.13.0/h5py/_hl/attrs.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements high-level operations for attributes.

    Provides the AttributeManager class, available on high-level objects
    as .attrs.
"""

import numpy
import uuid

from .. import h5, h5s, h5t, h5a, h5p
from . import base
from .base import phil, with_phil, Empty, is_empty_dataspace, product
from .datatype import Datatype


class AttributeManager(base.MutableMappingHDF5, base.CommonStateObject):

    """
        Allows dictionary-style access to an HDF5 object's attributes.

        These are created exclusively by the library and are available as
        a Python attribute at <obj>.attrs

        Like Group objects, attributes provide a minimal dictionary-
        style interface.  Anything which can be reasonably converted to a
        Numpy array or Numpy scalar can be stored.

        Attributes are automatically created on assignment with the
        syntax <obj>.attrs[name] = value, with the HDF5 type automatically
        deduced from the value.  Existing attributes are overwritten.

        To modify an existing attribute while preserving its type, use the
        method modify().  To specify an attribute of a particular type and
        shape, use create().
    """

    def __init__(self, parent):
        """ Private constructor.
        """
        self._id = parent.id

    @with_phil
    def __getitem__(self, name):
        """ Read the value of an attribute.
        """
        attr = h5a.open(self._id, self._e(name))
        shape = attr.shape

        # shape is None for empty dataspaces
        if shape is None:
            return Empty(attr.dtype)

        dtype = attr.dtype

        # Do this first, as we'll be fiddling with the dtype for top-level
        # array types
        htype = h5t.py_create(dtype)

        # NumPy doesn't support top-level array types, so we have to "fake"
        # the correct type and shape for the array.  For example, consider
        # attr.shape == (5,) and attr.dtype == '(3,)f'. Then:
        if dtype.subdtype is not None:
            subdtype, subshape = dtype.subdtype
            shape = attr.shape + subshape   # (5, 3)
            dtype = subdtype                # 'f'

        arr = numpy.zeros(shape, dtype=dtype, order='C')
        attr.read(arr, mtype=htype)

        string_info = h5t.check_string_dtype(dtype)
        if string_info and (string_info.length is None):
            # Vlen strings: convert bytes to Python str
            arr = numpy.array([
                b.decode('utf-8', 'surrogateescape') for b in arr.flat
            ], dtype=dtype).reshape(arr.shape)

        if arr.ndim == 0:
            return arr[()]
        return arr

    def get_id(self, name):
        """Get a low-level AttrID object for the named attribute.
        """
        return h5a.open(self._id, self._e(name))

    @with_phil
    def __setitem__(self, name, value):
        """ Set a new attribute, overwriting any existing attribute.

        The type and shape of the attribute are determined from the data.  To
        use a specific type or shape, or to preserve the type of an attribute,
        use the methods create() and modify().
        """
        self.create(name, data=value)

    @with_phil
    def __delitem__(self, name):
        """ Delete an attribute (which must already exist). """
        h5a.delete(self._id, self._e(name))

    def create(self, name, data, shape=None, dtype=None):
        """ Create a new attribute, overwriting any existing attribute.

        name
            Name of the new attribute (required)
        data
            An array to initialize the attribute (required)
        shape
            Shape of the attribute.  Overrides data.shape if both are
            given, in which case the total number of points must be unchanged.
        dtype
            Data type of the attribute.  Overrides data.dtype if both
            are given.
        """
        name = self._e(name)

        with phil:
            # First, make sure we have a NumPy array.  We leave the data type
            # conversion for HDF5 to perform.
            if not isinstance(data, Empty):
                data = base.array_for_new_object(data, specified_dtype=dtype)

            if shape is None:
                shape = data.shape
            elif isinstance(shape, int):
                shape = (shape,)

            use_htype = None    # If a committed type is given, we must use it
                                # in the call to h5a.create.

            if isinstance(dtype, Datatype):
                use_htype = dtype.id
                dtype = dtype.dtype
            elif dtype is None:
                dtype = data.dtype
            else:
                dtype = numpy.dtype(dtype) # In case a string, e.g. 'i8' is passed

            original_dtype = dtype  # We'll need this for top-level array types

            # Where a top-level array type is requested, we have to do some
            # fiddling around to present the data as a smaller array of
            # subarrays.
            if dtype.subdtype is not None:

                subdtype, subshape = dtype.subdtype

                # Make sure the subshape matches the last N axes' sizes.
                if shape[-len(subshape):] != subshape:
                    raise ValueError("Array dtype shape %s is incompatible with data shape %s" % (subshape, shape))

                # New "advertised" shape and dtype
                shape = shape[0:len(shape)-len(subshape)]
                dtype = subdtype

            # Not an array type; make sure to check the number of elements
            # is compatible, and reshape if needed.
            else:

                if shape is not None and product(shape) != product(data.shape):
                    raise ValueError("Shape of new attribute conflicts with shape of data")

                if shape != data.shape:
                    data = data.reshape(shape)

            # We need this to handle special string types.
            if not isinstance(data, Empty):
                data = numpy.asarray(data, dtype=dtype)

            # Make HDF5 datatype and dataspace for the H5A calls
            if use_htype is None:
                htype = h5t.py_create(original_dtype, logical=True)
                htype2 = h5t.py_create(original_dtype)  # Must be bit-for-bit representation rather than logical
            else:
                htype = use_htype
                htype2 = None

            if isinstance(data, Empty):
                space = h5s.create(h5s.NULL)
            else:
                space = h5s.create_simple(shape)

            # For a long time, h5py would create attributes with a random name
            # and then rename them, imitating how you can atomically replace
            # a file in a filesystem. But HDF5 does not offer atomic replacement
            # (you have to delete the existing attribute first), and renaming
            # exposes some bugs - see https://github.com/h5py/h5py/issues/1385
            # So we've gone back to the simpler delete & recreate model.
            if h5a.exists(self._id, name):
                h5a.delete(self._id, name)

            attr = h5a.create(self._id, name, htype, space)
            try:
                if not isinstance(data, Empty):
                    attr.write(data, mtype=htype2)
            except:
                attr.close()
                h5a.delete(self._id, name)
                raise
            attr.close()

    def modify(self, name, value):
        """ Change the value of an attribute while preserving its type.

        Differs from __setitem__ in that if the attribute already exists, its
        type is preserved.  This can be very useful for interacting with
        externally generated files.

        If the attribute doesn't exist, it will be automatically created.
        """
        with phil:
            if name not in self:
                self[name] = value
            else:
                attr = h5a.open(self._id, self._e(name))

                if is_empty_dataspace(attr):
                    raise OSError("Empty attributes can't be modified")

                # If the input data is already an array, let HDF5 do the conversion.
                # If it's a list or similar, don't make numpy guess a dtype for it.
                dt = None if isinstance(value, numpy.ndarray) else attr.dtype
                value = numpy.asarray(value, order='C', dtype=dt)

                # Allow the case of () <-> (1,)
                if (value.shape != attr.shape) and not \
                   (value.size == 1 and product(attr.shape) == 1):
                    raise TypeError("Shape of data is incompatible with existing attribute")
                attr.write(value)

    @with_phil
    def __len__(self):
        """ Number of attributes attached to the object. """
        # I expect we will not have more than 2**32 attributes
        return h5a.get_num_attrs(self._id)

    def __iter__(self):
        """ Iterate over the names of attributes. """
        with phil:

            attrlist = []
            def iter_cb(name, *args):
                """ Callback to gather attribute names """
                attrlist.append(self._d(name))

            cpl = self._id.get_create_plist()
            crt_order = cpl.get_attr_creation_order()
            cpl.close()
            if crt_order & h5p.CRT_ORDER_TRACKED:
                idx_type = h5.INDEX_CRT_ORDER
            else:
                idx_type = h5.INDEX_NAME

            h5a.iterate(self._id, iter_cb, index_type=idx_type)

        for name in attrlist:
            yield name

    @with_phil
    def __contains__(self, name):
        """ Determine if an attribute exists, by name. """
        return h5a.exists(self._id, self._e(name))

    @with_phil
    def __repr__(self):
        if not self._id:
            return ""
        return "" % id(self._id)
h5py-3.13.0/h5py/_hl/base.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements operations common to all high-level objects (File, etc.).
"""

from collections.abc import (
    Mapping, MutableMapping, KeysView, ValuesView, ItemsView
)
import os
import posixpath

import numpy as np

# The high-level interface is serialized; every public API function & method
# is wrapped in a lock.  We reuse the low-level lock because (1) it's fast,
# and (2) it eliminates the possibility of deadlocks due to out-of-order
# lock acquisition.
from .._objects import phil, with_phil
from .. import h5d, h5i, h5r, h5p, h5f, h5t, h5s
from .compat import fspath, filename_encode


def is_hdf5(fname):
    """ Determine if a file is valid HDF5 (False if it doesn't exist). """
    with phil:
        fname = os.path.abspath(fspath(fname))

        if os.path.isfile(fname):
            return h5f.is_hdf5(filename_encode(fname))
        return False


def find_item_type(data):
    """Find the item type of a simple object or collection of objects.

    E.g. [[['a']]] -> str

    The focus is on collections where all items have the same type; we'll return
    None if that's not the case.

    The aim is to treat numpy arrays of Python objects like normal Python
    collections, while treating arrays with specific dtypes differently.
    We're also only interested in array-like collections - lists and tuples,
    possibly nested - not things like sets or dicts.
    """
    if isinstance(data, np.ndarray):
        if (
            data.dtype.kind == 'O'
            and not h5t.check_string_dtype(data.dtype)
            and not h5t.check_vlen_dtype(data.dtype)
        ):
            item_types = {type(e) for e in data.flat}
        else:
            return None
    elif isinstance(data, (list, tuple)):
        item_types = {find_item_type(e) for e in data}
    else:
        return type(data)

    if len(item_types) != 1:
        return None
    return item_types.pop()


def guess_dtype(data):
    """ Attempt to guess an appropriate dtype for the object, returning None
    if nothing is appropriate (or if it should be left up to the array
    constructor to figure out)
    """
    with phil:
        if isinstance(data, h5r.RegionReference):
            return h5t.regionref_dtype
        if isinstance(data, h5r.Reference):
            return h5t.ref_dtype

        item_type = find_item_type(data)

        if item_type is bytes:
            return h5t.string_dtype(encoding='ascii')
        if item_type is str:
            return h5t.string_dtype()

        return None
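
# Hedged examples of the guess (illustrative only):
#
#   >>> guess_dtype(['a', 'b']) == h5t.string_dtype()    # list of str -> vlen UTF-8
#   True
#   >>> guess_dtype([1, 2, 3]) is None                    # plain numbers: numpy decides
#   True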


def is_float16_dtype(dt):
    if dt is None:
        return False

    dt = np.dtype(dt)  # normalize strings -> np.dtype objects
    return dt.kind == 'f' and dt.itemsize == 2


def array_for_new_object(data, specified_dtype=None):
    """Prepare an array from data used to create a new dataset or attribute"""

    # We mostly let HDF5 convert data as necessary when it's written.
    # But if we are going to a float16 datatype, pre-convert in python
    # to workaround a bug in the conversion.
    # https://github.com/h5py/h5py/issues/819
    if is_float16_dtype(specified_dtype):
        as_dtype = specified_dtype
    elif not isinstance(data, np.ndarray) and (specified_dtype is not None):
        # If we need to convert e.g. a list to an array, don't leave numpy
        # to guess a dtype we already know.
        as_dtype = specified_dtype
    else:
        as_dtype = guess_dtype(data)

    data = np.asarray(data, order="C", dtype=as_dtype)

    # In most cases, this does nothing. But if data was already an array,
    # and as_dtype is a tagged h5py dtype (e.g. for an object array of strings),
    # asarray() doesn't replace its dtype object. This gives it the tagged dtype:
    if as_dtype is not None:
        data = data.view(dtype=as_dtype)

    return data


def default_lapl():
    """ Default link access property list """
    return None


def default_lcpl():
    """ Default link creation property list """
    lcpl = h5p.create(h5p.LINK_CREATE)
    lcpl.set_create_intermediate_group(True)
    return lcpl

dlapl = default_lapl()
dlcpl = default_lcpl()


def is_empty_dataspace(obj):
    """ Check if an object's dataspace is empty """
    if obj.get_space().get_simple_extent_type() == h5s.NULL:
        return True
    return False


class CommonStateObject:

    """
        Mixin class that allows sharing information between objects which
        reside in the same HDF5 file.  Requires that the host class have
        a ".id" attribute which returns a low-level ObjectID subclass.

        Also implements Unicode operations.
    """

    @property
    def _lapl(self):
        """ Fetch the link access property list appropriate for this object
        """
        return dlapl

    @property
    def _lcpl(self):
        """ Fetch the link creation property list appropriate for this object
        """
        return dlcpl

    def _e(self, name, lcpl=None):
        """ Encode a name according to the current file settings.

        Returns name, or 2-tuple (name, lcpl) if lcpl is True

        - Binary strings are always passed as-is, h5t.CSET_ASCII
        - Unicode strings are encoded utf8, h5t.CSET_UTF8

        If name is None, returns either None or (None, None) appropriately.
        """
        def get_lcpl(coding):
            """ Create an appropriate link creation property list """
            lcpl = self._lcpl.copy()
            lcpl.set_char_encoding(coding)
            return lcpl

        if name is None:
            return (None, None) if lcpl else None

        if isinstance(name, bytes):
            coding = h5t.CSET_ASCII
        elif isinstance(name, str):
            try:
                name = name.encode('ascii')
                coding = h5t.CSET_ASCII
            except UnicodeEncodeError:
                name = name.encode('utf8')
                coding = h5t.CSET_UTF8
        else:
            raise TypeError(f"A name should be string or bytes, not {type(name)}")

        if lcpl:
            return name, get_lcpl(coding)
        return name

    def _d(self, name):
        """ Decode a name according to the current file settings.

        - Try to decode utf8
        - Failing that, return the byte string

        If name is None, returns None.
        """
        if name is None:
            return None

        try:
            return name.decode('utf8')
        except UnicodeDecodeError:
            pass
        return name


class _RegionProxy:

    """
        Proxy object which handles region references.

        To create a new region reference (datasets only), use slicing syntax:

            >>> newref = obj.regionref[0:10:2]

        To determine the target dataset shape from an existing reference:

            >>> shape = obj.regionref.shape(existingref)

        where <obj> may be any object in the file. To determine the shape of
        the selection in use on the target dataset:

            >>> selection_shape = obj.regionref.selection(existingref)
    """

    def __init__(self, obj):
        self.obj = obj
        self.id = obj.id

    def __getitem__(self, args):
        if not isinstance(self.id, h5d.DatasetID):
            raise TypeError("Region references can only be made to datasets")
        from . import selections
        with phil:
            selection = selections.select(self.id.shape, args, dataset=self.obj)
            return h5r.create(self.id, b'.', h5r.DATASET_REGION, selection.id)

    def shape(self, ref):
        """ Get the shape of the target dataspace referred to by *ref*. """
        with phil:
            sid = h5r.get_region(ref, self.id)
            return sid.shape

    def selection(self, ref):
        """ Get the shape of the target dataspace selection referred to by *ref*
        """
        from . import selections
        with phil:
            sid = h5r.get_region(ref, self.id)
            return selections.guess_shape(sid)


class HLObject(CommonStateObject):

    """
        Base class for high-level interface objects.
    """

    @property
    def file(self):
        """ Return a File instance associated with this object """
        from . import files
        with phil:
            return files.File(self.id)

    @property
    @with_phil
    def name(self):
        """ Return the full name of this object.  None if anonymous. """
        return self._d(h5i.get_name(self.id))

    @property
    @with_phil
    def parent(self):
        """Return the parent group of this object.

        This is always equivalent to obj.file[posixpath.dirname(obj.name)].
        ValueError if this object is anonymous.
        """
        if self.name is None:
            raise ValueError("Parent of an anonymous object is undefined")
        return self.file[posixpath.dirname(self.name)]

    @property
    @with_phil
    def id(self):
        """ Low-level identifier appropriate for this object """
        return self._id

    @property
    @with_phil
    def ref(self):
        """ An (opaque) HDF5 reference to this object """
        return h5r.create(self.id, b'.', h5r.OBJECT)

    @property
    @with_phil
    def regionref(self):
        """Create a region reference (Datasets only).

        The syntax is regionref[<slices>]. For example, dset.regionref[...]
        creates a region reference in which the whole dataset is selected.

        Can also be used to determine the shape of the referenced dataset
        (via .shape property), or the shape of the selection (via the
        .selection property).
        """
        return _RegionProxy(self)

    @property
    def attrs(self):
        """ Attributes attached to this object """
        from . import attrs
        with phil:
            return attrs.AttributeManager(self)

    @with_phil
    def __init__(self, oid):
        """ Setup this object, given its low-level identifier """
        self._id = oid

    @with_phil
    def __hash__(self):
        return hash(self.id)

    @with_phil
    def __eq__(self, other):
        if hasattr(other, 'id'):
            return self.id == other.id
        return NotImplemented

    def __bool__(self):
        with phil:
            return bool(self.id)
    __nonzero__ = __bool__

    def __getnewargs__(self):
        """Disable pickle.

        Handles for HDF5 objects can't be reliably deserialised, because the
        recipient may not have access to the same files. So we do this to
        fail early.

        If you really want to pickle h5py objects and can live with some
        limitations, look at the h5pickle project on PyPI.
        """
        raise TypeError("h5py objects cannot be pickled")

    def __getstate__(self):
        # Pickle protocols 0 and 1 use this instead of __getnewargs__
        raise TypeError("h5py objects cannot be pickled")

# --- Dictionary-style interface ----------------------------------------------

# To implement the dictionary-style interface from groups and attributes,
# we inherit from the appropriate abstract base classes in collections.
#
# All locking is taken care of by the subclasses.
# We have to override ValuesView and ItemsView here because Group and
# AttributeManager can only test for key names.


class KeysViewHDF5(KeysView):
    def __str__(self):
        return "".format(list(self))

    def __reversed__(self):
        yield from reversed(self._mapping)

    __repr__ = __str__

class ValuesViewHDF5(ValuesView):

    """
        Wraps e.g. a Group or AttributeManager to provide a value view.

        Note that __contains__ will have poor performance as it has
        to scan all the links or attributes.
    """

    def __contains__(self, value):
        with phil:
            for key in self._mapping:
                if value == self._mapping.get(key):
                    return True
            return False

    def __iter__(self):
        with phil:
            for key in self._mapping:
                yield self._mapping.get(key)

    def __reversed__(self):
        with phil:
            for key in reversed(self._mapping):
                yield self._mapping.get(key)


class ItemsViewHDF5(ItemsView):

    """
        Wraps e.g. a Group or AttributeManager to provide an items view.
    """

    def __contains__(self, item):
        with phil:
            key, val = item
            if key in self._mapping:
                return val == self._mapping.get(key)
            return False

    def __iter__(self):
        with phil:
            for key in self._mapping:
                yield (key, self._mapping.get(key))

    def __reversed__(self):
        with phil:
            for key in reversed(self._mapping):
                yield (key, self._mapping.get(key))


class MappingHDF5(Mapping):

    """
        Wraps a Group, AttributeManager or DimensionManager object to provide
        an immutable mapping interface.

        We don't inherit directly from MutableMapping because certain
        subclasses, for example DimensionManager, are read-only.
    """
    def keys(self):
        """ Get a view object on member names """
        return KeysViewHDF5(self)

    def values(self):
        """ Get a view object on member objects """
        return ValuesViewHDF5(self)

    def items(self):
        """ Get a view object on member items """
        return ItemsViewHDF5(self)

    def _ipython_key_completions_(self):
        """ Custom tab completions for __getitem__ in IPython >=5.0. """
        return sorted(self.keys())


class MutableMappingHDF5(MappingHDF5, MutableMapping):

    """
        Wraps a Group or AttributeManager object to provide a mutable
        mapping interface, in contrast to the read-only mapping of
        MappingHDF5.
    """

    pass


class Empty:

    """
        Proxy object to represent empty/null dataspaces (a.k.a H5S_NULL).

        This can have an associated dtype, but has no shape or data. This is not
        the same as an array with shape (0,).
    """
    shape = None
    size = None

    def __init__(self, dtype):
        self.dtype = np.dtype(dtype)

    def __eq__(self, other):
        if isinstance(other, Empty) and self.dtype == other.dtype:
            return True
        return False

    def __repr__(self):
        return "Empty(dtype={0!r})".format(self.dtype)


def product(nums):
    """Calculate a numeric product

    For small amounts of data (e.g. shape tuples), this simple code is much
    faster than calling numpy.prod().
    """
    prod = 1
    for n in nums:
        prod *= n
    return prod
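
# A minimal check of the helper above (illustrative):
#
#   >>> product((2, 3, 4))
#   24
#   >>> product(())      # empty tuple, e.g. a scalar shape
#   1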


# Simple variant of cached_property:
# Unlike functools, this has no locking, so we don't have to worry about
# deadlocks with phil (see issue gh-2064). Unlike cached-property on PyPI, it
# doesn't try to import asyncio (which can be ~100 extra modules).
# Many projects seem to have similar variants of this, often without attribution,
# but to be cautious, this code comes from cached-property (Copyright (c) 2015,
# Daniel Greenfeld, BSD license), where it is attributed to bottle (Copyright
# (c) 2009-2022, Marcel Hellkamp, MIT license).

class cached_property:
    def __init__(self, func):
        self.__doc__ = getattr(func, "__doc__")
        self.func = func

    def __get__(self, obj, cls):
        if obj is None:
            return self

        value = obj.__dict__[self.func.__name__] = self.func(obj)
        return value
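
# Hedged usage sketch for the descriptor above (the Box class is illustrative):
#
#   >>> class Box:
#   ...     @cached_property
#   ...     def value(self):
#   ...         print("computing")
#   ...         return 42
#   >>> b = Box()
#   >>> b.value          # first access runs the function and caches in __dict__
#   computing
#   42
#   >>> b.value          # cached: no recomputation
#   42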
h5py-3.13.0/h5py/_hl/compat.py
"""
Compatibility module for high-level h5py
"""
import sys
from os import fspath, fsencode, fsdecode
from ..version import hdf5_built_version_tuple

# HDF5 supported passing paths as UTF-8 for Windows from 1.10.6, but this
# was broken again in 1.14.4 - https://github.com/HDFGroup/hdf5/issues/5037 .
# The change was reverted in 1.14.6.
if (1, 14, 4) <= hdf5_built_version_tuple < (1, 14, 6):
    WINDOWS_ENCODING = "mbcs"
else:
    WINDOWS_ENCODING = "utf-8"


def filename_encode(filename):
    """
    Encode filename for use in the HDF5 library.

    Due to how HDF5 handles filenames on different systems, this should be
    called on any filenames passed to the HDF5 library. See the documentation on
    filenames in h5py for more information.
    """
    filename = fspath(filename)
    if sys.platform == "win32":
        if isinstance(filename, str):
            return filename.encode(WINDOWS_ENCODING, "strict")
        return filename
    return fsencode(filename)


def filename_decode(filename):
    """
    Decode filename used by HDF5 library.

    Due to how HDF5 handles filenames on different systems, this should be
    called on any filenames passed from the HDF5 library. See the documentation
    on filenames in h5py for more information.
    """
    if sys.platform == "win32":
        if isinstance(filename, bytes):
            return filename.decode(WINDOWS_ENCODING, "strict")
        elif isinstance(filename, str):
            return filename
        else:
            raise TypeError("expect bytes or str, not %s" % type(filename).__name__)
    return fsdecode(filename)
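
# Hedged usage sketch (paths are illustrative; off Windows these defer to
# os.fsencode / os.fsdecode):
#
#   >>> filename_encode('data.h5')        # doctest: +SKIP
#   b'data.h5'
#   >>> filename_decode(b'data.h5')       # doctest: +SKIP
#   'data.h5'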
h5py-3.13.0/h5py/_hl/dataset.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2020 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements support for high-level dataset access.
"""

import posixpath as pp
import sys
from abc import ABC, abstractmethod

import numpy

from .. import h5, h5s, h5t, h5r, h5d, h5p, h5fd, h5ds, _selector
from .base import (
    array_for_new_object, cached_property, Empty, find_item_type, HLObject,
    phil, product, with_phil,
)
from . import filters
from . import selections as sel
from . import selections2 as sel2
from .datatype import Datatype
from .compat import filename_decode
from .vds import VDSmap, vds_support

_LEGACY_GZIP_COMPRESSION_VALS = frozenset(range(10))
MPI = h5.get_config().mpi


def make_new_dset(parent, shape=None, dtype=None, data=None, name=None,
                  chunks=None, compression=None, shuffle=None,
                  fletcher32=None, maxshape=None, compression_opts=None,
                  fillvalue=None, scaleoffset=None, track_times=False,
                  external=None, track_order=None, dcpl=None, dapl=None,
                  efile_prefix=None, virtual_prefix=None, allow_unknown_filter=False,
                  rdcc_nslots=None, rdcc_nbytes=None, rdcc_w0=None, *,
                  fill_time=None):
    """ Return a new low-level dataset identifier """

    # Convert data to a C-contiguous ndarray
    if data is not None and not isinstance(data, Empty):
        data = array_for_new_object(data, specified_dtype=dtype)

    # Validate shape
    if shape is None:
        if data is None:
            if dtype is None:
                raise TypeError("One of data, shape or dtype must be specified")
            data = Empty(dtype)
        shape = data.shape
    else:
        shape = (shape,) if isinstance(shape, int) else tuple(shape)
        if data is not None and (product(shape) != product(data.shape)):
            raise ValueError("Shape tuple is incompatible with data")

    if isinstance(maxshape, int):
        maxshape = (maxshape,)
    tmp_shape = maxshape if maxshape is not None else shape

    # Validate chunk shape
    if isinstance(chunks, int) and not isinstance(chunks, bool):
        chunks = (chunks,)
    if isinstance(chunks, tuple) and any(
        chunk > dim for dim, chunk in zip(tmp_shape, chunks) if dim is not None
    ):
        errmsg = "Chunk shape must not be greater than data shape in any dimension. "\
                 "{} is not compatible with {}".format(chunks, shape)
        raise ValueError(errmsg)

    if isinstance(dtype, Datatype):
        # Named types are used as-is
        tid = dtype.id
        dtype = tid.dtype  # Following code needs this
    else:
        # Validate dtype
        if dtype is None and data is None:
            dtype = numpy.dtype("=f4")
        elif dtype is None and data is not None:
            dtype = data.dtype
        else:
            dtype = numpy.dtype(dtype)
        tid = h5t.py_create(dtype, logical=1)

    # Legacy
    if any((compression, shuffle, fletcher32, maxshape, scaleoffset)) and chunks is False:
        raise ValueError("Chunked format required for given storage options")

    # Legacy
    if compression is True:
        if compression_opts is None:
            compression_opts = 4
        compression = 'gzip'

    # Legacy
    if compression in _LEGACY_GZIP_COMPRESSION_VALS:
        if compression_opts is not None:
            raise TypeError("Conflict in compression options")
        compression_opts = compression
        compression = 'gzip'
    dcpl = filters.fill_dcpl(
        dcpl or h5p.create(h5p.DATASET_CREATE), shape, dtype,
        chunks, compression, compression_opts, shuffle, fletcher32,
        maxshape, scaleoffset, external, allow_unknown_filter,
        fill_time=fill_time)

    # Check that compression roundtrips correctly if it was specified
    if compression is not None:
        if isinstance(compression, filters.FilterRefBase):
            compression = compression.filter_id
        if isinstance(compression, int):
            compression = filters.get_filter_name(compression)
        if compression not in filters.get_filters(dcpl):
            raise ValueError(f'compression {compression!r} not in filters {filters.get_filters(dcpl)!r}')

    if fillvalue is not None:
        # prepare string-type dtypes for fillvalue
        string_info = h5t.check_string_dtype(dtype)
        if string_info is not None:
            # fake vlen dtype for fixed len string fillvalue
            # to not trigger unwanted encoding
            dtype = h5t.string_dtype(string_info.encoding)
            fillvalue = numpy.array(fillvalue, dtype=dtype)
        else:
            fillvalue = numpy.array(fillvalue)
        dcpl.set_fill_value(fillvalue)

    if track_times is None:
        # In case someone explicitly passes None for the default
        track_times = False
    if track_times in (True, False):
        dcpl.set_obj_track_times(track_times)
    else:
        raise TypeError("track_times must be either True or False")
    if track_order is True:
        dcpl.set_attr_creation_order(
            h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)
    elif track_order is False:
        dcpl.set_attr_creation_order(0)
    elif track_order is not None:
        raise TypeError("track_order must be either True or False")

    if maxshape is not None:
        maxshape = tuple(m if m is not None else h5s.UNLIMITED for m in maxshape)

    if any([efile_prefix, virtual_prefix, rdcc_nbytes, rdcc_nslots, rdcc_w0]):
        dapl = dapl or h5p.create(h5p.DATASET_ACCESS)

    if efile_prefix is not None:
        dapl.set_efile_prefix(efile_prefix)

    if virtual_prefix is not None:
        dapl.set_virtual_prefix(virtual_prefix)

    if rdcc_nbytes or rdcc_nslots or rdcc_w0:
        cache_settings = list(dapl.get_chunk_cache())
        if rdcc_nslots is not None:
            cache_settings[0] = rdcc_nslots
        if rdcc_nbytes is not None:
            cache_settings[1] = rdcc_nbytes
        if rdcc_w0 is not None:
            cache_settings[2] = rdcc_w0
        dapl.set_chunk_cache(*cache_settings)

    if isinstance(data, Empty):
        sid = h5s.create(h5s.NULL)
    else:
        sid = h5s.create_simple(shape, maxshape)

    dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)

    if (data is not None) and (not isinstance(data, Empty)):
        dset_id.write(h5s.ALL, h5s.ALL, data)

    return dset_id
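
# Illustrative sketch (not part of this module): the keyword arguments
# validated above normally arrive through the high-level Group.create_dataset()
# call, e.g. (file and dataset names are hypothetical):
#
#     import h5py, numpy as np
#     with h5py.File("example.h5", "w") as f:
#         dset = f.create_dataset(
#             "counts", shape=(1000, 1000), dtype="i4",
#             chunks=(100, 100), compression="gzip", compression_opts=4,
#             maxshape=(None, 1000), fillvalue=0,
#         )
#         dset[:100, :100] = np.arange(10000, dtype="i4").reshape(100, 100)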


def open_dset(parent, name, dapl=None, efile_prefix=None, virtual_prefix=None,
              rdcc_nslots=None, rdcc_nbytes=None, rdcc_w0=None, **kwds):
    """ Return an existing low-level dataset identifier """

    if any([efile_prefix, virtual_prefix, rdcc_nbytes, rdcc_nslots, rdcc_w0]):
        dapl = dapl or h5p.create(h5p.DATASET_ACCESS)

    if efile_prefix is not None:
        dapl.set_efile_prefix(efile_prefix)

    if virtual_prefix is not None:
        dapl.set_virtual_prefix(virtual_prefix)

    if rdcc_nbytes or rdcc_nslots or rdcc_w0:
        cache_settings = list(dapl.get_chunk_cache())
        if rdcc_nslots is not None:
            cache_settings[0] = rdcc_nslots
        if rdcc_nbytes is not None:
            cache_settings[1] = rdcc_nbytes
        if rdcc_w0 is not None:
            cache_settings[2] = rdcc_w0
        dapl.set_chunk_cache(*cache_settings)

    dset_id = h5d.open(parent.id, name, dapl=dapl)

    return dset_id



class AbstractView(ABC):
    _dset: "Dataset"

    def __init__(self, dset):
        self._dset = dset

    def __len__(self):
        return len(self._dset)

    @property
    @abstractmethod
    def dtype(self):
        ...  # pragma: nocover

    @property
    def ndim(self):
        return self._dset.ndim

    @property
    def shape(self):
        return self._dset.shape

    @property
    def size(self):
        return self._dset.size

    @abstractmethod
    def __getitem__(self, idx):
        ...  # pragma: nocover

    def __array__(self, dtype=None, copy=None):
        if copy is False:
            raise ValueError(
                f"{self.__class__.__name__}.__array__ received {copy=} "
                "but memory allocation cannot be avoided on read"
            )

        # If self.ndim == 0, convert np.generic back to np.ndarray
        return numpy.asarray(self[()], dtype=dtype or self.dtype)

class AsTypeView(AbstractView):
    """Wrapper to convert data on reading from a dataset.
    """
    def __init__(self, dset, dtype):
        super().__init__(dset)
        self._dtype = numpy.dtype(dtype)

    @property
    def dtype(self):
        return self._dtype

    def __getitem__(self, idx):
        return self._dset.__getitem__(idx, new_dtype=self._dtype)

    def __array__(self, dtype=None, copy=None):
        return self._dset.__array__(dtype or self._dtype, copy)


class AsStrView(AbstractView):
    """Wrapper to decode strings on reading the dataset"""
    def __init__(self, dset, encoding, errors='strict'):
        super().__init__(dset)
        self.encoding = encoding
        self.errors = errors

    @property
    def dtype(self):
        return numpy.dtype(object)

    def __getitem__(self, idx):
        bytes_arr = self._dset[idx]
        # numpy.char.decode() seems like the obvious thing to use. But it only
        # accepts numpy string arrays, not object arrays of bytes (which we
        # return from HDF5 variable-length strings). And the numpy
        # implementation is not faster than doing it with a loop; in fact, by
        # not converting the result to a numpy unicode array, the
        # naive way can be faster! (Comparing with numpy 1.18.4, June 2020)
        if numpy.isscalar(bytes_arr):
            return bytes_arr.decode(self.encoding, self.errors)

        return numpy.array([
            b.decode(self.encoding, self.errors) for b in bytes_arr.flat
        ], dtype=object).reshape(bytes_arr.shape)


class FieldsView(AbstractView):
    """Wrapper to extract named fields from a dataset with a struct dtype"""

    def __init__(self, dset, prior_dtype, names):
        super().__init__(dset)
        if isinstance(names, str):
            self.extract_field = names
            names = [names]
        else:
            self.extract_field = None
        self.read_dtype = readtime_dtype(prior_dtype, names)

    @property
    def dtype(self):
        t = self.read_dtype
        if self.extract_field is not None:
            t = t[self.extract_field]
        return t

    def __getitem__(self, idx):
        data = self._dset.__getitem__(idx, new_dtype=self.read_dtype)
        if self.extract_field is not None:
            data = data[self.extract_field]
        return data


def readtime_dtype(basetype, names):
    """Make a NumPy compound dtype with a subset of available fields"""
    if basetype.names is None:  # Names provided, but not compound
        raise ValueError("Field names only allowed for compound types")

    for name in names:  # Check all names are legal
        if name not in basetype.names:
            raise ValueError("Field %s does not appear in this type." % name)

    return numpy.dtype([(name, basetype.fields[name][0]) for name in names])
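
# Illustrative sketch (not part of this module): given a compound dtype,
# readtime_dtype() builds the reduced dtype used for reading a subset of
# fields (the dtype below is hypothetical):
#
#     base = numpy.dtype([('x', 'f8'), ('y', 'f8'), ('label', 'S4')])
#     readtime_dtype(base, ['x', 'y'])
#     # -> dtype([('x', '<f8'), ('y', '<f8')])  (on a little-endian machine)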


if MPI:
    class CollectiveContext:

        """ Manages collective I/O in MPI mode """

        # We don't bother with _local as threads are forbidden in MPI mode

        def __init__(self, dset):
            self._dset = dset

        def __enter__(self):
            # pylint: disable=protected-access
            self._dset._dxpl.set_dxpl_mpio(h5fd.MPIO_COLLECTIVE)

        def __exit__(self, *args):
            # pylint: disable=protected-access
            self._dset._dxpl.set_dxpl_mpio(h5fd.MPIO_INDEPENDENT)


class ChunkIterator:
    """
    Class to iterate through list of chunks of a given dataset
    """
    def __init__(self, dset, source_sel=None):
        self._shape = dset.shape
        rank = len(dset.shape)

        if not dset.chunks:
            # can only use with chunked datasets
            raise TypeError("Chunked dataset required")

        self._layout = dset.chunks
        if source_sel is None:
            # select over entire dataset
            self._sel = tuple(
                slice(0, self._shape[dim])
                for dim in range(rank)
            )
        else:
            if isinstance(source_sel, slice):
                self._sel = (source_sel,)
            else:
                self._sel = source_sel
        if len(self._sel) != rank:
            raise ValueError("Invalid selection - selection region must have same rank as dataset")
        self._chunk_index = []
        for dim in range(rank):
            s = self._sel[dim]
            if s.start < 0 or s.stop > self._shape[dim] or s.stop <= s.start:
                raise ValueError("Invalid selection - selection region must be within dataset space")
            index = s.start // self._layout[dim]
            self._chunk_index.append(index)

    def __iter__(self):
        return self

    def __next__(self):
        rank = len(self._shape)
        slices = []
        if rank == 0 or self._chunk_index[0] * self._layout[0] >= self._sel[0].stop:
            # ran past the last chunk, end iteration
            raise StopIteration()

        for dim in range(rank):
            s = self._sel[dim]
            start = self._chunk_index[dim] * self._layout[dim]
            stop = (self._chunk_index[dim] + 1) * self._layout[dim]
            # adjust the start if this is an edge chunk
            if start < s.start:
                start = s.start
            if stop > s.stop:
                stop = s.stop  # trim to end of the selection
            s = slice(start, stop, 1)
            slices.append(s)

        # bump up the last index and carry forward if we run outside the selection
        dim = rank - 1
        while dim >= 0:
            s = self._sel[dim]
            self._chunk_index[dim] += 1

            chunk_end = self._chunk_index[dim] * self._layout[dim]
            if chunk_end < s.stop:
                # we still have room to extend along this dimension
                return tuple(slices)

            if dim > 0:
                # reset to the start and continue iterating with higher dimension
                self._chunk_index[dim] = s.start // self._layout[dim]
            dim -= 1
        return tuple(slices)


class Dataset(HLObject):

    """
        Represents an HDF5 dataset
    """

    def astype(self, dtype):
        """ Get a wrapper allowing you to perform reads to a
        different destination type, e.g.:

        >>> double_precision = dataset.astype('f8')[0:100:2]
        """
        return AsTypeView(self, dtype)

    def asstr(self, encoding=None, errors='strict'):
        """Get a wrapper to read string data as Python strings:

        >>> str_array = dataset.asstr()[:]

        The parameters have the same meaning as in ``bytes.decode()``.
        If ``encoding`` is unspecified, it will use the encoding in the HDF5
        datatype (either ascii or utf-8).
        """
        string_info = h5t.check_string_dtype(self.dtype)
        if string_info is None:
            raise TypeError(
                "dset.asstr() can only be used on datasets with "
                "an HDF5 string datatype"
            )
        if encoding is None:
            encoding = string_info.encoding
        return AsStrView(self, encoding, errors=errors)

    def fields(self, names, *, _prior_dtype=None):
        """Get a wrapper to read a subset of fields from a compound data type:

        >>> coords_2d = dataset.fields(['x', 'y'])[:]

        If names is a string, a single field is extracted, and the resulting
        arrays will have that dtype. Otherwise, it should be an iterable,
        and the read data will have a compound dtype.
        """
        if _prior_dtype is None:
            _prior_dtype = self.dtype
        return FieldsView(self, _prior_dtype, names)

    if MPI:
        @property
        @with_phil
        def collective(self):
            """ Context manager for MPI collective reads & writes """
            return CollectiveContext(self)

    @property
    def dims(self):
        """ Access dimension scales attached to this dataset. """
        from .dims import DimensionManager
        with phil:
            return DimensionManager(self)

    @property
    @with_phil
    def ndim(self):
        """Numpy-style attribute giving the number of dimensions"""
        return self.id.rank

    @property
    def shape(self):
        """Numpy-style shape tuple giving dataset dimensions"""
        if 'shape' in self._cache_props:
            return self._cache_props['shape']

        with phil:
            shape = self.id.shape

        # If the file is read-only, cache the shape to speed-up future uses.
        # This cache is invalidated by .refresh() when using SWMR.
        if self._readonly:
            self._cache_props['shape'] = shape
        return shape

    @shape.setter
    @with_phil
    def shape(self, shape):
        # pylint: disable=missing-docstring
        self.resize(shape)

    @property
    def size(self):
        """Numpy-style attribute giving the total dataset size"""
        if 'size' in self._cache_props:
            return self._cache_props['size']

        if self._is_empty:
            size = None
        else:
            size = product(self.shape)

        # If the file is read-only, cache the size to speed-up future uses.
        # This cache is invalidated by .refresh() when using SWMR.
        if self._readonly:
            self._cache_props['size'] = size
        return size

    @property
    def nbytes(self):
        """Numpy-style attribute giving the raw dataset size as the number of bytes"""
        size = self.size
        if size is None:  # if we are an empty 0-D array, then there are no bytes in the dataset
            return 0
        return self.dtype.itemsize * size

    @property
    def _selector(self):
        """Internal object for optimised selection of data"""
        if '_selector' in self._cache_props:
            return self._cache_props['_selector']

        slr = _selector.Selector(self.id.get_space())

        # If the file is read-only, cache the reader to speed up future uses.
        # This cache is invalidated by .refresh() when using SWMR.
        if self._readonly:
            self._cache_props['_selector'] = slr
        return slr

    @property
    def _fast_reader(self):
        """Internal object for optimised reading of data"""
        if '_fast_reader' in self._cache_props:
            return self._cache_props['_fast_reader']

        rdr = _selector.Reader(self.id)

        # If the file is read-only, cache the reader to speed up future uses.
        # This cache is invalidated by .refresh() when using SWMR.
        if self._readonly:
            self._cache_props['_fast_reader'] = rdr
        return rdr

    @property
    @with_phil
    def dtype(self):
        """Numpy dtype representing the datatype"""
        return self.id.dtype

    @property
    @with_phil
    def chunks(self):
        """Dataset chunks (or None)"""
        dcpl = self._dcpl
        if dcpl.get_layout() == h5d.CHUNKED:
            return dcpl.get_chunk()
        return None

    @property
    @with_phil
    def compression(self):
        """Compression strategy (or None)"""
        for x in ('gzip','lzf','szip'):
            if x in self._filters:
                return x
        return None

    @property
    @with_phil
    def compression_opts(self):
        """ Compression setting.  Int(0-9) for gzip, 2-tuple for szip. """
        return self._filters.get(self.compression, None)

    @property
    @with_phil
    def shuffle(self):
        """Shuffle filter present (T/F)"""
        return 'shuffle' in self._filters

    @property
    @with_phil
    def fletcher32(self):
        """Fletcher32 filter is present (T/F)"""
        return 'fletcher32' in self._filters

    @property
    @with_phil
    def scaleoffset(self):
        """Scale/offset filter settings. For integer data types, this is
        the number of bits stored, or 0 for auto-detected. For floating
        point data types, this is the number of decimal places retained.
        If the scale/offset filter is not in use, this is None."""
        try:
            return self._filters['scaleoffset'][1]
        except KeyError:
            return None

    @property
    @with_phil
    def external(self):
        """External file settings. Returns a list of tuples of
        (name, offset, size) for each external file entry, or returns None
        if no external files are used."""
        count = self._dcpl.get_external_count()
        if count <= 0:
            return None
        ext_list = []
        for x in range(count):
            name, offset, size = self._dcpl.get_external(x)
            ext_list.append((filename_decode(name), offset, size))
        return ext_list

    @property
    @with_phil
    def maxshape(self):
        """Shape up to which this dataset can be resized.  Axes with value
        None have no resize limit. """
        space = self.id.get_space()
        dims = space.get_simple_extent_dims(True)
        if dims is None:
            return None

        return tuple(x if x != h5s.UNLIMITED else None for x in dims)

    @property
    @with_phil
    def fillvalue(self):
        """Fill value for this dataset (0 by default)"""
        arr = numpy.zeros((1,), dtype=self.dtype)
        self._dcpl.get_fill_value(arr)
        return arr[0]

    @cached_property
    @with_phil
    def _extent_type(self):
        """Get extent type for this dataset - SIMPLE, SCALAR or NULL"""
        return self.id.get_space().get_simple_extent_type()

    @cached_property
    def _is_empty(self):
        """Check if extent type is empty"""
        return self._extent_type == h5s.NULL

    @cached_property
    def _dcpl(self):
        """
        The dataset creation property list used when this dataset was created.
        """
        return self.id.get_create_plist()

    @cached_property
    def _filters(self):
        """
        The active filters of the dataset.
        """
        return filters.get_filters(self._dcpl)

    @with_phil
    def __init__(self, bind, *, readonly=False):
        """ Create a new Dataset object by binding to a low-level DatasetID.
        """
        if not isinstance(bind, h5d.DatasetID):
            raise ValueError("%s is not a DatasetID" % bind)
        super().__init__(bind)

        self._dxpl = h5p.create(h5p.DATASET_XFER)
        self._readonly = readonly
        self._cache_props = {}

    def resize(self, size, axis=None):
        """ Resize the dataset, or the specified axis.

        The dataset must be stored in chunked format; it can be resized up to
        the "maximum shape" (keyword maxshape) specified at creation time.
        The rank of the dataset cannot be changed.

        "Size" should be a shape tuple, or if an axis is specified, an integer.

        BEWARE: This functions differently than the NumPy resize() method!
        The data is not "reshuffled" to fit in the new shape; each axis is
        grown or shrunk independently.  The coordinates of existing data are
        fixed.
        """
        with phil:
            if self.chunks is None:
                raise TypeError("Only chunked datasets can be resized")

            if axis is not None:
                if not (0 <= axis < self.id.rank):
                    raise ValueError("Invalid axis (0 to %s allowed)" % (self.id.rank - 1))
                try:
                    newlen = int(size)
                except TypeError:
                    raise TypeError("Argument must be a single int if axis is specified")
                size = list(self.shape)
                size[axis] = newlen

            size = tuple(size)
            self.id.set_extent(size)
            #h5f.flush(self.id)  # THG recommends
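
    # Illustrative sketch (not part of this module): resizing a chunked,
    # extendable dataset (file and dataset names are hypothetical).
    #
    #     with h5py.File("example.h5", "w") as f:
    #         d = f.create_dataset("log", shape=(0,), maxshape=(None,),
    #                              chunks=(1024,), dtype="f8")
    #         d.resize(2048, axis=0)    # grow the single axis
    #         d.resize((4096,))         # or pass a full shape tuple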

    @with_phil
    def __len__(self):
        """ The size of the first axis.  TypeError if scalar.

        Limited to 2**32 on 32-bit systems; Dataset.len() is preferred.
        """
        size = self.len()
        if size > sys.maxsize:
            raise OverflowError("Value too big for Python's __len__; use Dataset.len() instead.")
        return size

    def len(self):
        """ The size of the first axis.  TypeError if scalar.

        Use of this method is preferred to len(dset), as Python's built-in
        len() cannot handle values greater than 2**32 on 32-bit systems.
        """
        with phil:
            shape = self.shape
            if len(shape) == 0:
                raise TypeError("Attempt to take len() of scalar dataset")
            return shape[0]

    @with_phil
    def __iter__(self):
        """ Iterate over the first axis.  TypeError if scalar.

        BEWARE: Modifications to the yielded data are *NOT* written to file.
        """
        shape = self.shape
        if len(shape) == 0:
            raise TypeError("Can't iterate over a scalar dataset")
        for i in range(shape[0]):
            yield self[i]

    @with_phil
    def iter_chunks(self, sel=None):
        """ Return chunk iterator.  If set, the sel argument is a slice or
        tuple of slices that defines the region to be used. If not set, the
        entire dataspace will be used for the iterator.

        For each chunk within the given region, the iterator yields a tuple of
        slices that gives the intersection of the given chunk with the
        selection area.

        A TypeError will be raised if the dataset is not chunked.

        A ValueError will be raised if the selection region is invalid.

        """
        return ChunkIterator(self, sel)
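
    # Illustrative sketch (not part of this module): each iteration yields a
    # tuple of slices clipped to the selection, suitable for chunk-wise
    # processing ('process' is a placeholder):
    #
    #     for chunk_sel in dset.iter_chunks():
    #         block = dset[chunk_sel]   # read one chunk's worth of data
    #         process(block)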

    @cached_property
    def _fast_read_ok(self):
        """Is this dataset suitable for simple reading"""
        return (
            self._extent_type == h5s.SIMPLE
            and isinstance(self.id.get_type(), (h5t.TypeIntegerID, h5t.TypeFloatID))
        )

    @with_phil
    def __getitem__(self, args, new_dtype=None):
        """ Read a slice from the HDF5 dataset.

        Takes slices and recarray-style field names (more than one is
        allowed!) in any order.  Obeys basic NumPy rules, including
        broadcasting.

        Also supports:

        * Boolean "mask" array indexing
        """
        args = args if isinstance(args, tuple) else (args,)

        if self._fast_read_ok and (new_dtype is None):
            try:
                return self._fast_reader.read(args)
            except TypeError:
                pass  # Fall back to Python read pathway below

        if self._is_empty:
            # Check 'is Ellipsis' to avoid equality comparison with an array:
            # array equality returns an array, not a boolean.
            if args == () or (len(args) == 1 and args[0] is Ellipsis):
                return Empty(self.dtype)
            raise ValueError("Empty datasets cannot be sliced")

        # Sort field names from the rest of the args.
        names = tuple(x for x in args if isinstance(x, str))

        if names:
            # Read a subset of the fields in this structured dtype
            if len(names) == 1:
                names = names[0]  # Read with simpler dtype of this field
            args = tuple(x for x in args if not isinstance(x, str))
            return self.fields(names, _prior_dtype=new_dtype)[args]

        if new_dtype is None:
            new_dtype = self.dtype
        mtype = h5t.py_create(new_dtype)

        # === Special-case region references ====

        if len(args) == 1 and isinstance(args[0], h5r.RegionReference):

            obj = h5r.dereference(args[0], self.id)
            if obj != self.id:
                raise ValueError("Region reference must point to this dataset")

            sid = h5r.get_region(args[0], self.id)
            mshape = sel.guess_shape(sid)
            if mshape is None:
                # 0D with no data (NULL or deselected SCALAR)
                return Empty(new_dtype)
            out = numpy.zeros(mshape, dtype=new_dtype)
            if out.size == 0:
                return out

            sid_out = h5s.create_simple(mshape)
            sid_out.select_all()
            self.id.read(sid_out, sid, out, mtype)
            return out

        # === Check for zero-sized datasets =====

        if self.size == 0:
            # Check 'is Ellipsis' to avoid equality comparison with an array:
            # array equality returns an array, not a boolean.
            if args == () or (len(args) == 1 and args[0] is Ellipsis):
                return numpy.zeros(self.shape, dtype=new_dtype)

        # === Scalar dataspaces =================

        if self.shape == ():
            fspace = self.id.get_space()
            selection = sel2.select_read(fspace, args)
            if selection.mshape is None:
                arr = numpy.zeros((), dtype=new_dtype)
            else:
                arr = numpy.zeros(selection.mshape, dtype=new_dtype)
            for mspace, fspace in selection:
                self.id.read(mspace, fspace, arr, mtype)
            if selection.mshape is None:
                return arr[()]
            return arr

        # === Everything else ===================

        # Perform the dataspace selection.
        selection = sel.select(self.shape, args, dataset=self)

        if selection.nselect == 0:
            return numpy.zeros(selection.array_shape, dtype=new_dtype)

        arr = numpy.zeros(selection.array_shape, new_dtype, order='C')

        # Perform the actual read
        mspace = h5s.create_simple(selection.mshape)
        fspace = selection.id
        self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)

        # Patch up the output for NumPy
        if arr.shape == ():
            return arr[()]   # 0 dim array -> numpy scalar
        return arr

    @with_phil
    def __setitem__(self, args, val):
        """ Write to the HDF5 dataset from a Numpy array.

        NumPy's broadcasting rules are honored, for "simple" indexing
        (slices and integers).  For advanced indexing, the shapes must
        match.
        """
        args = args if isinstance(args, tuple) else (args,)

        # Sort field indices from the slicing
        names = tuple(x for x in args if isinstance(x, str))
        args = tuple(x for x in args if not isinstance(x, str))

        # Generally we try to avoid converting the arrays on the Python
        # side.  However, for compound literals this is unavoidable.
        vlen = h5t.check_vlen_dtype(self.dtype)
        if vlen is not None and vlen not in (bytes, str):
            try:
                val = numpy.asarray(val, dtype=vlen)
            except (ValueError, TypeError):
                try:
                    val = numpy.array([numpy.array(x, dtype=vlen)
                                       for x in val], dtype=self.dtype)
                except (ValueError, TypeError):
                    pass
            if vlen == val.dtype:
                if val.ndim > 1:
                    tmp = numpy.empty(shape=val.shape[:-1], dtype=object)
                    tmp.ravel()[:] = [i for i in val.reshape(
                        (product(val.shape[:-1]), val.shape[-1])
                    )]
                else:
                    tmp = numpy.array([None], dtype=object)
                    tmp[0] = val
                val = tmp
        elif self.dtype.kind == "O" or \
          (self.dtype.kind == 'V' and \
          (not isinstance(val, numpy.ndarray) or val.dtype.kind != 'V') and \
          (self.dtype.subdtype is None)):
            if len(names) == 1 and self.dtype.fields is not None:
                # Single field selected for write, from a non-array source
                if not names[0] in self.dtype.fields:
                    raise ValueError("No such field for indexing: %s" % names[0])
                dtype = self.dtype.fields[names[0]][0]
                cast_compound = True
            else:
                dtype = self.dtype
                cast_compound = False

            val = numpy.asarray(val, dtype=dtype.base, order='C')
            if cast_compound:
                val = val.view(numpy.dtype([(names[0], dtype)]))
                val = val.reshape(val.shape[:len(val.shape) - len(dtype.shape)])
        elif (self.dtype.kind == 'S'
              and (h5t.check_string_dtype(self.dtype).encoding == 'utf-8')
              and (find_item_type(val) is str)
        ):
            # Writing str objects to a fixed-length UTF-8 string dataset.
            # Numpy's normal conversion only handles ASCII characters, but
            # when the destination is UTF-8, we want to allow any unicode.
            # This *doesn't* handle numpy fixed-length unicode data ('U' dtype),
            # as HDF5 has no equivalent, and converting fixed length UTF-32
            # to variable length UTF-8 would obscure what's going on.
            str_array = numpy.asarray(val, order='C', dtype=object)
            val = numpy.array([
                s.encode('utf-8') for s in str_array.flat
            ], dtype=self.dtype).reshape(str_array.shape)
        else:
            # If the input data is already an array, let HDF5 do the conversion.
            # If it's a list or similar, don't make numpy guess a dtype for it.
            dt = None if isinstance(val, numpy.ndarray) else self.dtype.base
            val = numpy.asarray(val, order='C', dtype=dt)

        # Check for array dtype compatibility and convert
        if self.dtype.subdtype is not None:
            shp = self.dtype.subdtype[1]
            valshp = val.shape[-len(shp):]
            if valshp != shp:  # Last dimension has to match
                raise TypeError("When writing to array types, last N dimensions have to match (got %s, but should be %s)" % (valshp, shp,))
            mtype = h5t.py_create(numpy.dtype((val.dtype, shp)))
            mshape = val.shape[0:len(val.shape)-len(shp)]

        # Make a compound memory type if field-name slicing is required
        elif len(names) != 0:

            mshape = val.shape

            # Catch common errors
            if self.dtype.fields is None:
                raise TypeError("Illegal slicing argument (not a compound dataset)")
            mismatch = [x for x in names if x not in self.dtype.fields]
            if len(mismatch) != 0:
                mismatch = ", ".join('"%s"'%x for x in mismatch)
                raise ValueError("Illegal slicing argument (fields %s not in dataset type)" % mismatch)

            # Write non-compound source into a single dataset field
            if len(names) == 1 and val.dtype.fields is None:
                subtype = h5t.py_create(val.dtype)
                mtype = h5t.create(h5t.COMPOUND, subtype.get_size())
                mtype.insert(self._e(names[0]), 0, subtype)

            # Make a new source type keeping only the requested fields
            else:
                fieldnames = [x for x in val.dtype.names if x in names] # Keep source order
                mtype = h5t.create(h5t.COMPOUND, val.dtype.itemsize)
                for fieldname in fieldnames:
                    subtype = h5t.py_create(val.dtype.fields[fieldname][0])
                    offset = val.dtype.fields[fieldname][1]
                    mtype.insert(self._e(fieldname), offset, subtype)

        # Use mtype derived from array (let DatasetID.write figure it out)
        else:
            mshape = val.shape
            mtype = None

        # Perform the dataspace selection
        selection = sel.select(self.shape, args, dataset=self)

        if selection.nselect == 0:
            return

        # Broadcast scalars if necessary.
        # To avoid slow element-by-element broadcasting of a scalar into the
        # destination, we create an intermediate array of the same size as
        # the destination buffer, provided that size is reasonable.  A size
        # no larger than the dataset chunk size (if any) is considered
        # reasonable.
        # For a non-chunked destination dataset, or a selection larger than
        # the dataset chunk size, we fall back to an intermediate array the
        # size of the last dimension of the destination buffer.
        # The reasoning: it is fair to assume the creator of the dataset
        # chose a chunk size appropriate for the available memory.  If we
        # cannot afford an intermediate array the size of one chunk, the
        # user program has little hope of getting much further anyway.
        # Solves h5py issue #1067.
        if mshape == () and selection.array_shape != ():
            if self.dtype.subdtype is not None:
                raise TypeError("Scalar broadcasting is not supported for array dtypes")
            if self.chunks and (product(self.chunks) >= product(selection.array_shape)):
                val2 = numpy.empty(selection.array_shape, dtype=val.dtype)
            else:
                val2 = numpy.empty(selection.array_shape[-1], dtype=val.dtype)
            val2[...] = val
            val = val2
            mshape = val.shape

        # Perform the write, with broadcasting
        mspace = h5s.create_simple(selection.expand_shape(mshape))
        for fspace in selection.broadcast(mshape):
            self.id.write(mspace, fspace, val, mtype, dxpl=self._dxpl)

    def read_direct(self, dest, source_sel=None, dest_sel=None):
        """ Read data directly from HDF5 into an existing NumPy array.

        The destination array must be C-contiguous and writable.
        Selections must be the output of numpy.s_[].

        Broadcasting is supported for simple indexing.
        """
        with phil:
            if self._is_empty:
                raise TypeError("Empty datasets have no numpy representation")
            if source_sel is None:
                source_sel = sel.SimpleSelection(self.shape)
            else:
                source_sel = sel.select(self.shape, source_sel, self)  # for numpy.s_
            fspace = source_sel.id

            if dest_sel is None:
                dest_sel = sel.SimpleSelection(dest.shape)
            else:
                dest_sel = sel.select(dest.shape, dest_sel)

            for mspace in dest_sel.broadcast(source_sel.array_shape):
                self.id.read(mspace, fspace, dest, dxpl=self._dxpl)
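
    # Illustrative sketch (not part of this module): copying a slab of a
    # dataset into a preallocated NumPy buffer without an intermediate array
    # ('dset' and the shapes are hypothetical):
    #
    #     buf = numpy.empty((50, 100), dtype=dset.dtype)
    #     dset.read_direct(buf, source_sel=numpy.s_[0:50, 0:100],
    #                      dest_sel=numpy.s_[0:50, 0:100])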

    def write_direct(self, source, source_sel=None, dest_sel=None):
        """ Write data directly to HDF5 from a NumPy array.

        The source array must be C-contiguous.  Selections must be
        the output of numpy.s_[].

        Broadcasting is supported for simple indexing.
        """
        with phil:
            if self._is_empty:
                raise TypeError("Empty datasets cannot be written to")
            if source_sel is None:
                source_sel = sel.SimpleSelection(source.shape)
            else:
                source_sel = sel.select(source.shape, source_sel)  # for numpy.s_
            mspace = source_sel.id

            if dest_sel is None:
                dest_sel = sel.SimpleSelection(self.shape)
            else:
                dest_sel = sel.select(self.shape, dest_sel, self)

            for fspace in dest_sel.broadcast(source_sel.array_shape):
                self.id.write(mspace, fspace, source, dxpl=self._dxpl)
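
    # Illustrative sketch (not part of this module): writing a C-contiguous
    # array region straight into a dataset selection ('dset' and the shapes
    # are hypothetical):
    #
    #     src = numpy.ones((50, 100), dtype=dset.dtype)
    #     dset.write_direct(src, source_sel=numpy.s_[0:50, :],
    #                       dest_sel=numpy.s_[100:150, 0:100])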

    @with_phil
    def __array__(self, dtype=None, copy=None):
        """ Create a Numpy array containing the whole dataset.  DON'T THINK
        THIS MEANS DATASETS ARE INTERCHANGEABLE WITH ARRAYS.  For one thing,
        you have to read the whole dataset every time this method is called.
        """
        if copy is False:
            raise ValueError(
                f"Dataset.__array__ received {copy=} "
                "but memory allocation cannot be avoided on read"
            )
        arr = numpy.zeros(self.shape, dtype=self.dtype if dtype is None else dtype)

        # Special case for (0,)*-shape datasets
        if self.size == 0:
            return arr

        self.read_direct(arr)
        return arr

    @with_phil
    def __repr__(self):
        if not self:
            r = '<Closed HDF5 dataset>'
        else:
            if self.name is None:
                namestr = '("anonymous")'
            else:
                name = pp.basename(pp.normpath(self.name))
                namestr = '"%s"' % (name if name != '' else '/')
            r = '<HDF5 dataset %s: shape %s, type "%s">' % (
                namestr, self.shape, self.dtype.str
            )
        return r

    if hasattr(h5d.DatasetID, "refresh"):
        @with_phil
        def refresh(self):
            """ Refresh the dataset metadata by reloading from the file.

            This is part of the SWMR features and only exists when the HDF5
            library version is >= 1.9.178.
            """
            self._id.refresh()
            self._cache_props.clear()

    if hasattr(h5d.DatasetID, "flush"):
        @with_phil
        def flush(self):
            """ Flush the dataset data and metadata to the file.
            If the dataset is chunked, raw data chunks are written to the file.

            This is part of the SWMR features and only exists when the HDF5
            library version is >= 1.9.178.
            """
            self._id.flush()

    if vds_support:
        @property
        @with_phil
        def is_virtual(self):
            """Check if this is a virtual dataset"""
            return self._dcpl.get_layout() == h5d.VIRTUAL

        @with_phil
        def virtual_sources(self):
            """Get a list of the data mappings for a virtual dataset"""
            if not self.is_virtual:
                raise RuntimeError("Not a virtual dataset")
            dcpl = self._dcpl
            return [
                VDSmap(dcpl.get_virtual_vspace(j),
                       dcpl.get_virtual_filename(j),
                       dcpl.get_virtual_dsetname(j),
                       dcpl.get_virtual_srcspace(j))
                for j in range(dcpl.get_virtual_count())]

    @with_phil
    def make_scale(self, name=''):
        """Make this dataset an HDF5 dimension scale.

        You can then attach it to dimensions of other datasets like this::

            other_ds.dims[0].attach_scale(ds)

        You can optionally pass a name to associate with this scale.
        """
        h5ds.set_scale(self._id, self._e(name))

    @property
    @with_phil
    def is_scale(self):
        """Return ``True`` if this dataset is also a dimension scale.

        Return ``False`` otherwise.
        """
        return h5ds.is_scale(self._id)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/_hl/datatype.py0000644000175000017500000000301414350630273017176 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements high-level access to committed datatypes in the file.
"""

import posixpath as pp

from ..h5t import TypeID
from .base import HLObject, with_phil

class Datatype(HLObject):

    """
        Represents an HDF5 named datatype stored in a file.

        To store a datatype, simply assign it to a name in a group:

        >>> MyGroup["name"] = numpy.dtype("f")
        >>> named_type = MyGroup["name"]
        >>> assert named_type.dtype == numpy.dtype("f")
    """

    @property
    @with_phil
    def dtype(self):
        """Numpy dtype equivalent for this datatype"""
        return self.id.dtype

    @with_phil
    def __init__(self, bind):
        """ Create a new Datatype object by binding to a low-level TypeID.
        """
        if not isinstance(bind, TypeID):
            raise ValueError("%s is not a TypeID" % bind)
        super().__init__(bind)

    @with_phil
    def __repr__(self):
        if not self.id:
            return ""
        if self.name is None:
            namestr = '("anonymous")'
        else:
            name = pp.basename(pp.normpath(self.name))
            namestr = '"%s"' % (name if name != '' else '/')
        return '<HDF5 named type %s (dtype %s)>' % \
            (namestr, self.dtype.str)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0
h5py-3.13.0/h5py/_hl/dims.py0000644000175000017500000001176514532312724016333 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements support for HDF5 dimension scales.
"""

import warnings

from .. import h5ds
from ..h5py_warnings import H5pyDeprecationWarning
from . import base
from .base import phil, with_phil
from .dataset import Dataset


class DimensionProxy(base.CommonStateObject):

    """
        Represents an HDF5 "dimension".
    """

    @property
    @with_phil
    def label(self):
        """ Get or set the dimension scale label """
        return self._d(h5ds.get_label(self._id, self._dimension))

    @label.setter
    @with_phil
    def label(self, val):
        # pylint: disable=missing-docstring
        h5ds.set_label(self._id, self._dimension, self._e(val))

    @with_phil
    def __init__(self, id_, dimension):
        self._id = id_
        self._dimension = dimension

    @with_phil
    def __hash__(self):
        return hash((type(self), self._id, self._dimension))

    @with_phil
    def __eq__(self, other):
        return hash(self) == hash(other)

    @with_phil
    def __iter__(self):
        yield from self.keys()

    @with_phil
    def __len__(self):
        return h5ds.get_num_scales(self._id, self._dimension)

    @with_phil
    def __getitem__(self, item):

        if isinstance(item, int):
            scales = []
            h5ds.iterate(self._id, self._dimension, scales.append, 0)
            return Dataset(scales[item])

        else:
            def f(dsid):
                """ Iterate over scales to find a matching name """
                if h5ds.get_scale_name(dsid) == self._e(item):
                    return dsid

            res = h5ds.iterate(self._id, self._dimension, f, 0)
            if res is None:
                raise KeyError(item)
            return Dataset(res)

    def attach_scale(self, dset):
        """ Attach a scale to this dimension.

        Provide the Dataset of the scale you would like to attach.
        """
        with phil:
            h5ds.attach_scale(self._id, dset.id, self._dimension)

    def detach_scale(self, dset):
        """ Remove a scale from this dimension.

        Provide the Dataset of the scale you would like to remove.
        """
        with phil:
            h5ds.detach_scale(self._id, dset.id, self._dimension)
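
    # Illustrative sketch (not part of this module): a typical dimension-scale
    # workflow from user code (dataset names are hypothetical):
    #
    #     f["time"].make_scale("time")             # mark a dataset as a scale
    #     f["temperature"].dims[0].attach_scale(f["time"])
    #     f["temperature"].dims[0].label = "t"     # optional axis label
    #     f["temperature"].dims[0].detach_scale(f["time"])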

    def items(self):
        """ Get a list of (name, Dataset) pairs with all scales on this
        dimension.
        """
        with phil:
            scales = []

            # H5DSiterate raises an error if there are no dimension scales,
            # rather than iterating 0 times.  See #483.
            if len(self) > 0:
                h5ds.iterate(self._id, self._dimension, scales.append, 0)

            return [
                (self._d(h5ds.get_scale_name(x)), Dataset(x))
                for x in scales
                ]

    def keys(self):
        """ Get a list of names for the scales on this dimension. """
        with phil:
            return [key for (key, _) in self.items()]

    def values(self):
        """ Get a list of Dataset for scales on this dimension. """
        with phil:
            return [val for (_, val) in self.items()]

    @with_phil
    def __repr__(self):
        if not self._id:
            return ""
        return ('<"%s" dimension %d of HDF5 dataset at %s>'
               % (self.label, self._dimension, id(self._id)))


class DimensionManager(base.CommonStateObject):

    """
        Represents a collection of dimensions associated with a dataset.

        Like AttributeManager, an instance of this class is returned when
        accessing the ".dims" property on a Dataset.
    """

    @with_phil
    def __init__(self, parent):
        """ Private constructor.
        """
        self._id = parent.id

    @with_phil
    def __getitem__(self, index):
        """ Return a Dimension object
        """
        if index > len(self) - 1:
            raise IndexError('Index out of range')
        return DimensionProxy(self._id, index)

    @with_phil
    def __len__(self):
        """ Number of dimensions associated with the dataset. """
        return self._id.rank

    @with_phil
    def __iter__(self):
        """ Iterate over the dimensions. """
        for i in range(len(self)):
            yield self[i]

    @with_phil
    def __repr__(self):
        if not self._id:
            return ""
        return "" % id(self._id)

    def create_scale(self, dset, name=''):
        """ Create a new dimension, from an initial scale.

        Provide the dataset and a name for the scale.
        """
        warnings.warn("other_ds.dims.create_scale(ds, name) is deprecated. "
                      "Use ds.make_scale(name) instead.",
                      H5pyDeprecationWarning, stacklevel=2,
                     )
        dset.make_scale(name)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872510.0
h5py-3.13.0/h5py/_hl/files.py0000644000175000017500000006145514755054376016517 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements high-level support for HDF5 file objects.
"""

import inspect
import os
import sys
from warnings import warn

from .compat import filename_decode, filename_encode

from .base import phil, with_phil
from .group import Group
from .. import h5, h5f, h5p, h5i, h5fd, _objects
from .. import version

mpi = h5.get_config().mpi
ros3 = h5.get_config().ros3
direct_vfd = h5.get_config().direct_vfd
hdf5_version = version.hdf5_version_tuple[0:3]

swmr_support = True


libver_dict = {'earliest': h5f.LIBVER_EARLIEST, 'latest': h5f.LIBVER_LATEST,
               'v108': h5f.LIBVER_V18, 'v110': h5f.LIBVER_V110}
libver_dict_r = dict((y, x) for x, y in libver_dict.items())

if hdf5_version >= (1, 11, 4):
    libver_dict.update({'v112': h5f.LIBVER_V112})
    libver_dict_r.update({h5f.LIBVER_V112: 'v112'})

if hdf5_version >= (1, 13, 0):
    libver_dict.update({'v114': h5f.LIBVER_V114})
    libver_dict_r.update({h5f.LIBVER_V114: 'v114'})


def _set_fapl_mpio(plist, **kwargs):
    """Set file access property list for mpio driver"""
    if not mpi:
        raise ValueError("h5py was built without MPI support, can't use mpio driver")

    import mpi4py.MPI
    kwargs.setdefault('info', mpi4py.MPI.Info())
    plist.set_fapl_mpio(**kwargs)


def _set_fapl_fileobj(plist, **kwargs):
    """Set the Python file object driver in a file access property list"""
    plist.set_fileobj_driver(h5fd.fileobj_driver, kwargs.get('fileobj'))


_drivers = {
    'sec2': lambda plist, **kwargs: plist.set_fapl_sec2(**kwargs),
    'stdio': lambda plist, **kwargs: plist.set_fapl_stdio(**kwargs),
    'core': lambda plist, **kwargs: plist.set_fapl_core(**kwargs),
    'family': lambda plist, **kwargs: plist.set_fapl_family(
        memb_fapl=plist.copy(),
        **kwargs
    ),
    'mpio': _set_fapl_mpio,
    'fileobj': _set_fapl_fileobj,
    'split': lambda plist, **kwargs: plist.set_fapl_split(**kwargs),
}

if ros3:
    _drivers['ros3'] = lambda plist, **kwargs: plist.set_fapl_ros3(**kwargs)

if direct_vfd:
    _drivers['direct'] = lambda plist, **kwargs: plist.set_fapl_direct(**kwargs)  # noqa


def register_driver(name, set_fapl):
    """Register a custom driver.

    Parameters
    ----------
    name : str
        The name of the driver.
    set_fapl : callable[PropFAID, **kwargs] -> NoneType
        The function to set the fapl to use your custom driver.
    """
    _drivers[name] = set_fapl
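
# Illustrative sketch (not part of this module): registering and removing a
# custom driver name; 'my_fapl_setter' is a hypothetical callable that
# configures the property list (here it simply delegates to the sec2 VFD).
#
#     def my_fapl_setter(plist, **kwargs):
#         plist.set_fapl_sec2()
#
#     register_driver("mydriver", my_fapl_setter)
#     assert "mydriver" in registered_drivers()
#     unregister_driver("mydriver")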


def unregister_driver(name):
    """Unregister a custom driver.

    Parameters
    ----------
    name : str
        The name of the driver.
    """
    del _drivers[name]


def registered_drivers():
    """Return a frozenset of the names of all of the registered drivers.
    """
    return frozenset(_drivers)


def make_fapl(
    driver, libver=None, rdcc_nslots=None, rdcc_nbytes=None, rdcc_w0=None,
    locking=None, page_buf_size=None, min_meta_keep=0, min_raw_keep=0,
    alignment_threshold=1, alignment_interval=1, meta_block_size=None,
    **kwds
):
    """ Set up a file access property list """
    plist = h5p.create(h5p.FILE_ACCESS)

    if libver is not None:
        if libver in libver_dict:
            low = libver_dict[libver]
            high = h5f.LIBVER_LATEST
        else:
            low, high = (libver_dict[x] for x in libver)
    else:
        # we default to earliest
        low, high = h5f.LIBVER_EARLIEST, h5f.LIBVER_LATEST
    plist.set_libver_bounds(low, high)
    plist.set_alignment(alignment_threshold, alignment_interval)

    cache_settings = list(plist.get_cache())
    if rdcc_nslots is not None:
        cache_settings[1] = rdcc_nslots
    if rdcc_nbytes is not None:
        cache_settings[2] = rdcc_nbytes
    if rdcc_w0 is not None:
        cache_settings[3] = rdcc_w0
    plist.set_cache(*cache_settings)

    if page_buf_size:
        plist.set_page_buffer_size(int(page_buf_size), int(min_meta_keep),
                                   int(min_raw_keep))

    if meta_block_size is not None:
        plist.set_meta_block_size(int(meta_block_size))

    if locking is not None:
        if hdf5_version < (1, 12, 1) and (hdf5_version[:2] != (1, 10) or hdf5_version[2] < 7):
            raise ValueError(
                "HDF5 version >= 1.12.1 or 1.10.x >= 1.10.7 required for file locking.")

        if locking in ("false", False):
            plist.set_file_locking(False, ignore_when_disabled=False)
        elif locking in ("true", True):
            plist.set_file_locking(True, ignore_when_disabled=False)
        elif locking == "best-effort":
            plist.set_file_locking(True, ignore_when_disabled=True)
        else:
            raise ValueError(f"Unsupported locking value: {locking}")

    if driver is None or (driver == 'windows' and sys.platform == 'win32'):
        # Prevent swallowing unused key arguments
        if kwds:
            msg = "'{key}' is an invalid keyword argument for this function" \
                  .format(key=next(iter(kwds)))
            raise TypeError(msg)
        return plist

    try:
        set_fapl = _drivers[driver]
    except KeyError:
        raise ValueError('Unknown driver type "%s"' % driver)
    else:
        if driver == 'ros3':
            token = kwds.pop('session_token', None)
            set_fapl(plist, **kwds)
            if token:
                if hdf5_version < (1, 14, 2):
                    raise ValueError('HDF5 >= 1.14.2 required for AWS session token')
                plist.set_fapl_ros3_token(token)
        else:
            set_fapl(plist, **kwds)

    return plist
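
# Illustrative sketch (not part of this module): the cache and alignment
# keywords handled above are normally passed through h5py.File
# (file name and sizes are hypothetical):
#
#     f = h5py.File("example.h5", "r",
#                   rdcc_nbytes=16 * 1024**2,   # 16 MiB chunk cache
#                   rdcc_nslots=100003,         # number of hash-table slots
#                   rdcc_w0=0.75)               # chunk eviction policy weight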


def make_fcpl(track_order=False, fs_strategy=None, fs_persist=False,
              fs_threshold=1, fs_page_size=None):
    """ Set up a file creation property list """
    if track_order or fs_strategy:
        plist = h5p.create(h5p.FILE_CREATE)
        if track_order:
            plist.set_link_creation_order(
                h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)
            plist.set_attr_creation_order(
                h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)
        if fs_strategy:
            strategies = {
                'fsm': h5f.FSPACE_STRATEGY_FSM_AGGR,
                'page': h5f.FSPACE_STRATEGY_PAGE,
                'aggregate': h5f.FSPACE_STRATEGY_AGGR,
                'none': h5f.FSPACE_STRATEGY_NONE
            }
            fs_strat_num = strategies.get(fs_strategy, -1)
            if fs_strat_num == -1:
                raise ValueError("Invalid file space strategy type")

            plist.set_file_space_strategy(fs_strat_num, fs_persist, fs_threshold)
            if fs_page_size and fs_strategy == 'page':
                plist.set_file_space_page_size(int(fs_page_size))
    else:
        plist = None
    return plist
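
# Illustrative sketch (not part of this module): the creation-time options
# prepared here correspond to h5py.File keywords when creating a file
# (file name is hypothetical):
#
#     f = h5py.File("example.h5", "w",
#                   track_order=True,     # keep insertion order of links/attrs
#                   fs_strategy="page",   # paged free-space management
#                   fs_page_size=4096)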


def make_fid(name, mode, userblock_size, fapl, fcpl=None, swmr=False):
    """ Get a new FileID by opening or creating a file.
    Also validates mode argument."""

    if userblock_size is not None:
        if mode in ('r', 'r+'):
            raise ValueError("User block may only be specified "
                             "when creating a file")
        try:
            userblock_size = int(userblock_size)
        except (TypeError, ValueError):
            raise ValueError("User block size must be an integer")
        if fcpl is None:
            fcpl = h5p.create(h5p.FILE_CREATE)
        fcpl.set_userblock(userblock_size)

    if mode == 'r':
        flags = h5f.ACC_RDONLY
        if swmr and swmr_support:
            flags |= h5f.ACC_SWMR_READ
        fid = h5f.open(name, flags, fapl=fapl)
    elif mode == 'r+':
        fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
    elif mode in ['w-', 'x']:
        fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
    elif mode == 'w':
        fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
    elif mode == 'a':
        # Open in append mode (read/write).
        # If that fails, create a new file only if it won't clobber an
        # existing one (ACC_EXCL)
        try:
            fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
        # Not all drivers raise FileNotFoundError (commented those that do not)
        except FileNotFoundError if fapl.get_driver() in (
            h5fd.SEC2,
            h5fd.DIRECT if direct_vfd else -1,
            # h5fd.STDIO,
            # h5fd.CORE,
            h5fd.FAMILY,
            h5fd.WINDOWS,
            # h5fd.MPIO,
            # h5fd.MPIPOSIX,
            h5fd.fileobj_driver,
            h5fd.ROS3D if ros3 else -1,
        ) else OSError:
            fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
    else:
        raise ValueError("Invalid mode; must be one of r, r+, w, w-, x, a")

    try:
        if userblock_size is not None:
            existing_fcpl = fid.get_create_plist()
            if existing_fcpl.get_userblock() != userblock_size:
                raise ValueError("Requested userblock size (%d) does not match that of existing file (%d)" % (userblock_size, existing_fcpl.get_userblock()))
    except Exception as e:
        fid.close()
        raise e

    return fid


class File(Group):

    """
        Represents an HDF5 file.
    """

    @property
    def attrs(self):
        """ Attributes attached to this object """
        # hdf5 complains that a file identifier is an invalid location for an
        # attribute. Instead of self, pass the root group to AttributeManager:
        from . import attrs
        with phil:
            return attrs.AttributeManager(self['/'])

    @property
    @with_phil
    def filename(self):
        """File name on disk"""
        return filename_decode(h5f.get_name(self.id))

    @property
    @with_phil
    def driver(self):
        """Low-level HDF5 file driver used to open file"""
        drivers = {h5fd.SEC2: 'sec2',
                   h5fd.STDIO: 'stdio',
                   h5fd.CORE: 'core',
                   h5fd.FAMILY: 'family',
                   h5fd.WINDOWS: 'windows',
                   h5fd.MPIO: 'mpio',
                   h5fd.MPIPOSIX: 'mpiposix',
                   h5fd.fileobj_driver: 'fileobj'}
        if ros3:
            drivers[h5fd.ROS3D] = 'ros3'
        if direct_vfd:
            drivers[h5fd.DIRECT] = 'direct'
        return drivers.get(self.id.get_access_plist().get_driver(), 'unknown')

    @property
    @with_phil
    def mode(self):
        """ Python mode used to open file """
        write_intent = h5f.ACC_RDWR
        if swmr_support:
            write_intent |= h5f.ACC_SWMR_WRITE
        return 'r+' if self.id.get_intent() & write_intent else 'r'

    @property
    @with_phil
    def libver(self):
        """File format version bounds (2-tuple: low, high)"""
        bounds = self.id.get_access_plist().get_libver_bounds()
        return tuple(libver_dict_r[x] for x in bounds)

    @property
    @with_phil
    def userblock_size(self):
        """ User block size (in bytes) """
        fcpl = self.id.get_create_plist()
        return fcpl.get_userblock()

    @property
    @with_phil
    def meta_block_size(self):
        """ Meta block size (in bytes) """
        fapl = self.id.get_access_plist()
        return fapl.get_meta_block_size()

    if mpi:

        @property
        @with_phil
        def atomic(self):
            """ Set/get MPI-IO atomic mode
            """
            return self.id.get_mpi_atomicity()

        @atomic.setter
        @with_phil
        def atomic(self, value):
            # pylint: disable=missing-docstring
            self.id.set_mpi_atomicity(value)

    @property
    @with_phil
    def swmr_mode(self):
        """ Controls single-writer multiple-reader mode """
        return swmr_support and bool(self.id.get_intent() & (h5f.ACC_SWMR_READ | h5f.ACC_SWMR_WRITE))

    @swmr_mode.setter
    @with_phil
    def swmr_mode(self, value):
        # pylint: disable=missing-docstring
        if value:
            self.id.start_swmr_write()
        else:
            raise ValueError("It is not possible to forcibly switch SWMR mode off.")

    def __init__(self, name, mode='r', driver=None, libver=None, userblock_size=None, swmr=False,
                 rdcc_nslots=None, rdcc_nbytes=None, rdcc_w0=None, track_order=None,
                 fs_strategy=None, fs_persist=False, fs_threshold=1, fs_page_size=None,
                 page_buf_size=None, min_meta_keep=0, min_raw_keep=0, locking=None,
                 alignment_threshold=1, alignment_interval=1, meta_block_size=None, **kwds):
        """Create a new file object.

        See the h5py user guide for a detailed explanation of the options.

        name
            Name of the file on disk, or file-like object.  Note: for files
            created with the 'core' driver, HDF5 still requires this be
            non-empty.
        mode
            r        Readonly, file must exist (default)
            r+       Read/write, file must exist
            w        Create file, truncate if exists
            w- or x  Create file, fail if exists
            a        Read/write if exists, create otherwise
        driver
            Name of the driver to use.  Legal values are None (default,
            recommended), 'core', 'sec2', 'direct', 'stdio', 'mpio', 'ros3'.
        libver
            Library version bounds.  Supported values: 'earliest', 'v108',
            'v110', 'v112'  and 'latest'.
        userblock_size
            Desired size of user block.  Only allowed when creating a new
            file (mode w, w- or x).
        swmr
            Open the file in SWMR read mode. Only used when mode = 'r'.
        rdcc_nbytes
            Total size of the dataset chunk cache in bytes. The default size
            is 1024**2 (1 MiB) per dataset. Applies to all datasets unless individually changed.
        rdcc_w0
            The chunk preemption policy for all datasets.  This must be
            between 0 and 1 inclusive and indicates the weighting according to
            which chunks which have been fully read or written are penalized
            when determining which chunks to flush from cache.  A value of 0
            means fully read or written chunks are treated no differently than
            other chunks (the preemption is strictly LRU) while a value of 1
            means fully read or written chunks are always preempted before
            other chunks.  If your application only reads or writes data once,
            this can be safely set to 1.  Otherwise, this should be set lower
            depending on how often you re-read or re-write the same data.  The
            default value is 0.75. Applies to all datasets unless individually changed.
        rdcc_nslots
            The number of chunk slots in the raw data chunk cache for this
            file. Increasing this value reduces the number of cache collisions,
            but slightly increases the memory used. Due to the hashing
            strategy, this value should ideally be a prime number. As a rule of
            thumb, this value should be at least 10 times the number of chunks
            that can fit in rdcc_nbytes bytes. For maximum performance, this
            value should be set approximately 100 times that number of
            chunks. The default value is 521. Applies to all datasets unless individually changed.
        track_order
            Track dataset/group/attribute creation order under root group
            if True. If None use global default h5.get_config().track_order.
        fs_strategy
            The file space handling strategy to be used.  Only allowed when
            creating a new file (mode w, w- or x).  Defined as:
            "fsm"        FSM, Aggregators, VFD
            "page"       Paged FSM, VFD
            "aggregate"  Aggregators, VFD
            "none"       VFD
            If None use HDF5 defaults.
        fs_page_size
            File space page size in bytes. Only used when fs_strategy="page". If
            None use the HDF5 default (4096 bytes).
        fs_persist
            A boolean value to indicate whether free space should be persistent
            or not.  Only allowed when creating a new file.  The default value
            is False.
        fs_threshold
            The smallest free-space section size that the free space manager
            will track.  Only allowed when creating a new file.  The default
            value is 1.
        page_buf_size
            Page buffer size in bytes. Only allowed for HDF5 files created with
            fs_strategy="page". Must be a power of two value and greater or
            equal than the file space page size when creating the file. It is
            not used by default.
        min_meta_keep
            Minimum percentage of metadata to keep in the page buffer before
            allowing pages containing metadata to be evicted. Applicable only if
            page_buf_size is set. Default value is zero.
        min_raw_keep
            Minimum percentage of raw data to keep in the page buffer before
            allowing pages containing raw data to be evicted. Applicable only if
            page_buf_size is set. Default value is zero.
        locking
            The file locking behavior. Defined as:

            - False (or "false") --  Disable file locking
            - True (or "true")   --  Enable file locking
            - "best-effort"      --  Enable file locking but ignore some errors
            - None               --  Use HDF5 defaults

            .. warning::

                The HDF5_USE_FILE_LOCKING environment variable can override
                this parameter.

            Only available with HDF5 >= 1.12.1 or 1.10.x >= 1.10.7.

        alignment_threshold
            Together with ``alignment_interval``, this property ensures that
            any file object greater than or equal in size to the alignment
            threshold (in bytes) will be aligned on an address which is a
            multiple of alignment interval.

        alignment_interval
            This property should be used in conjunction with
            ``alignment_threshold``. See the description above. For more
            details, see
            https://portal.hdfgroup.org/display/HDF5/H5P_SET_ALIGNMENT

        meta_block_size
            Set the current minimum size, in bytes, of new metadata block allocations.
            See https://portal.hdfgroup.org/display/HDF5/H5P_SET_META_BLOCK_SIZE

        Additional keywords
            Passed on to the selected file driver.
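
        Example (illustrative; file and dataset names are arbitrary):

        >>> f = File('example.h5', 'w')          # create/truncate
        >>> dset = f.create_dataset('data', shape=(100,), dtype='f8')
        >>> f.close()
        >>> f = File('example.h5', 'r')          # read-only
        >>> f.mode
        'r'
        >>> f.close()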
        """
        if driver == 'ros3':
            if ros3:
                from urllib.parse import urlparse
                url = urlparse(name)
                if url.scheme == 's3':
                    aws_region = kwds.get('aws_region', b'').decode('ascii')
                    if len(aws_region) == 0:
                        raise ValueError('AWS region required for s3:// location')
                    name = f'https://s3.{aws_region}.amazonaws.com/{url.netloc}{url.path}'
                elif url.scheme not in ('https', 'http'):
                    raise ValueError(f'{name}: S3 location must begin with '
                                     'either "https://", "http://", or "s3://"')
            else:
                raise ValueError(
                    "h5py was built without ROS3 support, can't use ros3 driver")

        if locking is not None and hdf5_version < (1, 12, 1) and (
                hdf5_version[:2] != (1, 10) or hdf5_version[2] < 7):
            raise ValueError("HDF5 version >= 1.12.1 or 1.10.x >= 1.10.7 required for file locking options.")

        if isinstance(name, _objects.ObjectID):
            if fs_strategy:
                raise ValueError("Unable to set file space strategy of an existing file")

            with phil:
                fid = h5i.get_file_id(name)
        else:
            if hasattr(name, 'read') and hasattr(name, 'seek'):
                if driver not in (None, 'fileobj'):
                    raise ValueError("Driver must be 'fileobj' for file-like object if specified.")
                driver = 'fileobj'
                if kwds.get('fileobj', name) != name:
                    raise ValueError("Invalid value of 'fileobj' argument; "
                                     "must equal to file-like object if specified.")
                kwds.update(fileobj=name)
                name = repr(name).encode('ASCII', 'replace')
            else:
                name = filename_encode(name)

            if track_order is None:
                track_order = h5.get_config().track_order

            if fs_strategy and mode not in ('w', 'w-', 'x'):
                raise ValueError("Unable to set file space strategy of an existing file")

            if swmr and mode != 'r':
                warn(
                    "swmr=True only affects read ('r') mode. For swmr write "
                    "mode, set f.swmr_mode = True after opening the file.",
                    stacklevel=2,
                )

            with phil:
                fapl = make_fapl(driver, libver, rdcc_nslots, rdcc_nbytes, rdcc_w0,
                                 locking, page_buf_size, min_meta_keep, min_raw_keep,
                                 alignment_threshold=alignment_threshold,
                                 alignment_interval=alignment_interval,
                                 meta_block_size=meta_block_size,
                                 **kwds)
                fcpl = make_fcpl(track_order=track_order, fs_strategy=fs_strategy,
                                 fs_persist=fs_persist, fs_threshold=fs_threshold,
                                 fs_page_size=fs_page_size)
                fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)

            if isinstance(libver, tuple):
                self._libver = libver
            else:
                self._libver = (libver, 'latest')

        super().__init__(fid)

    _in_memory_file_counter = 0

    @classmethod
    @with_phil
    def in_memory(cls, file_image=None, **kwargs):
        """Create an HDF5 file in memory, without an underlying file

        file_image
            The initial file contents as bytes (or anything that supports the
            Python buffer interface). HDF5 takes a copy of this data.
        block_size
            Chunk size for new memory allocations (default 64 KiB).

        Other keyword arguments are like File(), although name, mode,
        driver and locking can't be passed.
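
        Example (illustrative):

        >>> f = File.in_memory()
        >>> dset = f.create_dataset('x', data=[1, 2, 3])
        >>> f.close()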
        """
        for k in ('driver', 'locking', 'backing_store'):
            if k in kwargs:
                raise TypeError(
                    f"File.in_memory() got an unexpected keyword argument {k!r}"
                )
        fcpl_kwargs = {}
        for k in inspect.signature(make_fcpl).parameters:
            if k in kwargs:
                fcpl_kwargs[k] = kwargs.pop(k)
        fcpl = make_fcpl(**fcpl_kwargs)

        fapl = make_fapl(driver="core", backing_store=False, **kwargs)
        if file_image:
            if fcpl_kwargs:
                kw = ', '.join(fcpl_kwargs)
                raise TypeError(f"{kw} parameters cannot be used with file_image")
            fapl.set_file_image(file_image)

        # We have to give HDF5 a filename, but it should never use it.
        # This is a hint both in memory, and in case a bug ever creates a file.
        # The name also needs to be different from any other open file;
        # we use a simple counter (protected by the 'phil' lock) for this.
        name = b"h5py_in_memory_nonfile_%d"  % cls._in_memory_file_counter
        cls._in_memory_file_counter += 1

        if file_image:
            fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
        else:
            fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
        return cls(fid)

    def close(self):
        """ Close the file.  All open objects become invalid """
        with phil:
            # Check that the file is still open, otherwise skip
            if self.id.valid:
                # We have to explicitly murder all open objects related to the file

                # Close file-resident objects first, then the files.
                # Otherwise we get errors in MPI mode.
                self.id._close_open_objects(h5f.OBJ_LOCAL | ~h5f.OBJ_FILE)
                self.id._close_open_objects(h5f.OBJ_LOCAL | h5f.OBJ_FILE)

                self.id.close()
                _objects.nonlocal_close()

    def flush(self):
        """ Tell the HDF5 library to flush its buffers.
        """
        with phil:
            h5f.flush(self.id)

    @with_phil
    def __enter__(self):
        return self

    @with_phil
    def __exit__(self, *args):
        if self.id:
            self.close()

    @with_phil
    def __repr__(self):
        if not self.id:
            r = '<Closed HDF5 file>'
        else:
            # Filename has to be forced to Unicode if it comes back bytes
            # Mode is always a "native" string
            filename = self.filename
            if isinstance(filename, bytes):  # Can't decode fname
                filename = filename.decode('utf8', 'replace')
            r = f'<HDF5 file "{os.path.basename(filename)}" (mode {self.mode})>'

        return r
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/h5py/_hl/filters.py0000644000175000017500000003436014755054371017053 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements support for HDF5 compression filters via the high-level
    interface.  The following types of filter are available:

    "gzip"
        Standard DEFLATE-based compression, at integer levels from 0 to 9.
        Built-in to all public versions of HDF5.  Use this if you want a
        decent-to-good ratio, good portability, and don't mind waiting.

    "lzf"
        Custom compression filter for h5py.  This filter is much, much faster
        than gzip (roughly 10x in compression vs. gzip level 4, and 3x faster
        in decompressing), but at the cost of a worse compression ratio.  Use
        this if you want cheap compression and portability is not a concern.

    "szip"
        Access to the HDF5 SZIP encoder.  SZIP is a non-mainstream compression
        format used in space science on integer and float datasets.  SZIP is
        subject to license requirements, which means the encoder is not
        guaranteed to be always available.  However, it is also much faster
        than gzip.

    The following constants in this module are also useful:

    decode
        Tuple of available filter names for decoding

    encode
        Tuple of available filter names for encoding
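
    Example (illustrative; file and dataset names are arbitrary): filters are
    requested through the high-level dataset creation keywords, e.g.

    >>> import h5py
    >>> with h5py.File('data.h5', 'w') as f:
    ...     dset = f.create_dataset('x', shape=(1000,), dtype='f4',
    ...                             compression='gzip', compression_opts=4,
    ...                             shuffle=True)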
"""
from collections.abc import Mapping
import operator

import numpy as np
from .base import product
from .compat import filename_encode
from .. import h5z, h5p, h5d, h5f


_COMP_FILTERS = {'gzip': h5z.FILTER_DEFLATE,
                'szip': h5z.FILTER_SZIP,
                'lzf': h5z.FILTER_LZF,
                'shuffle': h5z.FILTER_SHUFFLE,
                'fletcher32': h5z.FILTER_FLETCHER32,
                'scaleoffset': h5z.FILTER_SCALEOFFSET }
_FILL_TIME_ENUM = {'alloc': h5d.FILL_TIME_ALLOC,
                   'never': h5d.FILL_TIME_NEVER,
                   'ifset': h5d.FILL_TIME_IFSET,
                   }

DEFAULT_GZIP = 4
DEFAULT_SZIP = ('nn', 8)

def _gen_filter_tuples():
    """ Bootstrap function to figure out what filters are available. """
    dec = []
    enc = []
    for name, code in _COMP_FILTERS.items():
        if h5z.filter_avail(code):
            info = h5z.get_filter_info(code)
            if info & h5z.FILTER_CONFIG_ENCODE_ENABLED:
                enc.append(name)
            if info & h5z.FILTER_CONFIG_DECODE_ENABLED:
                dec.append(name)

    return tuple(dec), tuple(enc)

decode, encode = _gen_filter_tuples()

def _external_entry(entry):
    """ Check for and return a well-formed entry tuple for
    a call to h5p.set_external. """
    # We require only an iterable entry but also want to guard against
    # raising a confusing exception from the unpacking below when a str or
    # bytes is mistakenly passed as an entry.  We go further than that and accept
    # only a tuple, which allows simpler documentation and exception
    # messages.
    if not isinstance(entry, tuple):
        raise TypeError(
            "Each external entry must be a tuple of (name, offset, size)")
    name, offset, size = entry  # raise ValueError without three elements
    name = filename_encode(name)
    offset = operator.index(offset)
    size = operator.index(size)
    return (name, offset, size)

def _normalize_external(external):
    """ Normalize external into a well-formed list of tuples and return. """
    if external is None:
        return []
    try:
        # Accept a solitary name---a str, bytes, or os.PathLike acceptable to
        # filename_encode.
        return [_external_entry((external, 0, h5f.UNLIMITED))]
    except TypeError:
        pass
    # Check and rebuild each entry to be well-formed.
    return [_external_entry(entry) for entry in external]
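
# Illustrative note (file name is arbitrary): a bare name and an explicit
# tuple list normalize to the same thing, e.g. _normalize_external('raw.bin')
# and _normalize_external([('raw.bin', 0, h5f.UNLIMITED)]) both return
# [(b'raw.bin', 0, h5f.UNLIMITED)].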

class FilterRefBase(Mapping):
    """Base class for referring to an HDF5 and describing its options

    Your subclass must define filter_id, and may define a filter_options tuple.
    """
    filter_id = None
    filter_options = ()

    # Mapping interface supports using instances as **kwargs for compatibility
    # with older versions of h5py
    @property
    def _kwargs(self):
        return {
            'compression': self.filter_id,
            'compression_opts': self.filter_options
        }

    def __hash__(self):
        return hash((self.filter_id, self.filter_options))

    def __eq__(self, other):
        return (
            isinstance(other, FilterRefBase)
            and self.filter_id == other.filter_id
            and self.filter_options == other.filter_options
        )

    def __len__(self):
        return len(self._kwargs)

    def __iter__(self):
        return iter(self._kwargs)

    def __getitem__(self, item):
        return self._kwargs[item]

class Gzip(FilterRefBase):
    filter_id = h5z.FILTER_DEFLATE

    def __init__(self, level=DEFAULT_GZIP):
        self.filter_options = (level,)
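
# Illustrative usage (dataset/group names are arbitrary): an instance of a
# FilterRefBase subclass such as Gzip can be passed directly as the
# ``compression`` argument, e.g.
#
#     grp.create_dataset('x', shape=(100,), dtype='f4', compression=Gzip(6))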

def fill_dcpl(plist, shape, dtype, chunks, compression, compression_opts,
              shuffle, fletcher32, maxshape, scaleoffset, external,
              allow_unknown_filter=False, *, fill_time=None):
    """ Generate a dataset creation property list.

    Undocumented and subject to change without warning.
    """

    if shape is None or shape == ():
        shapetype = 'Empty' if shape is None else 'Scalar'
        if any((chunks, compression, compression_opts, shuffle, fletcher32,
                scaleoffset is not None)):
            raise TypeError(
                f"{shapetype} datasets don't support chunk/filter options"
            )
        if maxshape and maxshape != ():
            raise TypeError(f"{shapetype} datasets cannot be extended")
        return h5p.create(h5p.DATASET_CREATE)

    def rq_tuple(tpl, name):
        """ Check if chunks/maxshape match dataset rank """
        if tpl in (None, True):
            return
        try:
            tpl = tuple(tpl)
        except TypeError:
            raise TypeError('"%s" argument must be None or a sequence object' % name)
        if len(tpl) != len(shape):
            raise ValueError('"%s" must have same rank as dataset shape' % name)

    rq_tuple(chunks, 'chunks')
    rq_tuple(maxshape, 'maxshape')

    if compression is not None:
        if isinstance(compression, FilterRefBase):
            compression_opts = compression.filter_options
            compression = compression.filter_id

        if compression not in encode and not isinstance(compression, int):
            raise ValueError('Compression filter "%s" is unavailable' % compression)

        if compression == 'gzip':
            if compression_opts is None:
                gzip_level = DEFAULT_GZIP
            elif compression_opts in range(10):
                gzip_level = compression_opts
            else:
                raise ValueError("GZIP setting must be an integer from 0-9, not %r" % compression_opts)

        elif compression == 'lzf':
            if compression_opts is not None:
                raise ValueError("LZF compression filter accepts no options")

        elif compression == 'szip':
            if compression_opts is None:
                compression_opts = DEFAULT_SZIP

            err = "SZIP options must be a 2-tuple ('ec'|'nn', even integer 0-32)"
            try:
                szmethod, szpix = compression_opts
            except TypeError:
                raise TypeError(err)
            if szmethod not in ('ec', 'nn'):
                raise ValueError(err)
            if not (0 < szpix <= 32 and szpix % 2 == 0):
                raise ValueError(err)

    elif compression_opts is not None:
        # Can't specify just compression_opts by itself.
        raise TypeError("Compression method must be specified")

    if scaleoffset is not None:
        # scaleoffset must be an integer when it is not None or False, except
        # for integral data, for which scaleoffset == True is permissible
        # (will use SO_INT_MINBITS_DEFAULT)

        if scaleoffset < 0:
            raise ValueError('scale factor must be >= 0')

        if dtype.kind == 'f':
            if scaleoffset is True:
                raise ValueError('integer scaleoffset must be provided for '
                                 'floating point types')
        elif dtype.kind in ('u', 'i'):
            if scaleoffset is True:
                scaleoffset = h5z.SO_INT_MINBITS_DEFAULT
        else:
            raise TypeError('scale/offset filter only supported for integer '
                            'and floating-point types')

        # Scale/offset following fletcher32 in the filter chain will (almost?)
        # always trigger a read error, as most scale/offset settings are
        # lossy. Since fletcher32 must come first (see comment below) we
        # simply prohibit the combination of fletcher32 and scale/offset.
        if fletcher32:
            raise ValueError('fletcher32 cannot be used with potentially lossy'
                             ' scale/offset filter')

    external = _normalize_external(external)
    # End argument validation

    if (chunks is True) or (chunks is None and any((
            shuffle,
            fletcher32,
            compression,
            (maxshape and not len(external)),
            scaleoffset is not None,
    ))):
        chunks = guess_chunk(shape, maxshape, dtype.itemsize)

    if maxshape is True:
        maxshape = (None,)*len(shape)

    if chunks is not None:
        plist.set_chunk(chunks)

    if fill_time is not None:
        if (ft := _FILL_TIME_ENUM.get(fill_time)) is not None:
            plist.set_fill_time(ft)
        else:
            msg = ("fill_time must be one of the following choices: 'alloc', "
                   f"'never' or 'ifset', but it is {fill_time}.")
            raise ValueError(msg)

    # scale-offset must come before shuffle and compression
    if scaleoffset is not None:
        if dtype.kind in ('u', 'i'):
            plist.set_scaleoffset(h5z.SO_INT, scaleoffset)
        else: # dtype.kind == 'f'
            plist.set_scaleoffset(h5z.SO_FLOAT_DSCALE, scaleoffset)

    for item in external:
        plist.set_external(*item)

    if shuffle:
        plist.set_shuffle()

    if compression == 'gzip':
        plist.set_deflate(gzip_level)
    elif compression == 'lzf':
        plist.set_filter(h5z.FILTER_LZF, h5z.FLAG_OPTIONAL)
    elif compression == 'szip':
        opts = {'ec': h5z.SZIP_EC_OPTION_MASK, 'nn': h5z.SZIP_NN_OPTION_MASK}
        plist.set_szip(opts[szmethod], szpix)
    elif isinstance(compression, int):
        if not allow_unknown_filter and not h5z.filter_avail(compression):
            raise ValueError("Unknown compression filter number: %s" % compression)

        plist.set_filter(compression, h5z.FLAG_OPTIONAL, compression_opts)

    # `fletcher32` must come after `compression`, otherwise, if `compression`
    # is "szip" and the data is 64bit, the fletcher32 checksum will be wrong
    # (see GitHub issue #953).
    if fletcher32:
        plist.set_fletcher32()

    return plist

def get_filter_name(code):
    """
    Return the name of the compression filter for a given filter identifier.

    Undocumented and subject to change without warning.
    """
    filters = {h5z.FILTER_DEFLATE: 'gzip', h5z.FILTER_SZIP: 'szip',
               h5z.FILTER_SHUFFLE: 'shuffle', h5z.FILTER_FLETCHER32: 'fletcher32',
               h5z.FILTER_LZF: 'lzf', h5z.FILTER_SCALEOFFSET: 'scaleoffset'}
    return filters.get(code, str(code))

def get_filters(plist):
    """ Extract a dictionary of active filters from a DCPL, along with
    their settings.

    Undocumented and subject to change without warning.
    """

    pipeline = {}

    nfilters = plist.get_nfilters()

    for i in range(nfilters):

        code, _, vals, _ = plist.get_filter(i)

        if code == h5z.FILTER_DEFLATE:
            vals = vals[0] # gzip level

        elif code == h5z.FILTER_SZIP:
            mask, pixels = vals[0:2]
            if mask & h5z.SZIP_EC_OPTION_MASK:
                mask = 'ec'
            elif mask & h5z.SZIP_NN_OPTION_MASK:
                mask = 'nn'
            else:
                raise TypeError("Unknown SZIP configuration")
            vals = (mask, pixels)
        elif code == h5z.FILTER_LZF:
            vals = None
        else:
            if len(vals) == 0:
                vals = None

        pipeline[get_filter_name(code)] = vals

    return pipeline

CHUNK_BASE = 16*1024    # Multiplier by which chunks are adjusted
CHUNK_MIN = 8*1024      # Soft lower limit (8k)
CHUNK_MAX = 1024*1024   # Hard upper limit (1M)

def guess_chunk(shape, maxshape, typesize):
    """ Guess an appropriate chunk layout for a dataset, given its shape and
    the size of each element in bytes.  Will allocate chunks only as large
    as CHUNK_MAX.  Chunks are generally close to some power-of-2 fraction of
    each axis, slightly favoring bigger values for the last index.

    Undocumented and subject to change without warning.
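
    Example (illustrative):

    >>> guess_chunk((1_000_000,), None, 4)   # ~4 MB of 4-byte elements
    (7813,)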
    """
    # pylint: disable=unused-argument

    # For unlimited dimensions we have to guess 1024
    shape = tuple((x if x!=0 else 1024) for i, x in enumerate(shape))

    ndims = len(shape)
    if ndims == 0:
        raise ValueError("Chunks not allowed for scalar datasets.")

    chunks = np.array(shape, dtype='=f8')
    if not np.all(np.isfinite(chunks)):
        raise ValueError("Illegal value in chunk tuple")

    # Determine the optimal chunk size in bytes using a PyTables expression.
    # This is kept as a float.
    dset_size = product(chunks)*typesize
    target_size = CHUNK_BASE * (2**np.log10(dset_size/(1024.*1024)))

    if target_size > CHUNK_MAX:
        target_size = CHUNK_MAX
    elif target_size < CHUNK_MIN:
        target_size = CHUNK_MIN

    idx = 0
    while True:
        # Repeatedly loop over the axes, dividing them by 2.  Stop when:
        # 1a. We're smaller than the target chunk size, OR
        # 1b. We're within 50% of the target chunk size, AND
        #  2. The chunk is smaller than the maximum chunk size

        chunk_bytes = product(chunks)*typesize

        if (chunk_bytes < target_size or \
         abs(chunk_bytes-target_size)/target_size < 0.5) and \
         chunk_bytes < CHUNK_MAX:
            break

        if product(chunks) == 1:
            break  # Element size larger than CHUNK_MAX

        chunks[idx%ndims] = np.ceil(chunks[idx%ndims] / 2.0)
        idx += 1

    return tuple(int(x) for x in chunks)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/h5py/_hl/group.py0000644000175000017500000007362014755054371016541 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements support for high-level access to HDF5 groups.
"""

from contextlib import contextmanager
import posixpath as pp
import numpy


from .compat import filename_decode, filename_encode

from .. import h5, h5g, h5i, h5o, h5r, h5t, h5l, h5p
from . import base
from .base import HLObject, MutableMappingHDF5, phil, with_phil
from . import dataset
from . import datatype
from .vds import vds_support


class Group(HLObject, MutableMappingHDF5):

    """ Represents an HDF5 group.
    """

    def __init__(self, bind):
        """ Create a new Group object by binding to a low-level GroupID.
        """
        with phil:
            if not isinstance(bind, h5g.GroupID):
                raise ValueError("%s is not a GroupID" % bind)
            super().__init__(bind)

    _gcpl_crt_order = h5p.create(h5p.GROUP_CREATE)
    _gcpl_crt_order.set_link_creation_order(
        h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)
    _gcpl_crt_order.set_attr_creation_order(
        h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)

    def create_group(self, name, track_order=None):
        """ Create and return a new subgroup.

        Name may be absolute or relative.  Fails if the target name already
        exists.

        track_order
            Track dataset/group/attribute creation order under this group
            if True. If None use global default h5.get_config().track_order.
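
        Example (illustrative; group names are arbitrary):

        >>> sub = grp.create_group('subgroup')
        >>> ordered = grp.create_group('ordered', track_order=True)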
        """
        if track_order is None:
            track_order = h5.get_config().track_order

        with phil:
            name, lcpl = self._e(name, lcpl=True)
            gcpl = Group._gcpl_crt_order if track_order else None
            gid = h5g.create(self.id, name, lcpl=lcpl, gcpl=gcpl)
            return Group(gid)

    def create_dataset(self, name, shape=None, dtype=None, data=None, **kwds):
        """ Create a new HDF5 dataset

        name
            Name of the dataset (absolute or relative).  Provide None to make
            an anonymous dataset.
        shape
            Dataset shape.  Use "()" for scalar datasets.  Required if "data"
            isn't provided.
        dtype
            Numpy dtype or string.  If omitted, dtype('f') will be used.
            Required if "data" isn't provided; otherwise, overrides data
            array's dtype.
        data
            Provide data to initialize the dataset.  If used, you can omit
            shape and dtype arguments.

        Keyword-only arguments:

        chunks
            (Tuple or int) Chunk shape, or True to enable auto-chunking. Integers can
            be used for 1D shape.

        maxshape
            (Tuple or int) Make the dataset resizable up to this shape. Use None for
            axes within the tuple you want to be unlimited. Integers can be used for 1D shape.
            For 1D datasets with unlimited maxshape, a shape tuple of length 1 must be
            provided, ``(None,)``. Passing ``None`` sets ``maxshape`` to ``shape``, making the
            dataset un-resizable, which is the default.
        compression
            (String or int) Compression strategy.  Legal values are 'gzip',
            'szip', 'lzf'.  If an integer in range(10), this indicates gzip
            compression level. Otherwise, an integer indicates the number of a
            dynamically loaded compression filter.
        compression_opts
            Compression settings.  This is an integer for gzip, 2-tuple for
            szip, etc. If specifying a dynamically loaded compression filter
            number, this must be a tuple of values.
        scaleoffset
            (Integer) Enable scale/offset filter for (usually) lossy
            compression of integer or floating-point data. For integer
            data, the value of scaleoffset is the number of bits to
            retain (pass 0 to let HDF5 determine the minimum number of
            bits necessary for lossless compression). For floating point
            data, scaleoffset is the number of digits after the decimal
            place to retain; stored values thus have absolute error
            less than 0.5*10**(-scaleoffset).
        shuffle
            (T/F) Enable shuffle filter.
        fletcher32
            (T/F) Enable fletcher32 error detection. Not permitted in
            conjunction with the scale/offset filter.
        fillvalue
            (Scalar) Use this value for uninitialized parts of the dataset.
        track_times
            (T/F) Enable dataset creation timestamps.
        track_order
            (T/F) Track attribute creation order if True. If omitted use
            global default h5.get_config().track_order.
        external
            (Iterable of tuples) Sets the external storage property, thus
            designating that the dataset will be stored in one or more
            non-HDF5 files external to the HDF5 file.  Adds each tuple
            of (name, offset, size) to the dataset's list of external files.
            Each name must be a str, bytes, or os.PathLike; each offset and
            size, an integer.  If only a name is given instead of an iterable
            of tuples, it is equivalent to [(name, 0, h5py.h5f.UNLIMITED)].
        efile_prefix
            (String) External dataset file prefix for dataset access property
            list. Does not persist in the file.
        virtual_prefix
            (String) Virtual dataset file prefix for dataset access property
            list. Does not persist in the file.
        allow_unknown_filter
            (T/F) Do not check that the requested filter is available for use.
            This should only be used with ``write_direct_chunk``, where the caller
            compresses the data before handing it to h5py.
        rdcc_nbytes
            Total size of the dataset's chunk cache in bytes. The default size
            is 1024**2 (1 MiB).
        rdcc_w0
            The chunk preemption policy for this dataset.  This must be
            between 0 and 1 inclusive and indicates the weighting according to
            which chunks which have been fully read or written are penalized
            when determining which chunks to flush from cache.  A value of 0
            means fully read or written chunks are treated no differently than
            other chunks (the preemption is strictly LRU) while a value of 1
            means fully read or written chunks are always preempted before
            other chunks.  If your application only reads or writes data once,
            this can be safely set to 1.  Otherwise, this should be set lower
            depending on how often you re-read or re-write the same data.  The
            default value is 0.75.
        rdcc_nslots
            The number of chunk slots in the dataset's chunk cache. Increasing
            this value reduces the number of cache collisions, but slightly
            increases the memory used. Due to the hashing strategy, this value
            should ideally be a prime number. As a rule of thumb, this value
            should be at least 10 times the number of chunks that can fit in
            rdcc_nbytes bytes. For maximum performance, this value should be set
            approximately 100 times that number of chunks. The default value is
            521.
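
        Example (illustrative; names and parameters are arbitrary):

        >>> dset = grp.create_dataset('signal', shape=(1000, 64), dtype='f4',
        ...                           chunks=(100, 64), compression='gzip',
        ...                           maxshape=(None, 64))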
        """
        if 'track_order' not in kwds:
            kwds['track_order'] = h5.get_config().track_order

        if 'efile_prefix' in kwds:
            kwds['efile_prefix'] = self._e(kwds['efile_prefix'])

        if 'virtual_prefix' in kwds:
            kwds['virtual_prefix'] = self._e(kwds['virtual_prefix'])

        with phil:
            group = self
            if name:
                name = self._e(name)
                if b'/' in name.lstrip(b'/'):
                    parent_path, name = name.rsplit(b'/', 1)
                    group = self.require_group(parent_path)

            dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
            dset = dataset.Dataset(dsid)
            return dset

    if vds_support:
        def create_virtual_dataset(self, name, layout, fillvalue=None):
            """Create a new virtual dataset in this group.

            See virtual datasets in the docs for more information.

            name
                (str) Name of the new dataset

            layout
                (VirtualLayout) Defines the sources for the virtual dataset

            fillvalue
                The value to use where there is no data.

            """
            with phil:
                group = self

                if name:
                    name = self._e(name)
                    if b'/' in name.lstrip(b'/'):
                        parent_path, name = name.rsplit(b'/', 1)
                        group = self.require_group(parent_path)

                dsid = layout.make_dataset(
                    group, name=name, fillvalue=fillvalue,
                )
                dset = dataset.Dataset(dsid)

            return dset

        @contextmanager
        def build_virtual_dataset(
                self, name, shape, dtype, maxshape=None, fillvalue=None
        ):
            """Assemble a virtual dataset in this group.

            This is used as a context manager::

                with f.build_virtual_dataset('virt', (10, 1000), np.uint32) as layout:
                    layout[0] = h5py.VirtualSource('foo.h5', 'data', (1000,))

            name
                (str) Name of the new dataset
            shape
                (tuple) Shape of the dataset
            dtype
                A numpy dtype for data read from the virtual dataset
            maxshape
                (tuple, optional) Maximum dimensions if the dataset can grow.
                Use None for unlimited dimensions.
            fillvalue
                The value used where no data is available.
            """
            from .vds import VirtualLayout
            layout = VirtualLayout(shape, dtype, maxshape, self.file.filename)
            yield layout

            self.create_virtual_dataset(name, layout, fillvalue)

    def require_dataset(self, name, shape, dtype, exact=False, **kwds):
        """ Open a dataset, creating it if it doesn't exist.

        If keyword "exact" is False (default), an existing dataset must have
        the same shape and a conversion-compatible dtype to be returned.  If
        True, the shape and dtype must match exactly.

        If keyword "maxshape" is given, the maxshape and dtype must match
        instead.

        If any of the keywords "rdcc_nslots", "rdcc_nbytes", or "rdcc_w0" are
        given, they will be used to configure the dataset's chunk cache.

        Other dataset keywords (see create_dataset) may be provided, but are
        only used if a new dataset is to be created.

        Raises TypeError if an incompatible object already exists, or if the
        shape, maxshape or dtype don't match according to the above rules.
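
        Example (illustrative):

        >>> dset = grp.require_dataset('counts', shape=(100,), dtype='i8')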
        """
        if 'efile_prefix' in kwds:
            kwds['efile_prefix'] = self._e(kwds['efile_prefix'])

        if 'virtual_prefix' in kwds:
            kwds['virtual_prefix'] = self._e(kwds['virtual_prefix'])

        with phil:
            if name not in self:
                return self.create_dataset(name, *(shape, dtype), **kwds)

            if isinstance(shape, int):
                shape = (shape,)

            try:
                dsid = dataset.open_dset(self, self._e(name), **kwds)
                dset = dataset.Dataset(dsid)
            except KeyError:
                dset = self[name]
                raise TypeError("Incompatible object (%s) already exists" % dset.__class__.__name__)

            if shape != dset.shape:
                if "maxshape" not in kwds:
                    raise TypeError("Shapes do not match (existing %s vs new %s)" % (dset.shape, shape))
                elif kwds["maxshape"] != dset.maxshape:
                    raise TypeError("Max shapes do not match (existing %s vs new %s)" % (dset.maxshape, kwds["maxshape"]))

            if exact:
                if dtype != dset.dtype:
                    raise TypeError("Datatypes do not exactly match (existing %s vs new %s)" % (dset.dtype, dtype))
            elif not numpy.can_cast(dtype, dset.dtype):
                raise TypeError("Datatypes cannot be safely cast (existing %s vs new %s)" % (dset.dtype, dtype))

            return dset

    def create_dataset_like(self, name, other, **kwupdate):
        """ Create a dataset similar to `other`.

        name
            Name of the dataset (absolute or relative).  Provide None to make
            an anonymous dataset.
        other
            The dataset which the new dataset should mimic. All properties, such
            as shape, dtype, chunking, ... will be taken from it, but no data
            or attributes are copied.

        Any dataset keywords (see create_dataset) may be provided, including
        shape and dtype, in which case the provided values take precedence over
        those from `other`.
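
        Example (illustrative; dataset names are arbitrary):

        >>> copy = grp.create_dataset_like('x_copy', grp['x'])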
        """
        for k in ('shape', 'dtype', 'chunks', 'compression',
                  'compression_opts', 'scaleoffset', 'shuffle', 'fletcher32',
                  'fillvalue'):
            kwupdate.setdefault(k, getattr(other, k))
        # TODO: more elegant way to pass these (dcpl to create_dataset?)
        dcpl = other.id.get_create_plist()
        kwupdate.setdefault('track_times', dcpl.get_obj_track_times())
        kwupdate.setdefault('track_order', dcpl.get_attr_creation_order() > 0)

        # Special case: the maxshape property always exists, but if we pass it
        # to create_dataset, the new dataset will automatically get chunked
        # layout. So we copy it only if it is different from shape.
        if other.maxshape != other.shape:
            kwupdate.setdefault('maxshape', other.maxshape)

        return self.create_dataset(name, **kwupdate)

    def require_group(self, name):
        # TODO: support kwargs like require_dataset
        """Return a group, creating it if it doesn't exist.

        TypeError is raised if something with that name already exists that
        isn't a group.
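
        Example (illustrative):

        >>> results = f.require_group('results')   # created only if absent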
        """
        with phil:
            if name not in self:
                return self.create_group(name)
            grp = self[name]
            if not isinstance(grp, Group):
                raise TypeError("Incompatible object (%s) already exists" % grp.__class__.__name__)
            return grp

    @with_phil
    def __getitem__(self, name):
        """ Open an object in the file """

        if isinstance(name, h5r.Reference):
            oid = h5r.dereference(name, self.id)
            if oid is None:
                raise ValueError("Invalid HDF5 object reference")
        elif isinstance(name, (bytes, str)):
            oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
        else:
            raise TypeError("Accessing a group is done with bytes or str, "
                            "not {}".format(type(name)))

        otype = h5i.get_type(oid)
        if otype == h5i.GROUP:
            return Group(oid)
        elif otype == h5i.DATASET:
            return dataset.Dataset(oid, readonly=(self.file.mode == 'r'))
        elif otype == h5i.DATATYPE:
            return datatype.Datatype(oid)
        else:
            raise TypeError("Unknown object type")

    def get(self, name, default=None, getclass=False, getlink=False):
        """ Retrieve an item or other information.

        "name" given only:
            Return the item, or "default" if it doesn't exist

        "getclass" is True:
            Return the class of object (Group, Dataset, etc.), or "default"
            if nothing with that name exists

        "getlink" is True:
            Return HardLink, SoftLink or ExternalLink instances.  Return
            "default" if nothing with that name exists.

        "getlink" and "getclass" are True:
            Return HardLink, SoftLink and ExternalLink classes.  Return
            "default" if nothing with that name exists.

        Example:

        >>> cls = group.get('foo', getclass=True)
        >>> if cls == SoftLink:
        ...     print('"foo" is a soft link!')
        """
        # pylint: disable=arguments-differ

        with phil:
            if not (getclass or getlink):
                try:
                    return self[name]
                except KeyError:
                    return default

            if name not in self:
                return default

            elif getclass and not getlink:
                typecode = h5o.get_info(self.id, self._e(name), lapl=self._lapl).type

                try:
                    return {h5o.TYPE_GROUP: Group,
                            h5o.TYPE_DATASET: dataset.Dataset,
                            h5o.TYPE_NAMED_DATATYPE: datatype.Datatype}[typecode]
                except KeyError:
                    raise TypeError("Unknown object type")

            elif getlink:
                typecode = self.id.links.get_info(self._e(name), lapl=self._lapl).type

                if typecode == h5l.TYPE_SOFT:
                    if getclass:
                        return SoftLink
                    linkbytes = self.id.links.get_val(self._e(name), lapl=self._lapl)
                    return SoftLink(self._d(linkbytes))

                elif typecode == h5l.TYPE_EXTERNAL:
                    if getclass:
                        return ExternalLink
                    filebytes, linkbytes = self.id.links.get_val(self._e(name), lapl=self._lapl)
                    return ExternalLink(
                        filename_decode(filebytes), self._d(linkbytes)
                    )

                elif typecode == h5l.TYPE_HARD:
                    return HardLink if getclass else HardLink()

                else:
                    raise TypeError("Unknown link type")

    def __setitem__(self, name, obj):
        """ Add an object to the group.  The name must not already be in use.

        The action taken depends on the type of object assigned:

        Named HDF5 object (Dataset, Group, Datatype)
            A hard link is created at "name" which points to the
            given object.

        SoftLink or ExternalLink
            Create the corresponding link.

        Numpy ndarray
            The array is converted to a dataset object, with default
            settings (contiguous storage, etc.).

        Numpy dtype
            Commit a copy of the datatype as a named datatype in the file.

        Anything else
            Attempt to convert it to an ndarray and store it.  Scalar
            values are stored as scalar datasets. Raise ValueError if we
            can't understand the resulting array dtype.
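
        Example (illustrative; names are arbitrary):

        >>> import numpy as np
        >>> f['array'] = np.arange(10)        # stored as a new dataset
        >>> f['alias'] = f['array']           # hard link to an existing object
        >>> f['soft'] = SoftLink('/array')    # soft link by path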
        """
        with phil:
            name, lcpl = self._e(name, lcpl=True)

            if isinstance(obj, HLObject):
                h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)

            elif isinstance(obj, SoftLink):
                self.id.links.create_soft(name, self._e(obj.path), lcpl=lcpl, lapl=self._lapl)

            elif isinstance(obj, ExternalLink):
                fn = filename_encode(obj.filename)
                self.id.links.create_external(name, fn, self._e(obj.path),
                                              lcpl=lcpl, lapl=self._lapl)

            elif isinstance(obj, numpy.dtype):
                htype = h5t.py_create(obj, logical=True)
                htype.commit(self.id, name, lcpl=lcpl)

            else:
                ds = self.create_dataset(None, data=obj)
                h5o.link(ds.id, self.id, name, lcpl=lcpl)

    @with_phil
    def __delitem__(self, name):
        """ Delete (unlink) an item from this group. """
        self.id.unlink(self._e(name))

    @with_phil
    def __len__(self):
        """ Number of members attached to this group """
        return self.id.get_num_objs()

    @with_phil
    def __iter__(self):
        """ Iterate over member names """
        for x in self.id.__iter__():
            yield self._d(x)

    @with_phil
    def __reversed__(self):
        """ Iterate over member names in reverse order. """
        for x in self.id.__reversed__():
            yield self._d(x)

    @with_phil
    def __contains__(self, name):
        """ Test if a member name exists """
        if hasattr(h5g, "_path_valid"):
            if not self.id:
                return False
            return h5g._path_valid(self.id, self._e(name), self._lapl)
        return self._e(name) in self.id

    def copy(self, source, dest, name=None,
             shallow=False, expand_soft=False, expand_external=False,
             expand_refs=False, without_attrs=False):
        """Copy an object or group.

        The source can be a path, Group, Dataset, or Datatype object.  The
        destination can be either a path or a Group object.  The source and
        destinations need not be in the same file.

        If the source is a Group object, all objects contained in that group
        will be copied recursively.

        When the destination is a Group object, by default the target will
        be created in that group with its current name (basename of obj.name).
        You can override that by setting "name" to a string.

        There are various options which all default to "False":

         - shallow: copy only immediate members of a group.

         - expand_soft: expand soft links into new objects.

         - expand_external: expand external links into new objects.

         - expand_refs: copy objects that are pointed to by references.

         - without_attrs: copy object without copying attributes.

        Example:

        >>> f = File('myfile.hdf5', 'w')
        >>> f.create_group("MyGroup")
        >>> list(f.keys())
        ['MyGroup']
        >>> f.copy('MyGroup', 'MyCopy')
        >>> list(f.keys())
        ['MyGroup', 'MyCopy']

        """
        with phil:
            if isinstance(source, HLObject):
                source_path = '.'
            else:
                # Interpret source as a path relative to this group
                source_path = source
                source = self

            if isinstance(dest, Group):
                if name is not None:
                    dest_path = name
                elif source_path == '.':
                    dest_path = pp.basename(h5i.get_name(source.id))
                else:
                    # copy source into dest group: dest_name/source_name
                    dest_path = pp.basename(h5i.get_name(source[source_path].id))

            elif isinstance(dest, HLObject):
                raise TypeError("Destination must be path or Group object")
            else:
                # Interpret destination as a path relative to this group
                dest_path = dest
                dest = self

            flags = 0
            if shallow:
                flags |= h5o.COPY_SHALLOW_HIERARCHY_FLAG
            if expand_soft:
                flags |= h5o.COPY_EXPAND_SOFT_LINK_FLAG
            if expand_external:
                flags |= h5o.COPY_EXPAND_EXT_LINK_FLAG
            if expand_refs:
                flags |= h5o.COPY_EXPAND_REFERENCE_FLAG
            if without_attrs:
                flags |= h5o.COPY_WITHOUT_ATTR_FLAG
            if flags:
                copypl = h5p.create(h5p.OBJECT_COPY)
                copypl.set_copy_object(flags)
            else:
                copypl = None

            h5o.copy(source.id, self._e(source_path), dest.id, self._e(dest_path),
                     copypl, base.dlcpl)

    def move(self, source, dest):
        """ Move a link to a new location in the file.

        If "source" is a hard link, this effectively renames the object.  If
        "source" is a soft or external link, the link itself is moved, with its
        value unmodified.
        """
        with phil:
            if source == dest:
                return
            self.id.links.move(self._e(source), self.id, self._e(dest),
                               lapl=self._lapl, lcpl=self._lcpl)

    def visit(self, func):
        """ Recursively visit all names in this group and subgroups.

        Note: visit ignores soft and external links. To visit those, use
        visit_links.

        You supply a callable (function, method or callable object); it
        will be called exactly once for each link in this group and every
        group below it. Your callable must conform to the signature:

            func(<member name>) => <None or return value>

        Returning None continues iteration, returning anything else stops
        and immediately returns that value from the visit method.  No
        particular order of iteration within groups is guaranteed.

        Example:

        >>> # List the entire contents of the file
        >>> f = File("foo.hdf5")
        >>> list_of_names = []
        >>> f.visit(list_of_names.append)
        """
        with phil:
            def proxy(name):
                """ Call the function with the text name, not bytes """
                return func(self._d(name))
            return h5o.visit(self.id, proxy)

    def visititems(self, func):
        """ Recursively visit names and objects in this group.

        Note: visititems ignores soft and external links. To visit those, use
        visititems_links.

        You supply a callable (function, method or callable object); it
        will be called exactly once for each link in this group and every
        group below it. Your callable must conform to the signature:

            func(<member name>, <object>) => <return value>

        Returning None continues iteration, returning anything else stops
        and immediately returns that value from the visit method.  No
        particular order of iteration within groups is guaranteed.

        Example:

        # Get a list of all datasets in the file
        >>> mylist = []
        >>> def func(name, obj):
        ...     if isinstance(obj, Dataset):
        ...         mylist.append(name)
        ...
        >>> f = File('foo.hdf5')
        >>> f.visititems(func)
        """
        with phil:
            def proxy(name):
                """ Use the text name of the object, not bytes """
                name = self._d(name)
                return func(name, self[name])
            return h5o.visit(self.id, proxy)

    def visit_links(self, func):
        """ Recursively visit all names in this group and subgroups.
        Each link will be visited exactly once, regardless of its target.

        You supply a callable (function, method or callable object); it
        will be called exactly once for each link in this group and every
        group below it. Your callable must conform to the signature:

            func(<member name>) => <return value>

        Returning None continues iteration, returning anything else stops
        and immediately returns that value from the visit method.  No
        particular order of iteration within groups is guaranteed.

        Example:

        >>> # List the entire contents of the file
        >>> f = File("foo.hdf5")
        >>> list_of_names = []
        >>> f.visit_links(list_of_names.append)
        """
        with phil:
            def proxy(name):
                """ Call the function with the text name, not bytes """
                return func(self._d(name))
            return self.id.links.visit(proxy)

    def visititems_links(self, func):
        """ Recursively visit links in this group.
        Each link will be visited exactly once, regardless of its target.

        You supply a callable (function, method or callable object); it
        will be called exactly once for each link in this group and every
        group below it. Your callable must conform to the signature:

            func(<member name>, <link object>) => <return value>

        Returning None continues iteration, returning anything else stops
        and immediately returns that value from the visit method.  No
        particular order of iteration within groups is guaranteed.

        Example:

        # Get a list of all softlinks in the file
        >>> mylist = []
        >>> def func(name, link):
        ...     if isinstance(link, SoftLink):
        ...         mylist.append(name)
        ...
        >>> f = File('foo.hdf5')
        >>> f.visititems_links(func)
        """
        with phil:
            def proxy(name):
                """ Use the text name of the object, not bytes """
                name = self._d(name)
                return func(name, self.get(name, getlink=True))
            return self.id.links.visit(proxy)

    @with_phil
    def __repr__(self):
        if not self:
            r = u""
        else:
            namestr = (
                '"%s"' % self.name
            ) if self.name is not None else u"(anonymous)"
            r = '<HDF5 group %s (%d members)>' % (namestr, len(self))

        return r


class HardLink:

    """
        Represents a hard link in an HDF5 file.  Provided only so that
        Group.get works in a sensible way.  Has no other function.
    """

    pass


class SoftLink:

    """
        Represents a symbolic ("soft") link in an HDF5 file.  The path
        may be absolute or relative.  No checking is performed to ensure
        that the target actually exists.
    """

    @property
    def path(self):
        """ Soft link value.  Not guaranteed to be a valid path. """
        return self._path

    def __init__(self, path):
        self._path = str(path)

    def __repr__(self):
        return '<SoftLink to "%s">' % self.path
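
    # Illustrative usage sketch (comments only): assigning a SoftLink instance
    # to a group key creates the link; the target path is not checked at
    # creation time (file name here is hypothetical):
    #
    #     f = h5py.File("links.h5", "w")
    #     f.create_group("data")
    #     f["alias"] = SoftLink("/data")      # f["alias"] now resolves to /data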


class ExternalLink:

    """
        Represents an HDF5 external link.  Paths may be absolute or relative.
        No checking is performed to ensure either the target or file exists.
    """

    @property
    def path(self):
        """ Soft link path, i.e. the part inside the HDF5 file. """
        return self._path

    @property
    def filename(self):
        """ Path to the external HDF5 file in the filesystem. """
        return self._filename

    def __init__(self, filename, path):
        self._filename = filename_decode(filename_encode(filename))
        self._path = path
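
    # Illustrative usage sketch (comments only): assigning an ExternalLink to a
    # group key creates a link into another (hypothetical) file; it is resolved
    # only when accessed:
    #
    #     f["remote"] = ExternalLink("other.h5", "/path/to/dataset")
    #     f["remote"]        # opens other.h5 and resolves /path/to/dataset
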

    def __repr__(self):
        return '<ExternalLink to "%s" in file "%s">' % (self.path, self.filename)

        __init__(shape)   => Create a new selection on "shape"-tuple
        __getitem__(args) => Perform a selection with the range specified.
                             What args are allowed depends on the
                             particular subclass in use.

        id (read-only) =>      h5py.h5s.SpaceID instance
        shape (read-only) =>   The shape of the dataspace.
        mshape  (read-only) => The shape of the selection region.
                               Not guaranteed to fit within "shape", although
                               the total number of points is at most
                               product(shape).
        nselect (read-only) => Number of selected points.  Always equal to
                               product(mshape).

        broadcast(target_shape) => Return an iterable which yields dataspaces
                                   for read, based on target_shape.

        The base class represents "unshaped" selections (1-D).
    """

    def __init__(self, shape, spaceid=None):
        """ Create a selection.  Shape may be None if spaceid is given. """
        if spaceid is not None:
            self._id = spaceid
            self._shape = spaceid.shape
        else:
            shape = tuple(shape)
            self._shape = shape
            self._id = h5s.create_simple(shape, (h5s.UNLIMITED,)*len(shape))
            self._id.select_all()

    @property
    def id(self):
        """ SpaceID instance """
        return self._id

    @property
    def shape(self):
        """ Shape of whole dataspace """
        return self._shape

    @property
    def nselect(self):
        """ Number of elements currently selected """
        return self._id.get_select_npoints()

    @property
    def mshape(self):
        """ Shape of selection (always 1-D for this class) """
        return (self.nselect,)

    @property
    def array_shape(self):
        """Shape of array to read/write (always 1-D for this class)"""
        return self.mshape

    # expand_shape and broadcast only really make sense for SimpleSelection
    def expand_shape(self, source_shape):
        if product(source_shape) != self.nselect:
            raise TypeError("Broadcasting is not supported for point-wise selections")
        return source_shape

    def broadcast(self, source_shape):
        """ Get an iterable for broadcasting """
        if product(source_shape) != self.nselect:
            raise TypeError("Broadcasting is not supported for point-wise selections")
        yield self._id

    def __getitem__(self, args):
        raise NotImplementedError("This class does not support indexing")

class PointSelection(Selection):

    """
        Represents a point-wise selection.  You can supply sequences of
        points to the three methods append(), prepend() and set(), or
        instantiate it with a single boolean array using from_mask().
    """
    def __init__(self, shape, spaceid=None, points=None):
        super().__init__(shape, spaceid)
        if points is not None:
            self._perform_selection(points, h5s.SELECT_SET)

    def _perform_selection(self, points, op):
        """ Internal method which actually performs the selection """
        points = np.asarray(points, order='C', dtype='u8')
        if len(points.shape) == 1:
            points.shape = (1,points.shape[0])

        if self._id.get_select_type() != h5s.SEL_POINTS:
            op = h5s.SELECT_SET

        if len(points) == 0:
            self._id.select_none()
        else:
            self._id.select_elements(points, op)

    @classmethod
    def from_mask(cls, mask, spaceid=None):
        """Create a point-wise selection from a NumPy boolean array """
        if not (isinstance(mask, np.ndarray) and mask.dtype.kind == 'b'):
            raise TypeError("PointSelection.from_mask only works with bool arrays")

        points = np.transpose(mask.nonzero())
        return cls(mask.shape, spaceid, points=points)

    def append(self, points):
        """ Add the sequence of points to the end of the current selection """
        self._perform_selection(points, h5s.SELECT_APPEND)

    def prepend(self, points):
        """ Add the sequence of points to the beginning of the current selection """
        self._perform_selection(points, h5s.SELECT_PREPEND)

    def set(self, points):
        """ Replace the current selection with the given sequence of points"""
        self._perform_selection(points, h5s.SELECT_SET)
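
    # Illustrative sketch (comments only): from_mask() above converts a NumPy
    # boolean array into (N, rank) point coordinates via
    # np.transpose(mask.nonzero()), so a hypothetical mask with three True
    # cells selects exactly three points:
    #
    #     mask = np.zeros((4, 5), dtype=bool)
    #     mask[0, 1] = mask[2, 3] = mask[3, 0] = True
    #     sel = PointSelection.from_mask(mask)  # sel.nselect == 3, sel.mshape == (3,)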


class SimpleSelection(Selection):

    """ A single "rectangular" (regular) selection composed of only slices
        and integer arguments.  Can participate in broadcasting.
    """

    @property
    def mshape(self):
        """ Shape of current selection """
        return self._sel[1]

    @property
    def array_shape(self):
        scalar = self._sel[3]
        return tuple(x for x, s in zip(self.mshape, scalar) if not s)

    def __init__(self, shape, spaceid=None, hyperslab=None):
        super().__init__(shape, spaceid)
        if hyperslab is not None:
            self._sel = hyperslab
        else:
            # No hyperslab specified - select all
            rank = len(self.shape)
            self._sel = ((0,)*rank, self.shape, (1,)*rank, (False,)*rank)

    def expand_shape(self, source_shape):
        """Match the dimensions of an array to be broadcast to the selection

        The returned shape describes an array of the same size as the input
        shape, but with its dimensions padded with length-1 axes so that its
        rank matches the selection.

        E.g. with a dataset shape (10, 5, 4, 2), writing like this::

            ds[..., 0] = np.ones((5, 4))

        The source shape (5, 4) will expand to (1, 5, 4, 1).
        Then the broadcast method below repeats that chunk 10
        times to write to an effective shape of (10, 5, 4, 1).
        """
        start, count, step, scalar = self._sel

        rank = len(count)
        remaining_src_dims = list(source_shape)

        eshape = []
        for idx in range(1, rank + 1):
            if len(remaining_src_dims) == 0 or scalar[-idx]:  # Skip scalar axes
                eshape.append(1)
            else:
                t = remaining_src_dims.pop()
                if t == 1 or count[-idx] == t:
                    eshape.append(t)
                else:
                    raise TypeError("Can't broadcast %s -> %s" % (source_shape, self.array_shape))  # array shape

        if any([n > 1 for n in remaining_src_dims]):
            # All dimensions from target_shape should either have been popped
            # to match the selection shape, or be 1.
            raise TypeError("Can't broadcast %s -> %s" % (source_shape, self.array_shape))  # array shape

        # We have built eshape backwards, so now reverse it
        return tuple(eshape[::-1])


    def broadcast(self, source_shape):
        """ Return an iterator over target dataspaces for broadcasting.

        Follows the standard NumPy broadcasting rules against the current
        selection shape (self.mshape).
        """
        if self.shape == ():
            if product(source_shape) != 1:
                raise TypeError("Can't broadcast %s to scalar" % source_shape)
            self._id.select_all()
            yield self._id
            return

        start, count, step, scalar = self._sel

        rank = len(count)
        tshape = self.expand_shape(source_shape)

        chunks = tuple(x//y for x, y in zip(count, tshape))
        nchunks = product(chunks)

        if nchunks == 1:
            yield self._id
        else:
            sid = self._id.copy()
            sid.select_hyperslab((0,)*rank, tshape, step)
            for idx in range(nchunks):
                offset = tuple(x*y*z + s for x, y, z, s in zip(np.unravel_index(idx, chunks), tshape, step, start))
                sid.offset_simple(offset)
                yield sid
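
    # Worked example of the chunk arithmetic above (comments only, hypothetical
    # values): for count == (10, 5, 4, 1), an expanded source tshape of
    # (1, 5, 4, 1), step == (1, 1, 1, 1) and start == (0, 0, 0, 0):
    #
    #     chunks  = tuple(x // y for x, y in zip(count, tshape))   # (10, 1, 1, 1)
    #     nchunks = product(chunks)                                # 10
    #
    # so the same (1, 5, 4, 1) memory block is yielded 10 times, at offsets
    # (0, 0, 0, 0), (1, 0, 0, 0), ..., (9, 0, 0, 0).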


class FancySelection(Selection):

    """
        Implements advanced NumPy-style selection operations in addition to
        the standard slice-and-int behavior.

        Indexing arguments may be ints, slices, lists of indices, or
        per-axis (1D) boolean arrays.

        Broadcasting is not supported for these selections.
    """

    @property
    def mshape(self):
        return self._mshape

    @property
    def array_shape(self):
        return self._array_shape

    def __init__(self, shape, spaceid=None, mshape=None, array_shape=None):
        super().__init__(shape, spaceid)
        if mshape is None:
            mshape = self.shape
        if array_shape is None:
            array_shape = mshape
        self._mshape = mshape
        self._array_shape = array_shape

    def expand_shape(self, source_shape):
        if not source_shape == self.array_shape:
            raise TypeError("Broadcasting is not supported for complex selections")
        return source_shape

    def broadcast(self, source_shape):
        if not source_shape == self.array_shape:
            raise TypeError("Broadcasting is not supported for complex selections")
        yield self._id


def guess_shape(sid):
    """ Given a dataspace, try to deduce the shape of the selection.

    Returns one of:
        * A tuple with the selection shape, same length as the dataspace
        * A 1D selection shape for point-based and multiple-hyperslab selections
        * None, for unselected scalars and for NULL dataspaces
    """

    sel_class = sid.get_simple_extent_type()    # Dataspace class
    sel_type = sid.get_select_type()            # Flavor of selection in use

    if sel_class == h5s.NULL:
        # NULL dataspaces don't support selections
        return None

    elif sel_class == h5s.SCALAR:
        # NumPy has no way of expressing empty 0-rank selections, so we use None
        if sel_type == h5s.SEL_NONE: return None
        if sel_type == h5s.SEL_ALL: return tuple()

    elif sel_class != h5s.SIMPLE:
        raise TypeError("Unrecognized dataspace class %s" % sel_class)

    # We have a "simple" (rank >= 1) dataspace

    N = sid.get_select_npoints()
    rank = len(sid.shape)

    if sel_type == h5s.SEL_NONE:
        return (0,)*rank

    elif sel_type == h5s.SEL_ALL:
        return sid.shape

    elif sel_type == h5s.SEL_POINTS:
        # Like NumPy, point-based selections yield 1D arrays regardless of
        # the dataspace rank
        return (N,)

    elif sel_type != h5s.SEL_HYPERSLABS:
        raise TypeError("Unrecognized selection method %s" % sel_type)

    # We have a hyperslab-based selection

    if N == 0:
        return (0,)*rank

    bottomcorner, topcorner = (np.array(x) for x in sid.get_select_bounds())

    # Shape of full selection box
    boxshape = topcorner - bottomcorner + np.ones((rank,))

    def get_n_axis(sid, axis):
        """ Determine the number of elements selected along a particular axis.

        To do this, we "mask off" the axis by making a hyperslab selection
        which leaves only the first point along the axis.  For a 2D dataset
        with selection box shape (X, Y), for axis 1, this would leave a
        selection of shape (X, 1).  We count the number of points N_leftover
        remaining in the selection and compute the axis selection length by
        N_axis = N/N_leftover.
        """

        if boxshape[axis] == 1:
            return 1

        start = bottomcorner.copy()
        start[axis] += 1
        count = boxshape.copy()
        count[axis] -= 1

        # Throw away all points along this axis
        masked_sid = sid.copy()
        masked_sid.select_hyperslab(tuple(start), tuple(count), op=h5s.SELECT_NOTB)

        N_leftover = masked_sid.get_select_npoints()

        return N//N_leftover
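
    # Worked example of the masking trick above (comments only): for a single
    # hyperslab selection of shape (3, 4), N == 12; masking off axis 1 keeps
    # only the first column, so N_leftover == 3 and the axis length is
    # 12 // 3 == 4.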


    shape = tuple(get_n_axis(sid, x) for x in range(rank))

    if product(shape) != N:
        # This means multiple hyperslab selections are in effect,
        # so we fall back to a 1D shape
        return (N,)

    return shape
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/_hl/selections2.py0000644000175000017500000000524314350630273017623 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements a portion of the selection operations.
"""

import numpy as np
from .. import h5s

def read_dtypes(dataset_dtype, names):
    """ Returns a 2-tuple containing:

    1. Output dataset dtype
    2. Dtype containing HDF5-appropriate description of destination
    """

    if len(names) == 0:     # Not compound, or all fields needed
        format_dtype = dataset_dtype

    elif dataset_dtype.names is None:
        raise ValueError("Field names only allowed for compound types")

    elif any(x not in dataset_dtype.names for x in names):
        raise ValueError("Field does not appear in this type.")

    else:
        format_dtype = np.dtype([(name, dataset_dtype.fields[name][0]) for name in names])

    if len(names) == 1:
        # We don't preserve the compound field information if only one field
        # is explicitly selected.
        output_dtype = format_dtype.fields[names[0]][0]

    else:
        output_dtype = format_dtype

    return output_dtype, format_dtype
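
# Illustrative sketch of read_dtypes() (comments only, hypothetical dtype):
#
#     dt = np.dtype([('x', 'f4'), ('y', 'i8')])
#     read_dtypes(dt, ())          # -> (dt, dt): no field selection
#     read_dtypes(dt, ('x',))      # -> float32 scalar dtype, compound dtype with only 'x'
#     read_dtypes(dt, ('x', 'y'))  # -> compound dtype, same compound dtype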


def read_selections_scalar(dsid, args):
    """ Returns a 2-tuple containing:

    1. Output dataset shape
    2. HDF5 dataspace containing source selection.

    Works for scalar datasets.
    """

    if dsid.shape != ():
        raise RuntimeError("Illegal selection function for non-scalar dataset")

    if args == ():
        # This is a signal that an array scalar should be returned instead
        # of an ndarray with shape ()
        out_shape = None

    elif args == (Ellipsis,):
        out_shape = ()

    else:
        raise ValueError("Illegal slicing argument for scalar dataspace")

    source_space = dsid.get_space()
    source_space.select_all()

    return out_shape, source_space
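
# Behaviour sketch for the scalar-selection helper above (comments only): at
# the high level, dset[()] on a scalar dataset yields out_shape None, signalling
# that an array scalar should be returned, while dset[...] yields out_shape (),
# i.e. a 0-d ndarray; any other index raises ValueError.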

class ScalarReadSelection:

    """
        Implements slicing for scalar datasets.
    """

    def __init__(self, fspace, args):
        if args == ():
            self.mshape = None
        elif args == (Ellipsis,):
            self.mshape = ()
        else:
            raise ValueError("Illegal slicing argument for scalar dataspace")

        self.mspace = h5s.create(h5s.SCALAR)
        self.fspace = fspace

    def __iter__(self):
        self.mspace.select_all()
        yield self.fspace, self.mspace

def select_read(fspace, args):
    """ Top-level dispatch function for reading.

    At the moment, only supports reading from scalar datasets.
    """
    if fspace.shape == ():
        return ScalarReadSelection(fspace, args)

    raise NotImplementedError()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/_hl/vds.py0000644000175000017500000002224514507227212016165 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    High-level interface for creating HDF5 virtual datasets
"""

from copy import deepcopy as copy
from collections import namedtuple

import numpy as np

from .compat import filename_encode
from .datatype import Datatype
from .selections import SimpleSelection, select
from .. import h5d, h5p, h5s, h5t


class VDSmap(namedtuple('VDSmap', ('vspace', 'file_name',
                                   'dset_name', 'src_space'))):
    '''Defines a region in a virtual dataset mapping to part of a source dataset
    '''


vds_support = True


def _convert_space_for_key(space, key):
    """
    Converts the space with the given key. Mainly used to allow unlimited
    dimensions in virtual space selection.
    """
    key = key if isinstance(key, tuple) else (key,)
    type_code = space.get_select_type()

    # check for unlimited selections in case where selection is regular
    # hyperslab, which is the only allowed case for h5s.UNLIMITED to be
    # in the selection
    if type_code == h5s.SEL_HYPERSLABS and space.is_regular_hyperslab():
        rank = space.get_simple_extent_ndims()
        nargs = len(key)

        idx_offset = 0
        start, stride, count, block = space.get_regular_hyperslab()
        # iterate through keys. we ignore integer indices. if we get a
        # slice, we check for an h5s.UNLIMITED value as the stop
        # if we get an ellipsis, we offset index by (rank - nargs)
        for i, sl in enumerate(key):
            if isinstance(sl, slice):
                if sl.stop == h5s.UNLIMITED:
                    counts = list(count)
                    idx = i + idx_offset
                    counts[idx] = h5s.UNLIMITED
                    count = tuple(counts)
            elif sl is Ellipsis:
                idx_offset = rank - nargs

        space.select_hyperslab(start, count, stride, block)


class VirtualSource:
    """Source definition for virtual data sets.

    Instantiate this class to represent an entire source dataset, and then
    slice it to indicate which regions should be used in the virtual dataset.

    path_or_dataset
        The path to a file, or an h5py dataset. If a dataset is given,
        no other parameters are allowed, as the relevant values are taken from
        the dataset instead.
    name
        The name of the source dataset within the file.
    shape
        A tuple giving the shape of the dataset.
    dtype
        Numpy dtype or string.
    maxshape
        The source dataset is resizable up to this shape. Use None for
        axes you want to be unlimited.
    """
    def __init__(self, path_or_dataset, name=None,
                 shape=None, dtype=None, maxshape=None):
        from .dataset import Dataset
        if isinstance(path_or_dataset, Dataset):
            failed = {k: v
                      for k, v in
                      {'name': name, 'shape': shape,
                       'dtype': dtype, 'maxshape': maxshape}.items()
                      if v is not None}
            if failed:
                raise TypeError("If a Dataset is passed as the first argument "
                                "then no other arguments may be passed.  You "
                                "passed {failed}".format(failed=failed))
            ds = path_or_dataset
            path = ds.file.filename
            name = ds.name
            shape = ds.shape
            dtype = ds.dtype
            maxshape = ds.maxshape
        else:
            path = path_or_dataset
            if name is None:
                raise TypeError("The name parameter is required when "
                                "specifying a source by path")
            if shape is None:
                raise TypeError("The shape parameter is required when "
                                "specifying a source by path")
            elif isinstance(shape, int):
                shape = (shape,)

            if isinstance(maxshape, int):
                maxshape = (maxshape,)

        self.path = path
        self.name = name
        self.dtype = dtype

        if maxshape is None:
            self.maxshape = shape
        else:
            self.maxshape = tuple([h5s.UNLIMITED if ix is None else ix
                                   for ix in maxshape])
        self.sel = SimpleSelection(shape)
        self._all_selected = True

    @property
    def shape(self):
        return self.sel.array_shape

    def __getitem__(self, key):
        if not self._all_selected:
            raise RuntimeError("VirtualSource objects can only be sliced once.")
        tmp = copy(self)
        tmp.sel = select(self.shape, key, dataset=None)
        _convert_space_for_key(tmp.sel.id, key)
        tmp._all_selected = False
        return tmp

class VirtualLayout:
    """Object for building a virtual dataset.

    Instantiate this class to define a virtual dataset, assign to slices of it
    (using VirtualSource objects), and then pass it to
    group.create_virtual_dataset() to add the virtual dataset to a file.

    This class does not allow access to the data; the virtual dataset must
    be created in a file before it can be used.

    shape
        A tuple giving the shape of the dataset.
    dtype
        Numpy dtype or string.
    maxshape
        The virtual dataset is resizable up to this shape. Use None for
        axes you want to be unlimited.
    filename
        The name of the destination file, if known in advance. Mappings from
        data in the same file will be stored with filename '.', allowing the
        file to be renamed later.
    """
    def __init__(self, shape, dtype, maxshape=None, filename=None):
        self.shape = (shape,) if isinstance(shape, int) else shape
        self.dtype = dtype
        self.maxshape = (maxshape,) if isinstance(maxshape, int) else maxshape
        self._filename = filename
        self._src_filenames = set()
        self.dcpl = h5p.create(h5p.DATASET_CREATE)

    def __setitem__(self, key, source):
        sel = select(self.shape, key, dataset=None)
        _convert_space_for_key(sel.id, key)
        src_filename = self._source_file_name(source.path, self._filename)

        self.dcpl.set_virtual(
            sel.id, src_filename, source.name.encode('utf-8'), source.sel.id
        )
        if self._filename is None:
            self._src_filenames.add(src_filename)

    @staticmethod
    def _source_file_name(src_filename, dst_filename) -> bytes:
        src_filename = filename_encode(src_filename)
        if dst_filename and (src_filename == filename_encode(dst_filename)):
            # use relative path if the source dataset is in the same
            # file, in order to keep the virtual dataset valid in case
            # the file is renamed.
            return b'.'
        return filename_encode(src_filename)

    def _get_dcpl(self, dst_filename):
        """Get the property list containing virtual dataset mappings

        If the destination filename wasn't known when the VirtualLayout was
        created, it is handled here.
        """
        dst_filename = filename_encode(dst_filename)
        if self._filename is not None:
            # filename was known in advance; check dst_filename matches
            if dst_filename != filename_encode(self._filename):
                raise Exception(f"{dst_filename!r} != {self._filename!r}")
            return self.dcpl

        # destination file not known in advance
        if dst_filename in self._src_filenames:
            # At least 1 source file is the same as the destination file,
            # but we didn't know this when making the mapping. Copy the mappings
            # to a new property list, replacing the dest filename with '.'
            new_dcpl = h5p.create(h5p.DATASET_CREATE)
            for i in range(self.dcpl.get_virtual_count()):
                src_filename = self.dcpl.get_virtual_filename(i)
                new_dcpl.set_virtual(
                    self.dcpl.get_virtual_vspace(i),
                    self._source_file_name(src_filename, dst_filename),
                    self.dcpl.get_virtual_dsetname(i).encode('utf-8'),
                    self.dcpl.get_virtual_srcspace(i),
                )
            return new_dcpl
        else:
            return self.dcpl  # Mappings are all from other files

    def make_dataset(self, parent, name, fillvalue=None):
        """ Return a new low-level dataset identifier for a virtual dataset """
        dcpl = self._get_dcpl(parent.file.filename)

        if fillvalue is not None:
            dcpl.set_fill_value(np.array([fillvalue]))

        maxshape = self.maxshape
        if maxshape is not None:
            maxshape = tuple(m if m is not None else h5s.UNLIMITED for m in maxshape)

        virt_dspace = h5s.create_simple(self.shape, maxshape)

        if isinstance(self.dtype, Datatype):
            # Named types are used as-is
            tid = self.dtype.id
        else:
            dtype = np.dtype(self.dtype)
            tid = h5t.py_create(dtype, logical=1)

        return h5d.create(parent.id, name=name, tid=tid, space=virt_dspace,
                          dcpl=dcpl)
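

# Illustrative end-to-end sketch of the VDS API above (comments only; file and
# dataset names are hypothetical):
#
#     layout = VirtualLayout(shape=(4, 100), dtype='i4')
#     for i in range(4):
#         vsource = VirtualSource(f"data{i}.h5", "data", shape=(100,))
#         layout[i] = vsource
#     with h5py.File("vds.h5", "w") as f:
#         f.create_virtual_dataset("combined", layout, fillvalue=-1)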
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1705055674.0
h5py-3.13.0/h5py/_locks.pxi0000644000175000017500000001076614550212672016260 0ustar00takluyvertakluyvercdef class BogoLock:

    def __enter__(self):
        pass

    def __exit__(self, *args):
        pass

## {{{ http://code.activestate.com/recipes/577336/ (r3)
from cpython cimport pythread
from cpython.exc cimport PyErr_NoMemory

cdef class FastRLock:
    """Fast, re-entrant locking.

    Under uncongested conditions, the lock is never acquired but only
    counted.  Only when a second thread comes in and notices that the
    lock is needed, it acquires the lock and notifies the first thread
    to release it when it's done.  This is all made possible by the
    wonderful GIL.
    """
    cdef pythread.PyThread_type_lock _real_lock
    cdef long _owner            # ID of thread owning the lock
    cdef int _count             # re-entry count
    cdef int _pending_requests  # number of pending requests for real lock
    cdef bint _is_locked        # whether the real lock is acquired

    def __cinit__(self):
        self._owner = -1
        self._count = 0
        self._is_locked = False
        self._pending_requests = 0
        self._real_lock = pythread.PyThread_allocate_lock()
        if self._real_lock is NULL:
            PyErr_NoMemory()

    def __dealloc__(self):
        if self._real_lock is not NULL:
            pythread.PyThread_free_lock(self._real_lock)
            self._real_lock = NULL

    def acquire(self, bint blocking=True):
        return lock_lock(self, pythread.PyThread_get_thread_ident(), blocking)

    def release(self):
        if self._owner != pythread.PyThread_get_thread_ident():
            raise RuntimeError("cannot release un-acquired lock")
        unlock_lock(self)

    # compatibility with threading.RLock

    def __enter__(self):
        # self.acquire()
        return lock_lock(self, pythread.PyThread_get_thread_ident(), True)

    def __exit__(self, t, v, tb):
        # self.release()
        if self._owner != pythread.PyThread_get_thread_ident():
            raise RuntimeError("cannot release un-acquired lock")
        unlock_lock(self)

    def _is_owned(self):
        return self._owner == pythread.PyThread_get_thread_ident()
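
    # Illustrative usage sketch (comments only): FastRLock mirrors
    # threading.RLock, so the same thread may nest acquisitions:
    #
    #     lock = FastRLock()
    #     with lock:
    #         with lock:      # re-entrant acquisition from the same thread
    #             pass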


cdef inline bint lock_lock(FastRLock lock, long current_thread, bint blocking) noexcept nogil:
    # Note that this function *must* hold the GIL when being called.
    # We just use 'nogil' in the signature to make sure that no Python
    # code execution slips in that might free the GIL

    if lock._count:
        # locked! - by myself?
        if current_thread == lock._owner:
            lock._count += 1
            return 1
    elif not lock._pending_requests:
        # not locked, not requested - go!
        lock._owner = current_thread
        lock._count = 1
        return 1
    # need to get the real lock
    return _acquire_lock(
        lock, current_thread,
        pythread.WAIT_LOCK if blocking else pythread.NOWAIT_LOCK)

cdef bint _acquire_lock(FastRLock lock, long current_thread, int wait) noexcept nogil:
    # Note that this function *must* hold the GIL when being called.
    # We just use 'nogil' in the signature to make sure that no Python
    # code execution slips in that might free the GIL

    if not lock._is_locked and not lock._pending_requests:
        # someone owns it but didn't acquire the real lock - do that
        # now and tell the owner to release it when done. Note that we
        # do not release the GIL here as we must absolutely be the one
        # who acquires the lock now.
        if not pythread.PyThread_acquire_lock(lock._real_lock, wait):
            return 0
        #assert not lock._is_locked
        lock._is_locked = True
    lock._pending_requests += 1
    with nogil:
        # wait for the lock owning thread to release it
        locked = pythread.PyThread_acquire_lock(lock._real_lock, wait)
    lock._pending_requests -= 1
    #assert not lock._is_locked
    #assert lock._count == 0
    if not locked:
        return 0
    lock._is_locked = True
    lock._owner = current_thread
    lock._count = 1
    return 1

cdef inline void unlock_lock(FastRLock lock) noexcept nogil:
    # Note that this function *must* hold the GIL when being called.
    # We just use 'nogil' in the signature to make sure that no Python
    # code execution slips in that might free the GIL

    #assert lock._owner == pythread.PyThread_get_thread_ident()
    #assert lock._count > 0
    lock._count -= 1
    if lock._count == 0:
        lock._owner = -1
        if lock._is_locked:
            pythread.PyThread_release_lock(lock._real_lock)
            lock._is_locked = False
## end of http://code.activestate.com/recipes/577336/ }}}
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/_objects.pxd0000644000175000017500000000137014045746670016571 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

cdef class ObjectID:

    cdef object __weakref__
    cdef readonly hid_t id
    cdef public int locked              # Cannot be closed, explicitly or auto
    cdef object _hash
    cdef size_t _pyid

# Convenience functions
cdef hid_t pdefault(ObjectID pid)
cdef int is_h5py_obj_valid(ObjectID obj)

# Inheritance scheme (for top-level cimport and import statements):
#
# _objects, _proxy, h5fd, h5z
# h5i, h5r, utils
# _conv, h5t, h5s
# h5p
# h5d, h5a, h5f, h5g
# h5l
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/_objects.pyx0000644000175000017500000002351714675110407016615 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Implements ObjectID base class.
"""

include "_locks.pxi"
from .defs cimport *

DEF USE_LOCKING = True
DEF DEBUG_ID = False

# --- Locking code ------------------------------------------------------------
#
# Most of the functions and methods in h5py spend all their time in the Cython
# code for h5py, and hold the GIL until they exit.  However, in some cases,
# particularly when calling native-Python functions from the stdlib or
# elsewhere, the GIL can be released mid-function and another thread can
# call into the API at the same time.
#
# This is bad news, especially for the object identifier registry.
#
# We serialize all access to the low-level API with a single recursive lock.
# Only one thread at a time can call any low-level routine.
#
# Note that this doesn't affect other applications like PyTables which are
# interacting with HDF5.  In the case of the identifier registry, this means
# that it's possible for an identifier to go stale and for PyTables to reuse
# it before we've had a chance to set obj.id = 0.  For this reason, h5py is
# advertised for EITHER multithreaded use OR use alongside PyTables/NetCDF4,
# but not both at the same time.

IF USE_LOCKING:
    cdef FastRLock _phil = FastRLock()
ELSE:
    cdef BogoLock _phil = BogoLock()

# Python alias for access from other modules
phil = _phil

def with_phil(func):
    """ Locking decorator """

    import functools

    def wrapper(*args, **kwds):
        with _phil:
            return func(*args, **kwds)

    functools.update_wrapper(wrapper, func)
    return wrapper

# --- End locking code --------------------------------------------------------


# --- Registry code ----------------------------------------------------------
#
# With HDF5 1.8, when an identifier is closed its value may be immediately
# re-used for a new object.  This leads to odd behavior when an ObjectID
# with an old identifier is left hanging around... For example, a GroupID
# belonging to a closed file could suddenly "mutate" into one from a
# different file.
#
# There are two ways such "zombie" identifiers can arise.  The first is if
# an identifier is explicitly closed via obj._close().  For this case we
# set obj.id = 0.  The second is that certain actions in HDF5 can *remotely*
# invalidate identifiers.  For example, closing a file opened with
# H5F_CLOSE_STRONG will also close open groups, etc.
#
# When such a "nonlocal" event occurs, we have to examine all live ObjectID
# instances, and manually set obj.id = 0.  That's what the function
# nonlocal_close() does.  We maintain an inventory of all live ObjectID
# instances in the registry dict.  Then, when a nonlocal event occurs,
# nonlocal_close() walks through the inventory and sets the stale identifiers
# to 0.  It must be explicitly called; currently, this happens in FileID.close()
# as well as the high-level File.close().
#
# The entire low-level API is now explicitly locked, so only one thread at a
# time is taking actions that may create or invalidate identifiers. See the
# "locking code" section above.
#
# See also __cinit__ and __dealloc__ for class ObjectID.

import gc
import weakref
import warnings

# Will map id(obj) -> weakref(obj), where obj is an ObjectID instance.
# Objects are added only via ObjectID.__cinit__, and removed only by
# ObjectID.__dealloc__.
cdef dict registry = {}

@with_phil
def print_reg():
    import h5py
    refs = registry.values()
    objs = [r() for r in refs]

    none = len([x for x in objs if x is None])
    files = len([x for x in objs if isinstance(x, h5py.h5f.FileID)])
    groups = len([x for x in objs if isinstance(x, h5py.h5g.GroupID)])

    print("REGISTRY: %d | %d None | %d FileID | %d GroupID" % (len(objs), none, files, groups))


@with_phil
def nonlocal_close():
    """ Find dead ObjectIDs and set their integer identifiers to 0.
    """
    cdef ObjectID obj
    cdef list reg_ids

    # create a cached list of ids whilst the gc is disabled to avoid hitting
    # the cyclic gc while iterating through the registry dict
    gc_was_enabled = gc.isenabled()
    gc.disable()
    try:
        reg_ids = list(registry)
    finally:
        if gc_was_enabled:
            gc.enable()

    for python_id in reg_ids:
        ref = registry.get(python_id)

        # registry dict has changed underneath us, skip to next item
        if ref is None:
            continue

        obj = ref()

        # Object died while walking the registry list, presumably because
        # the cyclic GC kicked in.
        if obj is None:
            continue

        # Locked objects are immortal, as they generally are provided by
        # the HDF5 library itself (property list classes, etc.).
        if obj.locked:
            continue

        # Invalid object; set obj.id = 0 so it doesn't become a zombie
        if not H5Iis_valid(obj.id):
            IF DEBUG_ID:
                print("NONLOCAL - invalidating %d of kind %s HDF5 id %d" %
                        (python_id, type(obj), obj.id) )
            obj.id = 0
            continue

# --- End registry code -------------------------------------------------------


cdef class ObjectID:

    """
        Represents an HDF5 identifier.

    attributes:
    cdef object __weakref__
    cdef readonly hid_t id
    cdef public int locked              # Cannot be closed, explicitly or auto
    cdef object _hash
    cdef size_t _pyid
    """

    @property
    def fileno(self):
        cdef H5G_stat_t stat
        with _phil:
            H5Gget_objinfo(self.id, '.', 0, &stat)
            return (stat.fileno[0], stat.fileno[1])


    @property
    def valid(self):
        return is_h5py_obj_valid(self)


    def __cinit__(self, id_):
        with _phil:
            self.id = id_
            self.locked = 0
            self._pyid = id(self)
            IF DEBUG_ID:
                print("CINIT - registering %d of kind %s HDF5 id %d" % (self._pyid, type(self), self.id))
            registry[self._pyid] = weakref.ref(self)


    def __dealloc__(self):
        with _phil:
            IF DEBUG_ID:
                print("DEALLOC - unregistering %d HDF5 id %d" % (self._pyid, self.id))
            if is_h5py_obj_valid(self) and (not self.locked):
                if H5Idec_ref(self.id) < 0:
                    warnings.warn(
                        "Reference counting issue with HDF5 id {}".format(
                            self.id
                        )
                    )
            if self._pyid is not None:
                del registry[self._pyid]


    def _close(self):
        """ Manually close this object. """

        with _phil:
            IF DEBUG_ID:
                print("CLOSE - %d HDF5 id %d" % (self._pyid, self.id))
            if is_h5py_obj_valid(self) and (not self.locked):
                if H5Idec_ref(self.id) < 0:
                    warnings.warn(
                        "Reference counting issue with HDF5 id {}".format(
                            self.id
                        )
                    )
            self.id = 0

    def close(self):
        """ Close this identifier. """
        # Note this is the default close method.  Subclasses, e.g. FileID,
        # which have nonlocal effects should override this.
        self._close()

    def __nonzero__(self):
        return self.valid

    def __copy__(self):
        cdef ObjectID cpy
        with _phil:
            cpy = type(self)(self.id)
            H5Iinc_ref(cpy.id)
            return cpy


    def __richcmp__(self, object other, int how):
        """ Default comparison mechanism for HDF5 objects (equal/not-equal)

        Default equality testing:
        1. Objects which are not both ObjectIDs are unequal
        2. Objects with the same HDF5 ID number are always equal
        3. Objects which hash the same are equal
        """
        cdef bint equal = 0

        with _phil:
            if how != 2 and how != 3:
                return NotImplemented

            if isinstance(other, ObjectID):
                if self.id == other.id:
                    equal = 1
                else:
                    try:
                        equal = hash(self) == hash(other)
                    except TypeError:
                        pass

            if how == 2:
                return equal
            return not equal


    def __hash__(self):
        """ Default hashing mechanism for HDF5 objects

        Default hashing strategy:
        1. Try to hash based on the object's fileno and objno records
        2. If (1) succeeds, cache the resulting value
        3. If (1) fails, raise TypeError
        """
        cdef H5G_stat_t stat

        with _phil:
            if self._hash is None:
                try:
                    H5Gget_objinfo(self.id, '.', 0, &stat)
                    self._hash = hash((stat.fileno[0], stat.fileno[1], stat.objno[0], stat.objno[1]))
                except Exception:
                    raise TypeError("Objects of class %s cannot be hashed" % self.__class__.__name__)

            return self._hash


cdef hid_t pdefault(ObjectID pid):

    if pid is None:
        return H5P_DEFAULT
    return pid.id


cdef int is_h5py_obj_valid(ObjectID obj):
    """
    Check that h5py object is valid, i.e. HDF5 object wrapper is valid and HDF5
    object is valid
    """
    # MUST BE CALLABLE AT ANY TIME, CANNOT USE PROPERTIES ETC. AS PER
    # http://cython.readthedocs.io/en/latest/src/userguide/special_methods.html

    # Locked objects are always valid, regardless of obj.id
    if obj.locked:
        return True

    # Former zombie object
    if obj.id == 0:
        return False

    # Ask HDF5.  Note that H5Iis_valid only works for "user"
    # identifiers, hence the above checks.
    with _phil:
        return H5Iis_valid(obj.id)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1695890882.0
h5py-3.13.0/h5py/_proxy.pxd0000644000175000017500000000113014505236702016302 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

cdef herr_t attr_rw(hid_t attr, hid_t mtype, void *progbuf, int read) except -1

cdef herr_t dset_rw(hid_t dset, hid_t mtype, hid_t mspace, hid_t fspace,
                    hid_t dxpl, void* progbuf, int read) except -1

cdef htri_t needs_bkg_buffer(hid_t src, hid_t dst) except -1
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1705055674.0
h5py-3.13.0/h5py/_proxy.pyx0000644000175000017500000002456214550212672016345 0ustar00takluyvertakluyver# cython: profile=False

# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Proxy functions for read/write, to work around the HDF5 bogus type issue.
"""

include "config.pxi"

cdef enum copy_dir:
    H5PY_SCATTER = 0,
    H5PY_GATHER

cdef herr_t attr_rw(hid_t attr, hid_t mtype, void *progbuf, int read) except -1:

    cdef htri_t need_bkg
    cdef hid_t atype = -1
    cdef hid_t aspace = -1
    cdef hsize_t npoints

    cdef size_t msize, asize
    cdef void* conv_buf = NULL
    cdef void* back_buf = NULL

    try:
        atype = H5Aget_type(attr)

        if not (needs_proxy(atype) or needs_proxy(mtype)):
            if read:
                H5Aread(attr, mtype, progbuf)
            else:
                H5Awrite(attr, mtype, progbuf)

        else:

            asize = H5Tget_size(atype)
            msize = H5Tget_size(mtype)
            aspace = H5Aget_space(attr)
            npoints = H5Sget_select_npoints(aspace)

            conv_buf = create_buffer(asize, msize, npoints)

            if read:
                need_bkg = needs_bkg_buffer(atype, mtype)
            else:
                need_bkg = needs_bkg_buffer(mtype, atype)
            if need_bkg:
                back_buf = create_buffer(msize, asize, npoints)
                if read:
                    memcpy(back_buf, progbuf, msize*npoints)

            if read:
                H5Aread(attr, atype, conv_buf)
                H5Tconvert(atype, mtype, npoints, conv_buf, back_buf, H5P_DEFAULT)
                memcpy(progbuf, conv_buf, msize*npoints)
            else:
                memcpy(conv_buf, progbuf, msize*npoints)
                H5Tconvert(mtype, atype, npoints, conv_buf, back_buf, H5P_DEFAULT)
                H5Awrite(attr, atype, conv_buf)
                H5Dvlen_reclaim(atype, aspace, H5P_DEFAULT, conv_buf)

    finally:
        free(conv_buf)
        free(back_buf)
        if atype > 0:
            H5Tclose(atype)
        if aspace > 0:
            H5Sclose(aspace)

    return 0


# =============================================================================
# Proxy for vlen buf workaround


cdef herr_t dset_rw(hid_t dset, hid_t mtype, hid_t mspace, hid_t fspace,
                    hid_t dxpl, void* progbuf, int read) except -1:

    cdef htri_t need_bkg
    cdef hid_t dstype = -1      # Dataset datatype
    cdef hid_t rawdstype = -1
    cdef hid_t dspace = -1      # Dataset dataspace
    cdef hid_t cspace = -1      # Temporary contiguous dataspaces

    cdef void* back_buf = NULL
    cdef void* conv_buf = NULL
    cdef hsize_t npoints

    try:
        # Issue 372: when a compound type is involved, using the dataset type
        # may result in uninitialized data being sent to H5Tconvert for fields
        # not present in the memory type.  Limit the type used for the dataset
        # to only those fields present in the memory type.  We can't use the
        # memory type directly because of course that triggers HDFFV-1063.
        if (H5Tget_class(mtype) == H5T_COMPOUND) and (not read):
            rawdstype = H5Dget_type(dset)
            dstype = make_reduced_type(mtype, rawdstype)
            H5Tclose(rawdstype)
        else:
            dstype = H5Dget_type(dset)

        if not (needs_proxy(dstype) or needs_proxy(mtype)):
            if read:
                H5Dread(dset, mtype, mspace, fspace, dxpl, progbuf)
            else:
                H5Dwrite(dset, mtype, mspace, fspace, dxpl, progbuf)
        else:

            if mspace == H5S_ALL and fspace != H5S_ALL:
                mspace = fspace
            elif mspace != H5S_ALL and fspace == H5S_ALL:
                fspace = mspace
            elif mspace == H5S_ALL and fspace == H5S_ALL:
                fspace = mspace = dspace = H5Dget_space(dset)

            npoints = H5Sget_select_npoints(mspace)
            cspace = H5Screate_simple(1, &npoints, NULL)

            conv_buf = create_buffer(H5Tget_size(dstype), H5Tget_size(mtype), npoints)

            # Only create a (contiguous) backing buffer if absolutely
            # necessary. Note this buffer always has memory type.
            if read:
                need_bkg = needs_bkg_buffer(dstype, mtype)
            else:
                need_bkg = needs_bkg_buffer(mtype, dstype)
            if need_bkg:
                back_buf = create_buffer(H5Tget_size(dstype), H5Tget_size(mtype), npoints)
                if read:
                    h5py_copy(mtype, mspace, back_buf, progbuf, H5PY_GATHER)

            if read:
                H5Dread(dset, dstype, cspace, fspace, dxpl, conv_buf)
                H5Tconvert(dstype, mtype, npoints, conv_buf, back_buf, dxpl)
                h5py_copy(mtype, mspace, conv_buf, progbuf, H5PY_SCATTER)
            else:
                h5py_copy(mtype, mspace, conv_buf, progbuf, H5PY_GATHER)
                H5Tconvert(mtype, dstype, npoints, conv_buf, back_buf, dxpl)
                H5Dwrite(dset, dstype, cspace, fspace, dxpl, conv_buf)
                H5Dvlen_reclaim(dstype, cspace, H5P_DEFAULT, conv_buf)

    finally:
        free(back_buf)
        free(conv_buf)
        if dstype > 0:
            H5Tclose(dstype)
        if dspace > 0:
            H5Sclose(dspace)
        if cspace > 0:
            H5Sclose(cspace)

    return 0


cdef hid_t make_reduced_type(hid_t mtype, hid_t dstype):
    # Go through dstype, pick out the fields which also appear in mtype, and
    # return a new compound type with the fields packed together
    # See also: issue 372

    cdef hid_t newtype, temptype
    cdef hsize_t newtype_size, offset
    cdef char* member_name = NULL
    cdef int idx

    # Make a list of all names in the memory type.
    mtype_fields = []
    for idx in range(H5Tget_nmembers(mtype)):
        member_name = H5Tget_member_name(mtype, idx)
        try:
            mtype_fields.append(member_name)
        finally:
            H5free_memory(member_name)
            member_name = NULL

    # First pass: add up the sizes of matching fields so we know how large a
    # type to make
    newtype_size = 0
    for idx in range(H5Tget_nmembers(dstype)):
        member_name = H5Tget_member_name(dstype, idx)
        try:
            if member_name not in mtype_fields:
                continue
            temptype = H5Tget_member_type(dstype, idx)
            newtype_size += H5Tget_size(temptype)
            H5Tclose(temptype)
        finally:
            H5free_memory(member_name)
            member_name = NULL

    newtype = H5Tcreate(H5T_COMPOUND, newtype_size)

    # Second pass: pick out the matching fields and pack them in the new type
    offset = 0
    for idx in range(H5Tget_nmembers(dstype)):
        member_name = H5Tget_member_name(dstype, idx)
        try:
            if member_name not in mtype_fields:
                continue
            temptype = H5Tget_member_type(dstype, idx)
            H5Tinsert(newtype, member_name, offset, temptype)
            offset += H5Tget_size(temptype)
            H5Tclose(temptype)
        finally:
            H5free_memory(member_name)
            member_name = NULL

    return newtype


cdef void* create_buffer(size_t ipt_size, size_t opt_size, size_t nl) except NULL:

    cdef size_t final_size
    cdef void* buf

    if ipt_size >= opt_size:
        final_size = ipt_size*nl
    else:
        final_size = opt_size*nl

    buf = malloc(final_size)
    if buf == NULL:
        raise MemoryError("Failed to allocate conversion buffer")

    return buf

# =============================================================================
# Scatter/gather routines

ctypedef struct h5py_scatter_t:
    size_t i
    size_t elsize
    void* buf

cdef herr_t h5py_scatter_cb(void* elem, hid_t type_id, unsigned ndim,
                const hsize_t *point, void *operator_data) except -1 nogil:
    cdef h5py_scatter_t* info = <h5py_scatter_t*>operator_data

    memcpy(elem, (<char*>info[0].buf)+((info[0].i)*(info[0].elsize)),
           info[0].elsize)

    info[0].i += 1

    return 0

cdef herr_t h5py_gather_cb(void* elem, hid_t type_id, unsigned ndim,
                const hsize_t *point, void *operator_data) except -1 nogil:
    cdef h5py_scatter_t* info = <h5py_scatter_t*>operator_data

    memcpy((<char*>info[0].buf)+((info[0].i)*(info[0].elsize)), elem,
            info[0].elsize)

    info[0].i += 1

    return 0

# Copy between a contiguous and non-contiguous buffer, with the layout
# of the latter specified by a dataspace selection.
cdef herr_t h5py_copy(hid_t tid, hid_t space, void* contig, void* noncontig,
                 copy_dir op) except -1:

    cdef h5py_scatter_t info
    cdef hsize_t elsize

    elsize = H5Tget_size(tid)

    info.i = 0
    info.elsize = elsize
    info.buf = contig

    if op == H5PY_SCATTER:
        H5Diterate(noncontig, tid, space, h5py_scatter_cb, &info)
    elif op == H5PY_GATHER:
        H5Diterate(noncontig, tid, space, h5py_gather_cb, &info)
    else:
        raise RuntimeError("Illegal direction")

    return 0

# =============================================================================
# VLEN support routines

cdef htri_t needs_bkg_buffer(hid_t src, hid_t dst) except -1:

    cdef H5T_cdata_t *info = NULL

    if H5Tdetect_class(src, H5T_COMPOUND) or H5Tdetect_class(dst, H5T_COMPOUND):
        return 1

    try:
        H5Tfind(src, dst, &info)
    except:
        print("Failed to find converter for %s -> %s" % (H5Tget_size(src), H5Tget_tag(dst)))
        raise

    if info[0].need_bkg == H5T_BKG_YES:
        return 1

    return 0

# Determine if the given type requires proxy buffering
cdef htri_t needs_proxy(hid_t tid) except -1:

    cdef H5T_class_t cls
    cdef hid_t supertype
    cdef int i, n
    cdef htri_t result

    cls = H5Tget_class(tid)

    if cls == H5T_VLEN or cls == H5T_REFERENCE:
        return 1

    elif cls == H5T_STRING:
        return H5Tis_variable_str(tid)

    elif cls == H5T_ARRAY:

        supertype = H5Tget_super(tid)
        try:
            return needs_proxy(supertype)
        finally:
            H5Tclose(supertype)

    elif cls == H5T_COMPOUND:

        n = H5Tget_nmembers(tid)
        for i in range(n):
            supertype = H5Tget_member_type(tid, i)
            try:
                result = needs_proxy(supertype)
                if result > 0:
                    return 1
            finally:
                H5Tclose(supertype)
        return 0

    return 0

h5py-3.13.0/h5py/_selector.pyx
# cython: language_level=3
"""Class to efficiently select and read data from an HDF5 dataset

This is written in Cython to reduce overhead when reading small amounts of
data. The core of it is translating between numpy-style slicing & indexing and
HDF5's H5Sselect_hyperslab calls.

Python & numpy distinguish indexing a[3] from slicing a single element a[3:4],
but there is no equivalent to this when selecting data in HDF5. So we store a
separate boolean ('scalar') for each dimension to distinguish these cases.
"""
from numpy cimport (
    ndarray, npy_intp, PyArray_ZEROS, PyArray_DATA, import_array,
    PyArray_IsNativeByteOrder,
)
from cpython cimport PyNumber_Index

import numpy as np
from .defs cimport *
from .h5d cimport DatasetID
from .h5s cimport SpaceID
from .h5t cimport TypeID, typewrap, py_create
from .utils cimport emalloc, efree, convert_dims

import_array()


cdef object convert_bools(bint* data, hsize_t rank):
    # Convert a bint array to a Python tuple of bools.
    cdef list bools_l
    cdef int i
    bools_l = []

    for i in range(rank):
        bools_l.append(bool(data[i]))

    return tuple(bools_l)


cdef class Selector:
    cdef SpaceID spaceobj
    cdef hid_t space
    cdef int rank
    cdef bint is_fancy
    cdef hsize_t* dims
    cdef hsize_t* start
    cdef hsize_t* stride
    cdef hsize_t* count
    cdef hsize_t* block
    cdef bint* scalar

    def __cinit__(self, SpaceID space):
        self.spaceobj = space
        self.space = space.id
        self.rank = H5Sget_simple_extent_ndims(self.space)
        self.is_fancy = False

        self.dims = emalloc(sizeof(hsize_t) * self.rank)
        self.start = emalloc(sizeof(hsize_t) * self.rank)
        self.stride = emalloc(sizeof(hsize_t) * self.rank)
        self.count = emalloc(sizeof(hsize_t) * self.rank)
        self.block = emalloc(sizeof(hsize_t) * self.rank)
        self.scalar = emalloc(sizeof(bint) * self.rank)

        H5Sget_simple_extent_dims(self.space, self.dims, NULL)

    def __dealloc__(self):
        efree(self.dims)
        efree(self.start)
        efree(self.stride)
        efree(self.count)
        efree(self.block)
        efree(self.scalar)

    cdef bint apply_args(self, tuple args) except 0:
        """Apply indexing arguments to this Selector object"""
        cdef:
            int nargs, ellipsis_ix, array_ix = -1
            bint seen_ellipsis = False
            int dim_ix = -1
            hsize_t l
            ndarray array_arg

        # If no explicit ellipsis, implicit ellipsis is after args
        nargs = ellipsis_ix = len(args)

        for a in args:
            dim_ix += 1

            if a is Ellipsis:
                # [...] - Ellipsis (fill any unspecified dimensions here)
                if seen_ellipsis:
                    raise ValueError("Only one ellipsis may be used.")
                seen_ellipsis = True

                ellipsis_ix = dim_ix
                nargs -= 1  # Don't count the ... itself
                if nargs > self.rank:
                    raise ValueError(f"{nargs} indexing arguments for {self.rank} dimensions")

                # Skip ahead to the remaining dimensions
                # -1 because the next iteration will increment dim_ix
                dim_ix += self.rank - nargs - 1
                continue

            if dim_ix >= self.rank:
                raise ValueError(f"{nargs} indexing arguments for {self.rank} dimensions")

            # Length of the relevant dimension
            l = self.dims[dim_ix]

            # [0:10] - slicing
            if isinstance(a, slice):
                start, stop, step = a.indices(l)
                # Now if step > 0, then start and stop are in [0, length];
                # if step < 0, they are in [-1, length - 1] (Python 2.6b2 and later;
                # Python issue 3004).

                if step < 1:
                    raise ValueError("Step must be >= 1 (got %d)" % step)
                if stop < start:
                    # list/tuple and numpy consider stop < start to be an empty selection
                    start, count, step = 0, 0, 1
                else:
                    count = 1 + (stop - start - 1) // step

                self.start[dim_ix] = start
                self.stride[dim_ix] = step
                self.count[dim_ix] = count
                self.block[dim_ix] = 1
                self.scalar[dim_ix] = False

                continue

            # [0] - simple integer indices
            try:
                # PyIndex_Check only checks the type - e.g. all numpy arrays
                # pass PyIndex_Check, but only scalar arrays are valid.
                a = PyNumber_Index(a)
            except TypeError:
                pass  # Fall through to check for list/array
            else:
                if a < 0:
                    a += l

                if not 0 <= a < l:
                    if l == 0:
                        msg = f"Index ({a}) out of range for empty dimension"
                    else:
                        msg = f"Index ({a}) out of range for (0-{l-1})"
                    raise IndexError(msg)

                self.start[dim_ix] = a
                self.stride[dim_ix] = 1
                self.count[dim_ix] = 1
                self.block[dim_ix] = 1
                self.scalar[dim_ix] = True

                continue

            # MultiBlockSlice exposes h5py's separate count & block parameters
            # to allow more complex repeating selections.
            if isinstance(a, MultiBlockSlice):
                (
                    self.start[dim_ix],
                    self.stride[dim_ix],
                    self.count[dim_ix],
                    self.block[dim_ix],
                ) = a.indices(l)
                self.scalar[dim_ix] = False

                continue

            # [[0, 2, 10]] - list/array of indices ('fancy indexing')
            if isinstance(a, np.ndarray):
                if a.ndim != 1:
                    raise TypeError("Only 1D arrays allowed for fancy indexing")
            else:
                arr = np.asarray(a)
                if arr.ndim != 1:
                    raise TypeError("Selection can't process %r" % a)
                a = arr
                if a.size == 0:
                    # asarray([]) gives float dtype by default
                    a = a.astype(np.intp)

            if a.dtype.kind == 'b':
                if self.rank == 1:
                    # The dataset machinery should fall back to a faster
                    # alternative (using PointSelection) in this case.
                    # https://github.com/h5py/h5py/issues/2189
                    raise TypeError("Use other code for boolean selection on 1D dataset")
                if a.size != l:
                    raise TypeError("boolean index did not match indexed array")
                a = a.nonzero()[0]
            if not np.issubdtype(a.dtype, np.integer):
                raise TypeError("Indexing arrays must have integer dtypes")
            if array_ix != -1:
                raise TypeError("Only one indexing vector or array is currently allowed for fancy indexing")

            # Convert negative indices to positive
            if np.any(a < 0):
                a = a.copy()
                a[a < 0] += l

            # Bounds check
            if np.any((a < 0) | (a > l)):
                if l == 0:
                    msg = "Fancy indexing out of range for empty dimension"
                else:
                    msg = f"Fancy indexing out of range for (0-{l-1})"
                raise IndexError(msg)

            if np.any(np.diff(a) <= 0):
                raise TypeError("Indexing elements must be in increasing order")

            array_ix = dim_ix
            array_arg = a
            self.start[dim_ix] = 0
            self.stride[dim_ix] = 1
            self.count[dim_ix] = a.shape[0]
            self.block[dim_ix] = 1
            self.scalar[dim_ix] = False

        if nargs < self.rank:
            # Fill in ellipsis or trailing dimensions
            ellipsis_end = ellipsis_ix + (self.rank - nargs)
            for dim_ix in range(ellipsis_ix, ellipsis_end):
                self.start[dim_ix] = 0
                self.stride[dim_ix] = 1
                self.count[dim_ix] = self.dims[dim_ix]
                self.block[dim_ix] = 1
                self.scalar[dim_ix] = False

        if nargs == 0:
            H5Sselect_all(self.space)
            self.is_fancy = False
        elif array_ix != -1:
            self.select_fancy(array_ix, array_arg)
            self.is_fancy = True
        else:
            H5Sselect_hyperslab(self.space, H5S_SELECT_SET, self.start, self.stride, self.count, self.block)
            self.is_fancy = False
        return True

    cdef select_fancy(self, int array_ix, ndarray array_arg):
        """Apply a 'fancy' selection (array of indices) to the dataspace"""
        cdef hsize_t* tmp_start
        cdef hsize_t* tmp_count
        cdef uint64_t i

        H5Sselect_none(self.space)

        tmp_start = emalloc(sizeof(hsize_t) * self.rank)
        tmp_count = emalloc(sizeof(hsize_t) * self.rank)
        try:
            memcpy(tmp_start, self.start, sizeof(hsize_t) * self.rank)
            memcpy(tmp_count, self.count, sizeof(hsize_t) * self.rank)
            tmp_count[array_ix] = 1

            # Iterate over the array of indices, add each hyperslab to the selection
            for i in array_arg:
                tmp_start[array_ix] = i
                H5Sselect_hyperslab(self.space, H5S_SELECT_OR, tmp_start, self.stride, tmp_count, self.block)
        finally:
            efree(tmp_start)
            efree(tmp_count)


    def make_selection(self, tuple args):
        """Apply indexing/slicing args and create a high-level selection object

        Returns an instance of SimpleSelection or FancySelection, with a copy
        of the selector's dataspace.
        """
        cdef:
            SpaceID space
            tuple shape, start, count, step, scalar, arr_shape
            int arr_rank, i
            npy_intp* arr_shape_p

        self.apply_args(args)
        space = SpaceID(H5Scopy(self.space))

        shape = convert_dims(self.dims, self.rank)
        count = convert_dims(self.count, self.rank)
        block = convert_dims(self.block, self.rank)
        mshape = tuple(c * b for c, b in zip(count, block))

        from ._hl.selections import SimpleSelection, FancySelection

        if self.is_fancy:
            arr_shape = tuple(
                mshape[i] for i in range(self.rank) if not self.scalar[i]
            )
            return FancySelection(shape, space, count, arr_shape)
        else:
            start = convert_dims(self.start, self.rank)
            step = convert_dims(self.stride, self.rank)
            scalar = convert_bools(self.scalar, self.rank)
            return SimpleSelection(shape, space, (start, mshape, step, scalar))


cdef class Reader:
    cdef hid_t dataset
    cdef Selector selector
    cdef TypeID h5_memory_datatype
    cdef int np_typenum
    cdef bint native_byteorder

    def __cinit__(self, DatasetID dsid):
        self.dataset = dsid.id
        self.selector = Selector(dsid.get_space())

        # HDF5 can use e.g. custom float datatypes which don't have an exact
        # match in numpy. Translating it to a numpy dtype chooses the smallest
        # dtype which won't lose any data, then we translate that back to a
        # HDF5 datatype (h5_memory_datatype).
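        #
        # Illustrative example (an assumption, not a case handled specially
        # here): a file datatype defined as a custom 24-bit float has no exact
        # numpy equivalent, so py_dtype() would pick the smallest standard
        # dtype that can hold it without loss (float32 in that case), and
        # py_create() then builds the matching HDF5 memory type for H5Dread.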
        h5_stored_datatype = typewrap(H5Dget_type(self.dataset))
        np_dtype = h5_stored_datatype.py_dtype()
        self.np_typenum = np_dtype.num
        self.native_byteorder = PyArray_IsNativeByteOrder(ord(np_dtype.byteorder))
        self.h5_memory_datatype = py_create(np_dtype)

    cdef ndarray make_array(self, hsize_t* mshape):
        """Create an array to read the selected data into.

        .apply_args() should be called first, to set self.count and self.scalar.
        Only works for simple numeric dtypes which can be defined with typenum.
        """
        cdef int i, arr_rank = 0
        cdef npy_intp* arr_shape

        arr_shape = emalloc(sizeof(npy_intp) * self.selector.rank)
        try:
            # Copy any non-scalar selection dimensions for the array shape
            for i in range(self.selector.rank):
                if not self.selector.scalar[i]:
                    arr_shape[arr_rank] = mshape[i]
                    arr_rank += 1

            arr = PyArray_ZEROS(arr_rank, arr_shape, self.np_typenum, 0)
            if not self.native_byteorder:
                arr = arr.view(arr.dtype.newbyteorder())
        finally:
            efree(arr_shape)

        return arr

    def read(self, tuple args):
        """Index the dataset using args and read into a new numpy array

        Only works for simple numeric dtypes.
        """
        cdef void* buf
        cdef ndarray arr
        cdef hsize_t* mshape
        cdef hid_t mspace
        cdef int i

        self.selector.apply_args(args)

        # The selected length of each dimension is count * block
        mshape = emalloc(sizeof(hsize_t) * self.selector.rank)
        try:
            for i in range(self.selector.rank):
                mshape[i] = self.selector.count[i] * self.selector.block[i]
            arr = self.make_array(mshape)
            buf = PyArray_DATA(arr)

            mspace = H5Screate_simple(self.selector.rank, mshape, NULL)
        finally:
            efree(mshape)

        try:
            H5Dread(self.dataset, self.h5_memory_datatype.id, mspace,
                    self.selector.space, H5P_DEFAULT, buf)
        finally:
            H5Sclose(mspace)

        if arr.ndim == 0:
            return arr[()]
        else:
            return arr


class MultiBlockSlice:
    """
        A conceptual extension of the built-in slice object to allow selections
        using start, stride, count and block.

        If given, these parameters will be passed directly to
        H5Sselect_hyperslab. The defaults are start=0, stride=1, block=1,
        count=length, which will select the full extent.

        __init__(start, stride, count, block) => Create a new MultiBlockSlice, storing
            any given selection parameters and using defaults for the others
        start => The offset of the starting element of the specified hyperslab
        stride => The number of elements between the start of one block and the next
        count => The number of blocks to select
        block => The number of elements in each block

    """

    def __init__(self, start=0, stride=1, count=None, block=1):
        if start < 0:
            raise ValueError("Start can't be negative")
        if stride < 1 or (count is not None and count < 1) or block < 1:
            raise ValueError("Stride, count and block can't be 0 or negative")
        if block > stride:
            raise ValueError("Blocks will overlap if block > stride")

        self.start = start
        self.stride = stride
        self.count = count
        self.block = block

    def indices(self, length):
        """Calculate and validate start, stride, count and block for the given length"""
        if self.count is None:
            # Select as many full blocks as possible without exceeding extent
            count = (length - self.start - self.block) // self.stride + 1
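            # Worked example (illustrative): length=10, start=0, stride=4,
            # block=3 gives count = (10 - 0 - 3) // 4 + 1 = 2, i.e. full
            # blocks covering indices 0-2 and 4-6; a third block (8-10)
            # would run past the extent.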
            if count < 1:
                raise ValueError(
                    "No full blocks can be selected using {} "
                    "on dimension of length {}".format(self._repr(), length)
                )
        else:
            count = self.count

        end_index = self.start + self.block + (count - 1) * self.stride - 1
        if end_index >= length:
            raise ValueError(
                "{} range ({} - {}) extends beyond maximum index ({})".format(
                    self._repr(count), self.start, end_index, length - 1
                ))

        return self.start, self.stride, count, self.block

    def _repr(self, count=None):
        if count is None:
            count = self.count
        return "{}(start={}, stride={}, count={}, block={})".format(
            self.__class__.__name__, self.start, self.stride, count, self.block
        )

    def __repr__(self):
        return self._repr(count=None)
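
# Usage sketch for MultiBlockSlice (illustrative; ``dset`` and the numbers
# below are assumptions, not part of this module). Select two 2-element
# blocks, 4 apart, from a 1-D dataset of length >= 6:
#
#     from h5py import MultiBlockSlice
#     mbs = MultiBlockSlice(start=0, stride=4, count=2, block=2)
#     data = dset[mbs]   # equivalent hyperslab: start=0, stride=4, count=2,
#                        # block=2 -> elements 0, 1, 4, 5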

h5py-3.13.0/h5py/api_compat.h
/***** Preamble block *********************************************************
*
* This file is part of h5py, a Python interface to the HDF5 library.
*
* http://www.h5py.org
*
* Copyright 2008-2013 Andrew Collette and contributors
*
* License:  Standard 3-clause BSD; see "license.txt" for full license terms
*           and contributor agreement.
*
****** End preamble block ****************************************************/

/* Contains compatibility macros and definitions for use by Cython code */

#ifndef H5PY_COMPAT
#define H5PY_COMPAT

#if defined(MPI_VERSION) && (MPI_VERSION < 3) && !defined(PyMPI_HAVE_MPI_Message)
typedef void *PyMPI_MPI_Message;
#define MPI_Message PyMPI_MPI_Message
#endif

#include <stddef.h>
#include "Python.h"
#include "numpy/arrayobject.h"

/* The HOFFSET macro can't be used from Cython. */

#define h5py_size_n64 (sizeof(npy_complex64))
#define h5py_size_n128 (sizeof(npy_complex128))

#ifdef NPY_COMPLEX256
#define h5py_size_n256 (sizeof(npy_complex256))
#endif

#define h5py_offset_n64_real (0)
#define h5py_offset_n64_imag (sizeof(float))
#define h5py_offset_n128_real (0)
#define h5py_offset_n128_imag (sizeof(double))

#ifdef NPY_COMPLEX256
#define h5py_offset_n256_real (0)
#define h5py_offset_n256_imag (sizeof(long double))
#endif

#endif

h5py-3.13.0/h5py/api_functions.txt
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.


# Defines the HDF5 C API functions wrapped by h5py. See also api_gen.py.
#
# Format of this file:
#
# header_declaration:
#   function_line
#   function_line
#   ...
#
# Header declarations are the basenames of the C header files (without the .h);
# for example, functions from "hdf5.h" are under the "hdf5" declaration.
#
# Function lines are C function signatures, optionally preceded by:
#   MPI     If present, function requires an MPI-aware build of hdf5
#   ROS3    If present, function requires a ROS3-enabled build of hdf5
#   DIRECT_VFD If present, function requires a DIRECT VFD-enabled build of hdf5
#   ERROR   Explicit error checks are needed (HDF5 does not use the error stack)
#   X.Y.Z   Minimum version of HDF5 required for this function
#
# Blank/whitespace-only lines are ignored, as are lines whose first
# non-whitespace character is "#".  Indentation must be via spaces only.
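#
# For example, in the entries below "herr_t H5Fclose(hid_t file_id)" declares an
# ordinary wrapped function, "MPI herr_t H5Fset_mpi_atomicity(...)" one that is
# only available in an MPI-aware build, and "ROS3 1.14.2 herr_t
# H5Pset_fapl_ros3_token(...)" one that needs both a ROS3 build and HDF5 >= 1.14.2.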


hdf5:

  # === H5 - General library functions ========================================

  herr_t    H5open()
  herr_t    H5close()
  herr_t    H5get_libversion(unsigned *majnum, unsigned *minnum, unsigned *relnum)
  herr_t    H5check_version(unsigned majnum, unsigned minnum, unsigned relnum )
  herr_t    H5free_memory(void *mem)

  # === H5A - Attributes API ==================================================

  hid_t     H5Acreate(hid_t loc_id, char *name, hid_t type_id, hid_t space_id, hid_t acpl_id, hid_t aapl_id)
  hid_t     H5Aopen_idx(hid_t loc_id, unsigned int idx)
  hid_t     H5Aopen_name(hid_t loc_id, char *name)
  herr_t    H5Aclose(hid_t attr_id)
  herr_t    H5Adelete(hid_t loc_id, char *name)

  herr_t    H5Aread(hid_t attr_id, hid_t mem_type_id, void *buf)
  herr_t    H5Awrite(hid_t attr_id, hid_t mem_type_id, void *buf )

  int       H5Aget_num_attrs(hid_t loc_id)
  ssize_t   H5Aget_name(hid_t attr_id, size_t buf_size, char *buf)
  hid_t     H5Aget_space(hid_t attr_id)
  hid_t     H5Aget_type(hid_t attr_id)

  herr_t    H5Adelete_by_name(hid_t loc_id, char *obj_name, char *attr_name, hid_t lapl_id)
  herr_t    H5Adelete_by_idx(hid_t loc_id, char *obj_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, hid_t lapl_id)

  hid_t     H5Acreate_by_name(hid_t loc_id, char *obj_name, char *attr_name, hid_t type_id, hid_t space_id, hid_t acpl_id, hid_t aapl_id, hid_t lapl_id)

  hid_t     H5Aopen(hid_t obj_id, char *attr_name, hid_t aapl_id)
  hid_t     H5Aopen_by_name( hid_t loc_id, char *obj_name, char *attr_name, hid_t aapl_id, hid_t lapl_id)
  hid_t     H5Aopen_by_idx(hid_t loc_id, char *obj_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, hid_t aapl_id, hid_t lapl_id)
  htri_t    H5Aexists_by_name( hid_t loc_id, char *obj_name, char *attr_name, hid_t lapl_id)
  htri_t    H5Aexists(hid_t obj_id, char *attr_name)

  herr_t    H5Arename(hid_t loc_id, char *old_attr_name, char *new_attr_name)
  herr_t    H5Arename_by_name(hid_t loc_id, char *obj_name, char *old_attr_name, char *new_attr_name, hid_t lapl_id)

  herr_t    H5Aget_info( hid_t attr_id, H5A_info_t *ainfo)
  herr_t    H5Aget_info_by_name(hid_t loc_id, char *obj_name, char *attr_name, H5A_info_t *ainfo, hid_t lapl_id)
  herr_t    H5Aget_info_by_idx(hid_t loc_id, char *obj_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, H5A_info_t *ainfo, hid_t lapl_id)

  herr_t    H5Aiterate(hid_t obj_id, H5_index_t idx_type, H5_iter_order_t order, hsize_t *n, H5A_operator2_t op, void *op_data)
  hsize_t   H5Aget_storage_size(hid_t attr_id)


  # === H5D - Dataset API =====================================================

  hid_t     H5Dcreate(hid_t loc_id, char *name, hid_t type_id, hid_t space_id, hid_t lcpl_id, hid_t dcpl_id, hid_t dapl_id) nogil
  hid_t     H5Dcreate_anon(hid_t file_id, hid_t type_id, hid_t space_id, hid_t plist_id, hid_t dapl_id) nogil

  hid_t     H5Dopen(hid_t loc_id, char *name, hid_t dapl_id )
  herr_t    H5Dclose(hid_t dset_id)

  hid_t     H5Dget_space(hid_t dset_id)
  herr_t    H5Dget_space_status(hid_t dset_id, H5D_space_status_t *status)
  hid_t     H5Dget_type(hid_t dset_id)
  hid_t     H5Dget_create_plist(hid_t dataset_id)
  hid_t     H5Dget_access_plist(hid_t dataset_id)

  haddr_t   H5Dget_offset(hid_t dset_id)
  hsize_t   H5Dget_storage_size(hid_t dset_id)

  herr_t    H5Dread(hid_t dset_id, hid_t mem_type_id, hid_t mem_space_id, hid_t file_space_id, hid_t plist_id, void *buf) nogil
  herr_t    H5Dwrite(hid_t dset_id, hid_t mem_type, hid_t mem_space, hid_t file_space, hid_t xfer_plist, void* buf) nogil

  herr_t    H5Dextend(hid_t dataset_id, hsize_t *size) nogil

  herr_t    H5Dfill(void *fill, hid_t fill_type_id, void *buf,  hid_t buf_type_id, hid_t space_id) nogil
  herr_t    H5Dvlen_get_buf_size(hid_t dset_id, hid_t type_id, hid_t space_id, hsize_t *size)
  herr_t    H5Dvlen_reclaim(hid_t type_id, hid_t space_id,  hid_t plist, void *buf)

  herr_t    H5Diterate(void *buf, hid_t type_id, hid_t space_id,  H5D_operator_t op, void* operator_data) nogil
  herr_t    H5Dset_extent(hid_t dset_id, hsize_t* size) nogil

  # SWMR functions
  herr_t H5Dflush(hid_t dataset_id) nogil
  herr_t H5Drefresh(hid_t dataset_id) nogil

  # Direct Chunk Read/Write
  herr_t H5DOwrite_chunk(hid_t dset_id, hid_t dxpl_id, uint32_t filters, const hsize_t *offset, size_t data_size, const void *buf) nogil
  herr_t H5Dget_chunk_storage_size(hid_t dset_id, const hsize_t *offset, hsize_t *chunk_nbytes) nogil
  herr_t H5Dread_chunk(hid_t dset_id, hid_t dxpl_id, const hsize_t *offset, uint32_t *filters, void *buf) nogil

  # Chunk query functions
  herr_t H5Dget_num_chunks(hid_t dset_id, hid_t fspace_id, hsize_t *nchunks)
  herr_t H5Dget_chunk_info(hid_t dset_id, hid_t fspace_id, hsize_t chk_idx, hsize_t *offset, unsigned *filter_mask, haddr_t *addr, hsize_t *size)
  herr_t H5Dget_chunk_info_by_coord(hid_t dset_id, const hsize_t *offset, unsigned *filter_mask, haddr_t *addr, hsize_t *size)
  1.10.10-1.10.99 herr_t H5Dchunk_iter(hid_t dset_id, hid_t dxpl_id, H5D_chunk_iter_op_t cb, void *op_data)
  1.12.3    herr_t H5Dchunk_iter(hid_t dset_id, hid_t dxpl_id, H5D_chunk_iter_op_t cb, void *op_data)


  # === H5E - Minimal error-handling interface ================================

  # The error interfaces used by h5py are exposed directly through Cython
  # code in h5py._errors, so wrappers are not autogenerated for them.


  # === H5F - File API ========================================================

  hid_t     H5Fcreate(char *filename, unsigned int flags, hid_t create_plist, hid_t access_plist)
  hid_t     H5Fopen(char *name, unsigned flags, hid_t access_id)
  herr_t    H5Fclose (hid_t file_id)
  htri_t    H5Fis_hdf5(char *name)
  herr_t    H5Fflush(hid_t object_id, H5F_scope_t scope) nogil

  hid_t     H5Freopen(hid_t file_id)
  herr_t    H5Fmount(hid_t loc_id, char *name, hid_t child_id, hid_t plist_id)
  herr_t    H5Funmount(hid_t loc_id, char *name)
  herr_t    H5Fget_filesize(hid_t file_id, hsize_t *size)
  hid_t     H5Fget_create_plist(hid_t file_id )
  hid_t     H5Fget_access_plist(hid_t file_id)
  hssize_t  H5Fget_freespace(hid_t file_id)
  ssize_t   H5Fget_name(hid_t obj_id, char *name, size_t size)
  int       H5Fget_obj_count(hid_t file_id, unsigned int types)
  int       H5Fget_obj_ids(hid_t file_id, unsigned int types, int max_objs, hid_t *obj_id_list)
  herr_t    H5Fget_vfd_handle(hid_t file_id, hid_t fapl_id, void **file_handle)

  herr_t    H5Fget_intent(hid_t file_id, unsigned int *intent)
  herr_t    H5Fget_mdc_config(hid_t file_id, H5AC_cache_config_t *config_ptr)
  herr_t    H5Fget_mdc_hit_rate(hid_t file_id, double *hit_rate_ptr)
  herr_t    H5Fget_mdc_size(hid_t file_id, size_t *max_size_ptr, size_t *min_clean_size_ptr, size_t *cur_size_ptr, int *cur_num_entries_ptr)
  herr_t    H5Freset_mdc_hit_rate_stats(hid_t file_id)
  herr_t    H5Fset_mdc_config(hid_t file_id, H5AC_cache_config_t *config_ptr)

  # File Image Operations
  ssize_t H5Fget_file_image(hid_t file_id, void *buf_ptr, size_t buf_len)
  hid_t H5LTopen_file_image(void *buf_ptr, size_t buf_len, unsigned int flags)

  # MPI functions
  MPI herr_t H5Fset_mpi_atomicity(hid_t file_id, hbool_t flag)
  MPI herr_t H5Fget_mpi_atomicity(hid_t file_id, hbool_t *flag)

  # SWMR functions
  herr_t H5Fstart_swmr_write(hid_t file_id)

  # Page Buffering functions
  herr_t    H5Freset_page_buffering_stats(hid_t file_id)
  herr_t    H5Fget_page_buffering_stats(hid_t file_id, unsigned *accesses, unsigned *hits, unsigned *misses, unsigned *evictions, unsigned *bypasses)

  # === H5FD - Virtual File Layer =========================================================

  hid_t     H5FDregister(H5FD_class_t *cls_ptr)
  herr_t    H5FDunregister(hid_t driver_id)


  # === H5G - Groups API ======================================================

  herr_t    H5Gclose(hid_t group_id)
  herr_t    H5Glink2( hid_t curr_loc_id, char *current_name, H5G_link_t link_type, hid_t new_loc_id, char *new_name)

  herr_t    H5Gunlink (hid_t file_id, char *name)
  herr_t    H5Gmove2(hid_t src_loc_id, char *src_name, hid_t dst_loc_id, char *dst_name)
  herr_t    H5Gget_num_objs(hid_t loc_id, hsize_t*  num_obj)
  int       H5Gget_objname_by_idx(hid_t loc_id, hsize_t idx, char *name, size_t size)
  int       H5Gget_objtype_by_idx(hid_t loc_id, hsize_t idx)

  herr_t    H5Giterate(hid_t loc_id, char *name, int *idx, H5G_iterate_t op, void* data)
  herr_t    H5Gget_objinfo(hid_t loc_id, char* name, int follow_link, H5G_stat_t *statbuf)

  herr_t    H5Gget_linkval(hid_t loc_id, char *name, size_t size, char *value)
  herr_t    H5Gset_comment(hid_t loc_id, char *name, char *comment)
  int       H5Gget_comment(hid_t loc_id, char *name, size_t bufsize, char *comment)

  hid_t     H5Gcreate_anon( hid_t loc_id, hid_t gcpl_id, hid_t gapl_id)
  hid_t     H5Gcreate(hid_t loc_id, char *name, hid_t lcpl_id, hid_t gcpl_id, hid_t gapl_id)
  hid_t     H5Gopen( hid_t loc_id, char * name, hid_t gapl_id)
  herr_t    H5Gget_info( hid_t group_id, H5G_info_t *group_info)
  herr_t    H5Gget_info_by_name( hid_t loc_id, char *group_name, H5G_info_t *group_info, hid_t lapl_id)
  hid_t     H5Gget_create_plist(hid_t group_id)


  # === H5I - Identifier and reflection interface =============================

  H5I_type_t H5Iget_type(hid_t obj_id)
  ssize_t    H5Iget_name( hid_t obj_id, char *name, size_t size)
  hid_t      H5Iget_file_id(hid_t obj_id)
  int        H5Idec_ref(hid_t obj_id) nogil
  int        H5Iget_ref(hid_t obj_id)
  int        H5Iinc_ref(hid_t obj_id)
  htri_t     H5Iis_valid( hid_t obj_id )


  # === H5L - Links interface =================================================

  herr_t    H5Lmove(hid_t src_loc, char *src_name, hid_t dst_loc, char *dst_name, hid_t lcpl_id, hid_t lapl_id)
  herr_t    H5Lcopy(hid_t src_loc, char *src_name, hid_t dst_loc, char *dst_name, hid_t lcpl_id, hid_t lapl_id)
  herr_t    H5Lcreate_hard(hid_t cur_loc, char *cur_name, hid_t dst_loc, char *dst_name, hid_t lcpl_id, hid_t lapl_id)
  herr_t    H5Lcreate_soft(char *link_target, hid_t link_loc_id, char *link_name, hid_t lcpl_id, hid_t lapl_id)
  herr_t    H5Ldelete(hid_t loc_id, char *name, hid_t lapl_id)
  herr_t    H5Ldelete_by_idx(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, hid_t lapl_id)
  herr_t    H5Lget_val(hid_t loc_id, char *name, void *bufout, size_t size, hid_t lapl_id)
  herr_t    H5Lget_val_by_idx(hid_t loc_id, char *group_name,  H5_index_t idx_type, H5_iter_order_t order, hsize_t n, void *bufout, size_t size, hid_t lapl_id)
  htri_t    H5Lexists(hid_t loc_id, char *name, hid_t lapl_id)
  herr_t    H5Lget_info(hid_t loc_id, char *name, H5L_info_t *linfo, hid_t lapl_id)
  herr_t    H5Lget_info_by_idx(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, H5L_info_t *linfo, hid_t lapl_id)
  ssize_t   H5Lget_name_by_idx(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, char *name, size_t size, hid_t lapl_id)
  herr_t    H5Literate(hid_t grp_id, H5_index_t idx_type, H5_iter_order_t order, hsize_t *idx, H5L_iterate_t op, void *op_data)
  herr_t    H5Literate_by_name(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t *idx, H5L_iterate_t op, void *op_data, hid_t lapl_id)
  herr_t    H5Lvisit(hid_t grp_id, H5_index_t idx_type, H5_iter_order_t order, H5L_iterate_t op, void *op_data)
  herr_t    H5Lvisit_by_name(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, H5L_iterate_t op, void *op_data, hid_t lapl_id)
  herr_t    H5Lunpack_elink_val(void *ext_linkval, size_t link_size, unsigned *flags, const char **filename, const char **obj_path)
  herr_t    H5Lcreate_external(char *file_name, char *obj_name, hid_t link_loc_id, char *link_name, hid_t lcpl_id, hid_t lapl_id)


  # === H5O - General object operations =======================================

  hid_t     H5Oopen(hid_t loc_id, char *name, hid_t lapl_id)
  hid_t     H5Oopen_by_addr(hid_t loc_id, haddr_t addr)
  hid_t     H5Oopen_by_idx(hid_t loc_id, char *group_name, H5_index_t idx_type, H5_iter_order_t order, hsize_t n, hid_t lapl_id)

  herr_t    H5Oget_info(hid_t loc_id, H5O_info_t *oinfo)
  herr_t    H5Oget_info_by_name(hid_t loc_id, char *name, H5O_info_t *oinfo, hid_t lapl_id)
  herr_t    H5Oget_info_by_idx(hid_t loc_id, char *group_name,  H5_index_t idx_type, H5_iter_order_t order, hsize_t n, H5O_info_t *oinfo, hid_t lapl_id)

  herr_t    H5Olink(hid_t obj_id, hid_t new_loc_id, char *new_name, hid_t lcpl_id, hid_t lapl_id)
  herr_t    H5Ocopy(hid_t src_loc_id, char *src_name, hid_t dst_loc_id,  char *dst_name, hid_t ocpypl_id, hid_t lcpl_id)

  herr_t    H5Oincr_refcount(hid_t object_id)
  herr_t    H5Odecr_refcount(hid_t object_id)

  herr_t    H5Oset_comment(hid_t obj_id, char *comment)
  herr_t    H5Oset_comment_by_name(hid_t loc_id, char *name,  char *comment, hid_t lapl_id)
  ssize_t   H5Oget_comment(hid_t obj_id, char *comment, size_t bufsize)
  ssize_t   H5Oget_comment_by_name(hid_t loc_id, char *name, char *comment, size_t bufsize, hid_t lapl_id)

  herr_t    H5Ovisit(hid_t obj_id, H5_index_t idx_type, H5_iter_order_t order,  H5O_iterate_t op, void *op_data)
  herr_t    H5Ovisit_by_name(hid_t loc_id, char *obj_name, H5_index_t idx_type, H5_iter_order_t order, H5O_iterate_t op, void *op_data, hid_t lapl_id)

  herr_t    H5Oclose(hid_t object_id)

  # Version-limited functions
  htri_t H5Oexists_by_name(hid_t loc_id, char * name, hid_t lapl_id )


  # === H5P - Property list API ===============================================

  # General operations
  hid_t     H5Pcreate(hid_t plist_id)
  hid_t     H5Pcopy(hid_t plist_id)
  hid_t     H5Pget_class(hid_t plist_id)
  herr_t    H5Pclose(hid_t plist_id)
  htri_t    H5Pequal( hid_t id1, hid_t id2 )
  herr_t    H5Pclose_class(hid_t id)

  # File creation
  herr_t    H5Pget_version(hid_t plist, unsigned int *super_, unsigned int* freelist,  unsigned int *stab, unsigned int *shhdr)
  herr_t    H5Pset_userblock(hid_t plist, hsize_t size)
  herr_t    H5Pget_userblock(hid_t plist, hsize_t * size)
  herr_t    H5Pset_sizes(hid_t plist, size_t sizeof_addr, size_t sizeof_size)
  herr_t    H5Pget_sizes(hid_t plist, size_t *sizeof_addr, size_t *sizeof_size)
  herr_t    H5Pset_sym_k(hid_t plist, unsigned int ik, unsigned int lk)
  herr_t    H5Pget_sym_k(hid_t plist, unsigned int *ik, unsigned int *lk)
  herr_t    H5Pset_istore_k(hid_t plist, unsigned int ik)
  herr_t    H5Pget_istore_k(hid_t plist, unsigned int *ik)
  herr_t    H5Pset_file_space_strategy(hid_t fcpl, H5F_fspace_strategy_t strategy, hbool_t persist, hsize_t threshold)
  herr_t    H5Pget_file_space_strategy(hid_t fcpl, H5F_fspace_strategy_t *strategy, hbool_t *persist, hsize_t *threshold)
  herr_t    H5Pset_file_space_page_size(hid_t plist_id, hsize_t fsp_size)
  herr_t    H5Pget_file_space_page_size(hid_t plist_id, hsize_t *fsp_size)

  # File access
  herr_t    H5Pset_fclose_degree(hid_t fapl_id, H5F_close_degree_t fc_degree)
  herr_t    H5Pget_fclose_degree(hid_t fapl_id, H5F_close_degree_t *fc_degree)
  herr_t    H5Pset_fapl_core( hid_t fapl_id, size_t increment, hbool_t backing_store)
  herr_t    H5Pget_fapl_core( hid_t fapl_id, size_t *increment, hbool_t *backing_store)
  herr_t    H5Pset_fapl_family ( hid_t fapl_id,  hsize_t memb_size, hid_t memb_fapl_id )
  herr_t    H5Pget_fapl_family ( hid_t fapl_id, hsize_t *memb_size, hid_t *memb_fapl_id )
  herr_t    H5Pset_family_offset ( hid_t fapl_id, hsize_t offset)
  herr_t    H5Pget_family_offset ( hid_t fapl_id, hsize_t *offset)
  ROS3 herr_t    H5Pget_fapl_ros3(hid_t fapl_id, H5FD_ros3_fapl_t *fa_out)
  ROS3 herr_t    H5Pset_fapl_ros3(hid_t fapl_id, H5FD_ros3_fapl_t *fa)
  ROS3 1.14.2 herr_t H5Pget_fapl_ros3_token(hid_t fapl_id, size_t size, char *token)
  ROS3 1.14.2 herr_t H5Pset_fapl_ros3_token(hid_t fapl_id, const char *token)
  DIRECT_VFD herr_t H5Pget_fapl_direct(hid_t fapl_id, size_t *boundary, size_t *block_size, size_t *cbuf_size);
  DIRECT_VFD herr_t H5Pset_fapl_direct(hid_t fapl_id, size_t alignment, size_t block_size, size_t cbuf_size);
  herr_t    H5Pset_fapl_log(hid_t fapl_id, char *logfile, unsigned int flags, size_t buf_size)
  herr_t    H5Pset_fapl_multi(hid_t fapl_id, H5FD_mem_t *memb_map, hid_t *memb_fapl, const char * const *memb_name, haddr_t *memb_addr, hbool_t relax)
  herr_t    H5Pset_cache(hid_t plist_id, int mdc_nelmts, int rdcc_nelmts,  size_t rdcc_nbytes, double rdcc_w0)
  herr_t    H5Pget_cache(hid_t plist_id, int *mdc_nelmts, size_t *rdcc_nelmts, size_t *rdcc_nbytes, double *rdcc_w0)
  herr_t    H5Pset_fapl_sec2(hid_t fapl_id)
  herr_t    H5Pset_fapl_stdio(hid_t fapl_id)
  herr_t    H5Pset_fapl_split(hid_t fapl_id, const char *meta_ext, hid_t meta_plist_id, const char *raw_ext, hid_t raw_plist_id)
  herr_t    H5Pset_driver(hid_t plist_id, hid_t driver_id, void *driver_info)
  hid_t     H5Pget_driver(hid_t fapl_id)
  void*     H5Pget_driver_info(hid_t plist_id)
  herr_t    H5Pget_mdc_config(hid_t plist_id, H5AC_cache_config_t *config_ptr)
  herr_t    H5Pset_mdc_config(hid_t plist_id, H5AC_cache_config_t *config_ptr)
  herr_t    H5Pset_meta_block_size(hid_t fapl_id, hsize_t size)
  herr_t    H5Pget_meta_block_size(hid_t fapl_id, hsize_t * size)
  herr_t H5Pset_file_image(hid_t plist_id, void *buf_ptr, size_t buf_len)
  herr_t H5Pset_page_buffer_size(hid_t plist_id, size_t buf_size, unsigned min_meta_per, unsigned min_raw_per)
  herr_t H5Pget_page_buffer_size(hid_t plist_id, size_t *buf_size, unsigned *min_meta_per, unsigned *min_raw_per)
  1.10.7-1.10.99 herr_t H5Pget_file_locking(hid_t fapl_id, hbool_t *use_file_locking, hbool_t *ignore_when_disabled)
  1.12.1 herr_t H5Pget_file_locking(hid_t fapl_id, hbool_t *use_file_locking, hbool_t *ignore_when_disabled)
  1.10.7-1.10.99 herr_t H5Pset_file_locking(hid_t fapl_id, hbool_t use_file_locking, hbool_t ignore_when_disabled)
  1.12.1 herr_t H5Pset_file_locking(hid_t fapl_id, hbool_t use_file_locking, hbool_t ignore_when_disabled)


  # Dataset creation
  herr_t        H5Pset_layout(hid_t plist, int layout)
  H5D_layout_t  H5Pget_layout(hid_t plist)
  herr_t        H5Pset_chunk(hid_t plist, int ndims, hsize_t * dim)
  int           H5Pget_chunk(hid_t plist, int max_ndims, hsize_t * dims )
  herr_t        H5Pset_deflate( hid_t plist, int level)
  herr_t        H5Pset_fill_value(hid_t plist_id, hid_t type_id, void *value )
  herr_t        H5Pget_fill_value(hid_t plist_id, hid_t type_id, void *value )
  herr_t        H5Pfill_value_defined(hid_t plist_id, H5D_fill_value_t *status )
  herr_t        H5Pset_fill_time(hid_t plist_id, H5D_fill_time_t fill_time )
  herr_t        H5Pget_fill_time(hid_t plist_id, H5D_fill_time_t *fill_time )
  herr_t        H5Pset_alloc_time(hid_t plist_id, H5D_alloc_time_t alloc_time )
  herr_t        H5Pget_alloc_time(hid_t plist_id, H5D_alloc_time_t *alloc_time )
  herr_t        H5Pset_filter(hid_t plist, H5Z_filter_t filter, unsigned int flags, size_t cd_nelmts, unsigned int* cd_values )
  htri_t        H5Pall_filters_avail(hid_t dcpl_id)
  int           H5Pget_nfilters(hid_t plist)
  H5Z_filter_t  H5Pget_filter(hid_t plist, unsigned int filter_number,   unsigned int *flags, size_t *cd_nelmts,  unsigned int* cd_values, size_t namelen, char* name, unsigned int* filter_config)
  herr_t        H5Pget_filter_by_id( hid_t plist_id, H5Z_filter_t filter,  unsigned int *flags, size_t *cd_nelmts,  unsigned int* cd_values, size_t namelen, char* name, unsigned int* filter_config)
  herr_t        H5Pmodify_filter(hid_t plist, H5Z_filter_t filter, unsigned int flags, size_t cd_nelmts, unsigned int *cd_values)
  herr_t        H5Premove_filter(hid_t plist, H5Z_filter_t filter )
  herr_t        H5Pset_fletcher32(hid_t plist)
  herr_t        H5Pset_shuffle(hid_t plist_id)
  herr_t        H5Pset_szip(hid_t plist, unsigned int options_mask, unsigned int pixels_per_block)
  herr_t        H5Pset_scaleoffset(hid_t plist, H5Z_SO_scale_type_t scale_type, int scale_factor)
  herr_t        H5Pset_external(hid_t plist_id, const char *name, off_t offset, hsize_t size)
  int           H5Pget_external_count(hid_t plist_id)
  herr_t        H5Pget_external(hid_t plist, unsigned idx, size_t name_size, char *name, off_t *offset, hsize_t *size)
  ssize_t H5Pget_virtual_dsetname(hid_t dcpl_id, size_t index, char *name, size_t size)
  ssize_t H5Pget_virtual_filename(hid_t dcpl_id, size_t index, char *name, size_t size)
  herr_t H5Pget_virtual_count(hid_t dcpl_id, size_t *count)
  herr_t H5Pset_virtual(hid_t dcpl_id, hid_t vspace_id, const char *src_file_name, const char *src_dset_name, hid_t src_space_id)
  hid_t H5Pget_virtual_vspace(hid_t dcpl_id, size_t index)
  hid_t H5Pget_virtual_srcspace(hid_t dcpl_id, size_t index)

  # Dataset access
  herr_t    H5Pset_edc_check(hid_t plist, H5Z_EDC_t check)
  H5Z_EDC_t H5Pget_edc_check(hid_t plist)
  herr_t    H5Pset_chunk_cache( hid_t dapl_id, size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0 )
  herr_t    H5Pget_chunk_cache( hid_t dapl_id, size_t *rdcc_nslots, size_t *rdcc_nbytes, double *rdcc_w0 )
  herr_t H5Pget_efile_prefix(hid_t dapl_id, char *prefix, ssize_t size)
  herr_t H5Pset_efile_prefix(hid_t dapl_id, char *prefix)
  herr_t H5Pset_virtual_view(hid_t plist_id, H5D_vds_view_t view)
  herr_t H5Pget_virtual_view(hid_t plist_id, H5D_vds_view_t *view)
  herr_t H5Pset_virtual_printf_gap(hid_t plist_id, hsize_t gap_size)
  herr_t H5Pget_virtual_printf_gap(hid_t plist_id, hsize_t *gap_size)
  ssize_t H5Pget_virtual_prefix(hid_t dapl_id, char *prefix, ssize_t size)
  herr_t H5Pset_virtual_prefix(hid_t dapl_id, char *prefix)

  MPI herr_t H5Pset_dxpl_mpio( hid_t dxpl_id, H5FD_mpio_xfer_t xfer_mode )
  MPI herr_t H5Pget_dxpl_mpio( hid_t dxpl_id, H5FD_mpio_xfer_t* xfer_mode )

  # Other properties
  herr_t    H5Pset_sieve_buf_size(hid_t fapl_id, size_t size)
  herr_t    H5Pget_sieve_buf_size(hid_t fapl_id, size_t *size)

  herr_t    H5Pset_nlinks(hid_t plist_id, size_t nlinks)
  herr_t    H5Pget_nlinks(hid_t plist_id, size_t *nlinks)
  herr_t    H5Pset_elink_prefix(hid_t plist_id, char *prefix)
  ssize_t   H5Pget_elink_prefix(hid_t plist_id, char *prefix, size_t size)
  hid_t     H5Pget_elink_fapl(hid_t lapl_id)
  herr_t    H5Pset_elink_fapl(hid_t lapl_id, hid_t fapl_id)
  herr_t    H5Pget_elink_acc_flags(hid_t lapl_id, unsigned int *flags)
  herr_t    H5Pset_elink_acc_flags(hid_t lapl_id, unsigned int flags)

  herr_t    H5Pset_create_intermediate_group(hid_t plist_id, unsigned crt_intmd)
  herr_t    H5Pget_create_intermediate_group(hid_t plist_id, unsigned *crt_intmd)

  herr_t    H5Pset_copy_object(hid_t plist_id, unsigned crt_intmd)
  herr_t    H5Pget_copy_object(hid_t plist_id, unsigned *crt_intmd)

  herr_t    H5Pset_char_encoding(hid_t plist_id, H5T_cset_t encoding)
  herr_t    H5Pget_char_encoding(hid_t plist_id, H5T_cset_t *encoding)

  herr_t    H5Pset_attr_creation_order(hid_t plist_id, unsigned crt_order_flags)
  herr_t    H5Pget_attr_creation_order(hid_t plist_id, unsigned *crt_order_flags)
  herr_t    H5Pset_obj_track_times( hid_t ocpl_id, hbool_t track_times )
  herr_t    H5Pget_obj_track_times( hid_t ocpl_id, hbool_t *track_times )

  herr_t    H5Pset_local_heap_size_hint(hid_t plist_id, size_t size_hint)
  herr_t    H5Pget_local_heap_size_hint(hid_t plist_id, size_t *size_hint)
  herr_t    H5Pset_link_phase_change(hid_t plist_id, unsigned max_compact, unsigned min_dense)
  herr_t    H5Pget_link_phase_change(hid_t plist_id, unsigned *max_compact , unsigned *min_dense)
  herr_t    H5Pset_attr_phase_change(hid_t ocpl_id, unsigned max_compact, unsigned min_dense)
  herr_t    H5Pget_attr_phase_change(hid_t ocpl_id, unsigned *max_compact , unsigned *min_dense)
  herr_t    H5Pset_est_link_info(hid_t plist_id, unsigned est_num_entries, unsigned est_name_len)
  herr_t    H5Pget_est_link_info(hid_t plist_id, unsigned *est_num_entries , unsigned *est_name_len)
  herr_t    H5Pset_link_creation_order(hid_t plist_id, unsigned crt_order_flags)
  herr_t    H5Pget_link_creation_order(hid_t plist_id, unsigned *crt_order_flags)

  herr_t    H5Pset_libver_bounds(hid_t fapl_id, H5F_libver_t libver_low, H5F_libver_t libver_high)
  herr_t    H5Pget_libver_bounds(hid_t fapl_id, H5F_libver_t *libver_low, H5F_libver_t *libver_high)

  herr_t    H5Pset_alignment(hid_t plist_id, hsize_t threshold, hsize_t alignment)
  herr_t    H5Pget_alignment(hid_t plist_id, hsize_t *threshold, hsize_t *alignment)

  # MPI functions
  MPI herr_t H5Pset_fapl_mpio(hid_t fapl_id, MPI_Comm comm, MPI_Info info)
  MPI herr_t H5Pget_fapl_mpio(hid_t fapl_id, MPI_Comm *comm, MPI_Info *info)

  # === H5PL - Plugin Interface ===================================================

  herr_t H5PLappend(const char *search_path);
  herr_t H5PLprepend(const char *search_path);
  herr_t H5PLreplace(const char *search_path, unsigned int index);
  herr_t H5PLinsert(const char *search_path, unsigned int index);
  herr_t H5PLremove(unsigned int index);
  ssize_t H5PLget(unsigned int index, char *path_buf, size_t buf_size);
  herr_t H5PLsize(unsigned int *num_paths);

# === H5R - Reference API ===================================================

  herr_t    H5Rcreate(void *ref, hid_t loc_id, char *name, H5R_type_t ref_type,  hid_t space_id)
  hid_t     H5Rdereference(hid_t obj_id, hid_t oapl_id, H5R_type_t ref_type, void *ref)
  hid_t     H5Rget_region(hid_t dataset, H5R_type_t ref_type, void *ref)
  herr_t    H5Rget_obj_type(hid_t id, H5R_type_t ref_type, void *ref, H5O_type_t *obj_type)
  ssize_t   H5Rget_name(hid_t loc_id, H5R_type_t ref_type, void *ref, char *name, size_t size)


  # === H5S - Dataspaces ========================================================

  # Basic operations
  hid_t     H5Screate(H5S_class_t type)
  hid_t     H5Scopy(hid_t space_id )
  herr_t    H5Sclose(hid_t space_id)

  # Simple dataspace operations
  hid_t     H5Screate_simple(int rank, hsize_t *dims, hsize_t *maxdims)
  htri_t    H5Sis_simple(hid_t space_id)
  herr_t    H5Soffset_simple(hid_t space_id, hssize_t *offset )

  int       H5Sget_simple_extent_ndims(hid_t space_id)
  int       H5Sget_simple_extent_dims(hid_t space_id, hsize_t *dims, hsize_t *maxdims)
  hssize_t  H5Sget_simple_extent_npoints(hid_t space_id)
  H5S_class_t H5Sget_simple_extent_type(hid_t space_id)

  # Extents
  herr_t    H5Sextent_copy(hid_t dest_space_id, hid_t source_space_id )
  herr_t    H5Sset_extent_simple(hid_t space_id, int rank, hsize_t *current_size, hsize_t *maximum_size )
  herr_t    H5Sset_extent_none(hid_t space_id)

  # Dataspace selection
  H5S_sel_type H5Sget_select_type(hid_t space_id)
  hssize_t  H5Sget_select_npoints(hid_t space_id)
  herr_t    H5Sget_select_bounds(hid_t space_id, hsize_t *start, hsize_t *end)

  herr_t    H5Sselect_all(hid_t space_id)
  herr_t    H5Sselect_none(hid_t space_id)
  htri_t    H5Sselect_valid(hid_t space_id)

  hssize_t  H5Sget_select_elem_npoints(hid_t space_id)
  herr_t    H5Sget_select_elem_pointlist(hid_t space_id, hsize_t startpoint,  hsize_t numpoints, hsize_t *buf)
  herr_t    H5Sselect_elements(hid_t space_id, H5S_seloper_t op,  size_t num_elements, const hsize_t *coord)

  hssize_t  H5Sget_select_hyper_nblocks(hid_t space_id )
  herr_t    H5Sget_select_hyper_blocklist(hid_t space_id,  hsize_t startblock, hsize_t numblocks, hsize_t *buf )
  herr_t    H5Sselect_hyperslab(hid_t space_id, H5S_seloper_t op,  hsize_t *start, hsize_t *_stride, hsize_t *count, hsize_t *_block)


  herr_t    H5Sencode(hid_t obj_id, void *buf, size_t *nalloc)
  hid_t     H5Sdecode(void *buf)

  htri_t H5Sis_regular_hyperslab(hid_t spaceid)
  htri_t H5Sget_regular_hyperslab(hid_t spaceid, hsize_t* start, hsize_t* stride, hsize_t* count, hsize_t* block)

  1.10.7 htri_t H5Sselect_shape_same(hid_t space1_id, hid_t space2_id)


  # === H5T - Datatypes =========================================================

  # General operations
  hid_t         H5Tcreate(H5T_class_t type, size_t size)
  hid_t         H5Topen(hid_t loc, char* name, hid_t tapl_id)
  htri_t        H5Tcommitted(hid_t type)
  hid_t         H5Tcopy(hid_t type_id)
  htri_t        H5Tequal(hid_t type_id1, hid_t type_id2 )
  herr_t        H5Tlock(hid_t type_id)
  H5T_class_t   H5Tget_class(hid_t type_id)
  size_t        H5Tget_size(hid_t type_id)
  hid_t         H5Tget_super(hid_t type)
  htri_t        H5Tdetect_class(hid_t type_id, H5T_class_t dtype_class)
  herr_t        H5Tclose(hid_t type_id)
  hid_t         H5Tget_native_type(hid_t type_id, H5T_direction_t direction)
  herr_t        H5Tcommit(hid_t loc_id, char *name, hid_t dtype_id, hid_t lcpl_id, hid_t tcpl_id, hid_t tapl_id)
  hid_t         H5Tget_create_plist(hid_t dtype_id)

  hid_t         H5Tdecode(unsigned char *buf)
  herr_t        H5Tencode(hid_t obj_id, unsigned char *buf, size_t *nalloc)

  H5T_conv_t    H5Tfind(hid_t src_id, hid_t dst_id, H5T_cdata_t **pcdata)
  herr_t        H5Tconvert(hid_t src_id, hid_t dst_id, size_t nelmts, void *buf, void *background, hid_t plist_id) nogil
  herr_t        H5Tregister(H5T_pers_t pers, char *name, hid_t src_id, hid_t dst_id, H5T_conv_t func) nogil
  herr_t        H5Tunregister(H5T_pers_t pers, char *name, hid_t src_id, hid_t dst_id, H5T_conv_t func) nogil

  # Atomic datatypes
  herr_t        H5Tset_size(hid_t type_id, size_t size)

  H5T_order_t   H5Tget_order(hid_t type_id)
  herr_t        H5Tset_order(hid_t type_id, H5T_order_t order)

  hsize_t       H5Tget_precision(hid_t type_id)
  herr_t        H5Tset_precision(hid_t type_id, size_t prec)

  int           H5Tget_offset(hid_t type_id)
  herr_t        H5Tset_offset(hid_t type_id, size_t offset)

  herr_t        H5Tget_pad(hid_t type_id, H5T_pad_t * lsb, H5T_pad_t * msb )
  herr_t        H5Tset_pad(hid_t type_id, H5T_pad_t lsb, H5T_pad_t msb )

  H5T_sign_t    H5Tget_sign(hid_t type_id)
  herr_t        H5Tset_sign(hid_t type_id, H5T_sign_t sign)

  herr_t        H5Tget_fields(hid_t type_id, size_t *spos, size_t *epos, size_t *esize, size_t *mpos, size_t *msize )
  herr_t        H5Tset_fields(hid_t type_id, size_t spos, size_t epos, size_t esize, size_t mpos, size_t msize )

  size_t        H5Tget_ebias(hid_t type_id)
  herr_t        H5Tset_ebias(hid_t type_id, size_t ebias)
  H5T_norm_t    H5Tget_norm(hid_t type_id)
  herr_t        H5Tset_norm(hid_t type_id, H5T_norm_t norm)
  H5T_pad_t     H5Tget_inpad(hid_t type_id)
  herr_t        H5Tset_inpad(hid_t type_id, H5T_pad_t inpad)
  H5T_cset_t    H5Tget_cset(hid_t type_id)
  herr_t        H5Tset_cset(hid_t type_id, H5T_cset_t cset)
  H5T_str_t     H5Tget_strpad(hid_t type_id)
  herr_t        H5Tset_strpad(hid_t type_id, H5T_str_t strpad)

  # VLENs
  hid_t         H5Tvlen_create(hid_t base_type_id)
  htri_t        H5Tis_variable_str(hid_t dtype_id)

  # Compound data types
  int           H5Tget_nmembers(hid_t type_id)
  H5T_class_t   H5Tget_member_class(hid_t type_id, int member_no)
  char*         H5Tget_member_name(hid_t type_id, unsigned membno)
  hid_t         H5Tget_member_type(hid_t type_id, unsigned membno)
  int           H5Tget_member_offset(hid_t type_id, int membno)
  int           H5Tget_member_index(hid_t type_id, char* name)
  herr_t        H5Tinsert(hid_t parent_id, char *name, size_t offset, hid_t member_id)
  herr_t        H5Tpack(hid_t type_id)

  # Enumerated types
  hid_t         H5Tenum_create(hid_t base_id)
  herr_t        H5Tenum_insert(hid_t type, char *name, void *value)
  herr_t        H5Tenum_nameof( hid_t type, void *value, char *name, size_t size )
  herr_t        H5Tenum_valueof( hid_t type, char *name, void *value )
  herr_t        H5Tget_member_value(hid_t type,  unsigned int memb_no, void *value )

  # Array data types
  hid_t         H5Tarray_create(hid_t base_id, int ndims, hsize_t *dims)
  int           H5Tget_array_ndims(hid_t type_id)
  int           H5Tget_array_dims(hid_t type_id, hsize_t *dims)

  # Opaque data types
  herr_t    H5Tset_tag(hid_t type_id, char* tag)
  char*     H5Tget_tag(hid_t type_id)


  # === H5Z - Filters =========================================================

  htri_t    H5Zfilter_avail(H5Z_filter_t id_)
  herr_t    H5Zget_filter_info(H5Z_filter_t filter_, unsigned int *filter_config_flags)
  herr_t    H5Zregister(const void *cls)
  herr_t    H5Zunregister(H5Z_filter_t id_)

hdf5_hl:

  # === H5DS - Dimension Scales API =============================================

  herr_t  H5DSattach_scale(hid_t did, hid_t dsid, unsigned int idx)
  herr_t  H5DSdetach_scale(hid_t did, hid_t dsid, unsigned int idx)
  htri_t  H5DSis_attached(hid_t did, hid_t dsid, unsigned int idx)
  herr_t  H5DSset_scale(hid_t dsid, char *dimname)
  int     H5DSget_num_scales(hid_t did, unsigned int dim)

  herr_t  H5DSset_label(hid_t did, unsigned int idx, char *label)
  ssize_t H5DSget_label(hid_t did, unsigned int idx, char *label, size_t size)

  ssize_t H5DSget_scale_name(hid_t did, char *name, size_t size)
  htri_t  H5DSis_scale(hid_t did)
  herr_t  H5DSiterate_scales(hid_t did, unsigned int dim, int *idx, H5DS_iterate_t visitor, void *visitor_data)

h5py-3.13.0/h5py/api_types_ext.pxd
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

# === Standard C library types and functions ==================================

include 'config.pxi'

IF MPI:
    from mpi4py.libmpi cimport MPI_Comm, MPI_Info

from libc.stdlib cimport malloc, free
from libc.string cimport strlen, strchr, strcpy, strncpy, strcmp,\
                         strdup, strdup, memcpy, memset
ctypedef long size_t
from libc.time cimport time_t

from libc.stdint cimport int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, int64_t, uint64_t

cdef extern from *:
    """
    #if !(defined(_WIN32) || defined(MS_WINDOWS) || defined(_MSC_VER))
      #include <sys/types.h>
    #endif
    """
    ctypedef long ssize_t

# Can't use Cython defs because they keep moving them around
cdef extern from "Python.h":
    ctypedef void PyObject
    ctypedef ssize_t Py_ssize_t
    ctypedef size_t Py_uintptr_t

    PyObject* PyErr_Occurred()
    void PyErr_SetString(object type, char *message)
    object PyBytes_FromStringAndSize(char *v, Py_ssize_t len)

# === Compatibility definitions and macros for h5py ===========================

cdef extern from "api_compat.h":

    size_t h5py_size_n64
    size_t h5py_size_n128
    size_t h5py_offset_n64_real
    size_t h5py_offset_n64_imag
    size_t h5py_offset_n128_real
    size_t h5py_offset_n128_imag

    IF COMPLEX256_SUPPORT:
        size_t h5py_size_n256
        size_t h5py_offset_n256_real
        size_t h5py_offset_n256_imag

cdef extern from "lzf_filter.h":

    int H5PY_FILTER_LZF
    int register_lzf() except *

h5py-3.13.0/h5py/api_types_hdf5.pxd
# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .api_types_ext cimport *

include "config.pxi"

cdef extern from "hdf5.h":
  # Basic types
  ctypedef long int hid_t
  ctypedef int hbool_t
  ctypedef int herr_t
  ctypedef int htri_t
  ctypedef long long hsize_t
  ctypedef signed long long hssize_t
  ctypedef signed long long haddr_t
  ctypedef long int off_t

  ctypedef struct hvl_t:
    size_t len                 # Length of VL data (in base type units)
    void *p                    # Pointer to VL data

  int HADDR_UNDEF

  ctypedef enum H5_iter_order_t:
    H5_ITER_UNKNOWN = -1,       # Unknown order
    H5_ITER_INC,                # Increasing order
    H5_ITER_DEC,                # Decreasing order
    H5_ITER_NATIVE,             # No particular order, whatever is fastest
    H5_ITER_N                   # Number of iteration orders

  ctypedef enum H5_index_t:
    H5_INDEX_UNKNOWN = -1,      # Unknown index type
    H5_INDEX_NAME,              # Index on names
    H5_INDEX_CRT_ORDER,         # Index on creation order
    H5_INDEX_N                  # Number of indices defined

# === H5D - Dataset API =======================================================

  ctypedef enum H5D_layout_t:
      H5D_LAYOUT_ERROR    = -1,
      H5D_COMPACT         = 0,
      H5D_CONTIGUOUS      = 1,
      H5D_CHUNKED         = 2,
      H5D_VIRTUAL         = 3,
      H5D_NLAYOUTS        = 4

  ctypedef enum H5D_vds_view_t:
      H5D_VDS_ERROR           = -1,
      H5D_VDS_FIRST_MISSING   = 0,
      H5D_VDS_LAST_AVAILABLE  = 1

  ctypedef enum H5D_alloc_time_t:
    H5D_ALLOC_TIME_ERROR    =-1,
    H5D_ALLOC_TIME_DEFAULT  =0,
    H5D_ALLOC_TIME_EARLY    =1,
    H5D_ALLOC_TIME_LATE        =2,
    H5D_ALLOC_TIME_INCR        =3

  ctypedef enum H5D_space_status_t:
    H5D_SPACE_STATUS_ERROR            =-1,
    H5D_SPACE_STATUS_NOT_ALLOCATED    =0,
    H5D_SPACE_STATUS_PART_ALLOCATED    =1,
    H5D_SPACE_STATUS_ALLOCATED        =2

  ctypedef enum H5D_fill_time_t:
    H5D_FILL_TIME_ERROR    =-1,
    H5D_FILL_TIME_ALLOC =0,
    H5D_FILL_TIME_NEVER    =1,
    H5D_FILL_TIME_IFSET    =2

  ctypedef enum H5D_fill_value_t:
    H5D_FILL_VALUE_ERROR        =-1,
    H5D_FILL_VALUE_UNDEFINED    =0,
    H5D_FILL_VALUE_DEFAULT      =1,
    H5D_FILL_VALUE_USER_DEFINED =2

  ctypedef  herr_t (*H5D_operator_t)(void *elem, hid_t type_id, unsigned ndim,
                    hsize_t *point, void *operator_data) except -1


  IF HDF5_VERSION >= (1, 12, 3) or (HDF5_VERSION >= (1, 10, 10) and HDF5_VERSION < (1, 10, 99)):
    ctypedef int (*H5D_chunk_iter_op_t)(const hsize_t *offset, unsigned filter_mask,
                                        haddr_t addr, hsize_t size, void *op_data) except -1

# === H5F - File API ==========================================================

  # File constants
  cdef enum:
    H5F_ACC_TRUNC
    H5F_ACC_RDONLY
    H5F_ACC_RDWR
    H5F_ACC_EXCL
    H5F_ACC_DEBUG
    H5F_ACC_CREAT
    H5F_ACC_SWMR_WRITE
    H5F_ACC_SWMR_READ

  cdef enum:
    H5LT_FILE_IMAGE_OPEN_RW
    H5LT_FILE_IMAGE_DONT_COPY
    H5LT_FILE_IMAGE_DONT_RELEASE

  # The difference between a single file and a set of mounted files
  cdef enum H5F_scope_t:
    H5F_SCOPE_LOCAL     = 0,    # specified file handle only
    H5F_SCOPE_GLOBAL    = 1,    # entire virtual file
    H5F_SCOPE_DOWN      = 2     # for internal use only

  cdef enum H5F_close_degree_t:
    H5F_CLOSE_DEFAULT = 0,
    H5F_CLOSE_WEAK    = 1,
    H5F_CLOSE_SEMI    = 2,
    H5F_CLOSE_STRONG  = 3

  cdef enum H5F_fspace_strategy_t:
    H5F_FSPACE_STRATEGY_FSM_AGGR = 0,  # FSM, Aggregators, VFD
    H5F_FSPACE_STRATEGY_PAGE     = 1,  # Paged FSM, VFD
    H5F_FSPACE_STRATEGY_AGGR     = 2,  # Aggregators, VFD
    H5F_FSPACE_STRATEGY_NONE     = 3   # VFD

  int H5F_OBJ_FILE
  int H5F_OBJ_DATASET
  int H5F_OBJ_GROUP
  int H5F_OBJ_DATATYPE
  int H5F_OBJ_ATTR
  int H5F_OBJ_ALL
  int H5F_OBJ_LOCAL
  hsize_t H5F_UNLIMITED

  IF HDF5_VERSION < (1,11,4):
    ctypedef enum H5F_libver_t:
      H5F_LIBVER_EARLIEST = 0,        # Use the earliest possible format for storing objects
      H5F_LIBVER_V18 = 1,
      H5F_LIBVER_V110 = 2,
      H5F_LIBVER_NBOUNDS
    int H5F_LIBVER_LATEST  # Use the latest possible format available for storing objects

  IF HDF5_VERSION >= (1, 11, 4) and HDF5_VERSION < (1, 14, 0):
    ctypedef enum H5F_libver_t:
      H5F_LIBVER_EARLIEST = 0,        # Use the earliest possible format for storing objects
      H5F_LIBVER_V18 = 1,
      H5F_LIBVER_V110 = 2,
      H5F_LIBVER_V112 = 3,
      H5F_LIBVER_NBOUNDS
    int H5F_LIBVER_LATEST  # Use the latest possible format available for storing objects

  IF HDF5_VERSION >= (1, 14, 0):
    ctypedef enum H5F_libver_t:
      H5F_LIBVER_EARLIEST = 0,        # Use the earliest possible format for storing objects
      H5F_LIBVER_V18 = 1,
      H5F_LIBVER_V110 = 2,
      H5F_LIBVER_V112 = 3,
      H5F_LIBVER_V114 = 4,
      H5F_LIBVER_NBOUNDS
    int H5F_LIBVER_LATEST  # Use the latest possible format available for storing objects

# === H5FD - Low-level file descriptor API ====================================

  ctypedef enum H5FD_mem_t:
    H5FD_MEM_NOLIST    = -1,
    H5FD_MEM_DEFAULT    = 0,
    H5FD_MEM_SUPER      = 1,
    H5FD_MEM_BTREE      = 2,
    H5FD_MEM_DRAW       = 3,
    H5FD_MEM_GHEAP      = 4,
    H5FD_MEM_LHEAP      = 5,
    H5FD_MEM_OHDR       = 6,
    H5FD_MEM_NTYPES

  # HDF5 uses a clever scheme wherein these are actually init() calls
  # Hopefully Cython won't have a problem with this.
  # Thankfully they are defined but -1 if unavailable
  hid_t H5FD_CORE
  hid_t H5FD_FAMILY
  hid_t H5FD_LOG
  hid_t H5FD_MPIO
  hid_t H5FD_MULTI
  hid_t H5FD_SEC2
  hid_t H5FD_DIRECT
  hid_t H5FD_STDIO
  IF UNAME_SYSNAME == "Windows":
    hid_t H5FD_WINDOWS
  hid_t H5FD_ROS3

  int H5FD_LOG_LOC_READ   # 0x0001
  int H5FD_LOG_LOC_WRITE  # 0x0002
  int H5FD_LOG_LOC_SEEK   # 0x0004
  int H5FD_LOG_LOC_IO     # (H5FD_LOG_LOC_READ|H5FD_LOG_LOC_WRITE|H5FD_LOG_LOC_SEEK)

  # Flags for tracking number of times each byte is read/written
  int H5FD_LOG_FILE_READ  # 0x0008
  int H5FD_LOG_FILE_WRITE # 0x0010
  int H5FD_LOG_FILE_IO    # (H5FD_LOG_FILE_READ|H5FD_LOG_FILE_WRITE)

  # Flag for tracking "flavor" (type) of information stored at each byte
  int H5FD_LOG_FLAVOR     # 0x0020

  # Flags for tracking total number of reads/writes/seeks
  int H5FD_LOG_NUM_READ   # 0x0040
  int H5FD_LOG_NUM_WRITE  # 0x0080
  int H5FD_LOG_NUM_SEEK   # 0x0100
  int H5FD_LOG_NUM_IO     # (H5FD_LOG_NUM_READ|H5FD_LOG_NUM_WRITE|H5FD_LOG_NUM_SEEK)

  # Flags for tracking time spent in open/read/write/seek/close
  int H5FD_LOG_TIME_OPEN  # 0x0200        # Not implemented yet
  int H5FD_LOG_TIME_READ  # 0x0400        # Not implemented yet
  int H5FD_LOG_TIME_WRITE # 0x0800        # Partially implemented (need to track total time)
  int H5FD_LOG_TIME_SEEK  # 0x1000        # Partially implemented (need to track total time & track time for seeks during reading)
  int H5FD_LOG_TIME_CLOSE # 0x2000        # Fully implemented
  int H5FD_LOG_TIME_IO    # (H5FD_LOG_TIME_OPEN|H5FD_LOG_TIME_READ|H5FD_LOG_TIME_WRITE|H5FD_LOG_TIME_SEEK|H5FD_LOG_TIME_CLOSE)

  # Flag for tracking allocation of space in file
  int H5FD_LOG_ALLOC      # 0x4000
  int H5FD_LOG_ALL        # (H5FD_LOG_ALLOC|H5FD_LOG_TIME_IO|H5FD_LOG_NUM_IO|H5FD_LOG_FLAVOR|H5FD_LOG_FILE_IO|H5FD_LOG_LOC_IO)

  ctypedef enum H5FD_mpio_xfer_t:
    H5FD_MPIO_INDEPENDENT = 0,
    H5FD_MPIO_COLLECTIVE

  # File driver identifier type and values
  IF HDF5_VERSION >= (1, 14, 0):
    ctypedef int H5FD_class_value_t

    H5FD_class_value_t H5_VFD_INVALID      # -1
    H5FD_class_value_t H5_VFD_SEC2         # 0
    H5FD_class_value_t H5_VFD_CORE         # 1
    H5FD_class_value_t H5_VFD_LOG          # 2
    H5FD_class_value_t H5_VFD_FAMILY       # 3
    H5FD_class_value_t H5_VFD_MULTI        # 4
    H5FD_class_value_t H5_VFD_STDIO        # 5
    H5FD_class_value_t H5_VFD_SPLITTER     # 6
    H5FD_class_value_t H5_VFD_MPIO         # 7
    H5FD_class_value_t H5_VFD_DIRECT       # 8
    H5FD_class_value_t H5_VFD_MIRROR       # 9
    H5FD_class_value_t H5_VFD_HDFS         # 10
    H5FD_class_value_t H5_VFD_ROS3         # 11
    H5FD_class_value_t H5_VFD_SUBFILING    # 12
    H5FD_class_value_t H5_VFD_IOC          # 13
    H5FD_class_value_t H5_VFD_ONION        # 14

  # Class information for each file driver
  IF HDF5_VERSION < (1, 14, 0):
    ctypedef struct H5FD_class_t:
      const char *name
      haddr_t maxaddr
      H5F_close_degree_t fc_degree
      herr_t  (*terminate)()
      hsize_t (*sb_size)(H5FD_t *file)
      herr_t  (*sb_encode)(H5FD_t *file, char *name, unsigned char *p)
      herr_t  (*sb_decode)(H5FD_t *f, const char *name, const unsigned char *p)
      size_t  fapl_size
      void *  (*fapl_get)(H5FD_t *file) except *
      void *  (*fapl_copy)(const void *fapl) except *
      herr_t  (*fapl_free)(void *fapl) except -1
      size_t  dxpl_size
      void *  (*dxpl_copy)(const void *dxpl)
      herr_t  (*dxpl_free)(void *dxpl)
      H5FD_t *(*open)(const char *name, unsigned flags, hid_t fapl, haddr_t maxaddr) except *
      herr_t  (*close)(H5FD_t *file) except -1
      int     (*cmp)(const H5FD_t *f1, const H5FD_t *f2)
      herr_t  (*query)(const H5FD_t *f1, unsigned long *flags)
      herr_t  (*get_type_map)(const H5FD_t *file, H5FD_mem_t *type_map)
      haddr_t (*alloc)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, hsize_t size)
      herr_t  (*free)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, haddr_t addr, hsize_t size)
      haddr_t (*get_eoa)(const H5FD_t *file, H5FD_mem_t type) noexcept
      herr_t  (*set_eoa)(H5FD_t *file, H5FD_mem_t type, haddr_t addr) noexcept
      haddr_t (*get_eof)(const H5FD_t *file, H5FD_mem_t type) except -1
      herr_t  (*get_handle)(H5FD_t *file, hid_t fapl, void**file_handle)
      herr_t  (*read)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, void *buffer) except *
      herr_t  (*write)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, const void *buffer) except *
      herr_t  (*flush)(H5FD_t *file, hid_t dxpl_id, hbool_t closing) except -1
      herr_t  (*truncate)(H5FD_t *file, hid_t dxpl_id, hbool_t closing) except -1
      herr_t  (*lock)(H5FD_t *file, hbool_t rw)
      herr_t  (*unlock)(H5FD_t *file)
      H5FD_mem_t fl_map[H5FD_MEM_NTYPES]
  ELSE:
    unsigned H5FD_CLASS_VERSION  # File driver struct version

    ctypedef struct H5FD_class_t:
      unsigned version  # File driver class struct version number
      H5FD_class_value_t value
      const char *name
      haddr_t maxaddr
      H5F_close_degree_t fc_degree
      herr_t  (*terminate)()
      hsize_t (*sb_size)(H5FD_t *file)
      herr_t  (*sb_encode)(H5FD_t *file, char *name, unsigned char *p)
      herr_t  (*sb_decode)(H5FD_t *f, const char *name, const unsigned char *p)
      size_t  fapl_size
      void *  (*fapl_get)(H5FD_t *file) except *
      void *  (*fapl_copy)(const void *fapl) except *
      herr_t  (*fapl_free)(void *fapl) except -1
      size_t  dxpl_size
      void *  (*dxpl_copy)(const void *dxpl)
      herr_t  (*dxpl_free)(void *dxpl)
      H5FD_t *(*open)(const char *name, unsigned flags, hid_t fapl, haddr_t maxaddr) except *
      herr_t  (*close)(H5FD_t *file) except -1
      int     (*cmp)(const H5FD_t *f1, const H5FD_t *f2)
      herr_t  (*query)(const H5FD_t *f1, unsigned long *flags)
      herr_t  (*get_type_map)(const H5FD_t *file, H5FD_mem_t *type_map)
      haddr_t (*alloc)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, hsize_t size)
      herr_t  (*free)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, haddr_t addr, hsize_t size)
      haddr_t (*get_eoa)(const H5FD_t *file, H5FD_mem_t type) noexcept
      herr_t  (*set_eoa)(H5FD_t *file, H5FD_mem_t type, haddr_t addr) noexcept
      haddr_t (*get_eof)(const H5FD_t *file, H5FD_mem_t type) except -1
      herr_t  (*get_handle)(H5FD_t *file, hid_t fapl, void**file_handle)
      herr_t  (*read)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, void *buffer) except *
      herr_t  (*write)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, const void *buffer) except *
      herr_t  (*flush)(H5FD_t *file, hid_t dxpl_id, hbool_t closing) except -1
      herr_t  (*truncate)(H5FD_t *file, hid_t dxpl_id, hbool_t closing) except -1
      herr_t  (*lock)(H5FD_t *file, hbool_t rw)
      herr_t  (*unlock)(H5FD_t *file)
      H5FD_mem_t fl_map[H5FD_MEM_NTYPES]

  # The main datatype for each driver
  ctypedef struct H5FD_t:
    hid_t driver_id             # driver ID for this file
    const H5FD_class_t *cls     # constant class info (pointer to the driver class struct)
    unsigned long fileno        # File 'serial' number
    unsigned access_flags       # File access flags (from create or open)
    unsigned long feature_flags # VFL Driver feature Flags
    haddr_t maxaddr             # For this file, overrides class
    haddr_t base_addr           # Base address for HDF5 data w/in file

    # Space allocation management fields
    hsize_t threshold           # Threshold for alignment
    hsize_t alignment           # Allocation alignment
    hbool_t paged_aggr          # Paged aggregation for file space is enabled or not

  ctypedef struct H5FD_ros3_fapl_t:
    int32_t version
    hbool_t authenticate
    char    aws_region[33]
    char    secret_id[129]
    char    secret_key[129]

  unsigned int H5FD_CURR_ROS3_FAPL_T_VERSION # version of struct

  IF HDF5_VERSION >= (1, 14, 2):
    size_t H5FD_ROS3_MAX_SECRET_TOK_LEN
# === H5G - Groups API ========================================================

  ctypedef enum H5G_link_t:
    H5G_LINK_ERROR      = -1,
    H5G_LINK_HARD       = 0,
    H5G_LINK_SOFT       = 1

  cdef enum H5G_obj_t:
    H5G_UNKNOWN = -1,           # Unknown object type
    H5G_LINK,                   # Object is a symbolic link
    H5G_GROUP,                  # Object is a group
    H5G_DATASET,                # Object is a dataset
    H5G_TYPE,                   # Object is a named data type

  ctypedef struct H5G_stat_t:
    unsigned long fileno[2]
    unsigned long objno[2]
    unsigned int nlink
    H5G_obj_t type              # new in HDF5 1.6
    time_t mtime
    size_t linklen
    #H5O_stat_t ohdr            # Object header information. New in HDF5 1.6

  ctypedef herr_t (*H5G_iterate_t)(hid_t group, char *name, void* op_data) except 2

  ctypedef enum H5G_storage_type_t:
      H5G_STORAGE_TYPE_UNKNOWN = -1,
      H5G_STORAGE_TYPE_SYMBOL_TABLE,
      H5G_STORAGE_TYPE_COMPACT,
      H5G_STORAGE_TYPE_DENSE

  ctypedef struct H5G_info_t:
      H5G_storage_type_t     storage_type
      hsize_t     nlinks
      int64_t     max_corder

# === H5I - Identifier and reflection interface ===============================

  int H5I_INVALID_HID

  IF HDF5_VERSION < VOL_MIN_HDF5_VERSION:
    ctypedef enum H5I_type_t:
      H5I_UNINIT       = -2,  # uninitialized Group
      H5I_BADID        = -1,  # invalid Group
      H5I_FILE        = 1,    # type ID for File objects
      H5I_GROUP,              # type ID for Group objects
      H5I_DATATYPE,           # type ID for Datatype objects
      H5I_DATASPACE,          # type ID for Dataspace objects
      H5I_DATASET,            # type ID for Dataset objects
      H5I_ATTR,               # type ID for Attribute objects
      H5I_REFERENCE,          # type ID for Reference objects
      H5I_VFL,                # type ID for virtual file layer
      H5I_GENPROP_CLS,        # type ID for generic property list classes
      H5I_GENPROP_LST,        # type ID for generic property lists
      H5I_ERROR_CLASS,        # type ID for error classes
      H5I_ERROR_MSG,          # type ID for error messages
      H5I_ERROR_STACK,        # type ID for error stacks
      H5I_NTYPES              # number of valid groups, MUST BE LAST!

  ELSE:
    ctypedef enum H5I_type_t:
      H5I_UNINIT      = -2,     # uninitialized type
      H5I_BADID       = -1,     # invalid Type
      H5I_FILE        = 1,      # type ID for File objects
      H5I_GROUP,                # type ID for Group objects
      H5I_DATATYPE,             # type ID for Datatype objects
      H5I_DATASPACE,            # type ID for Dataspace object
      H5I_DATASET,              # type ID for Dataset object
      H5I_MAP,                  # type ID for Map object
      H5I_ATTR,                 # type ID for Attribute objects
      H5I_VFL,                  # type ID for virtual file layer
      H5I_VOL,                  # type ID for virtual object layer
      H5I_GENPROP_CLS,          # type ID for generic property list classes
      H5I_GENPROP_LST,          # type ID for generic property lists
      H5I_ERROR_CLASS,          # type ID for error classes
      H5I_ERROR_MSG,            # type ID for error messages
      H5I_ERROR_STACK,          # type ID for error stacks
      H5I_SPACE_SEL_ITER,       # type ID for dataspace selection iterator
      H5I_NTYPES                # number of library types, MUST BE LAST!

# === H5L/H5O - Links interface (1.8.X only) ======================================

  unsigned int H5L_MAX_LINK_NAME_LEN #  ((uint32_t) (-1)) (4GB - 1)

  # Link class types.
  # * Values less than 64 are reserved for the HDF5 library's internal use.
  # * Values 64 to 255 are for "user-defined" link class types; these types are
  # * defined by HDF5 but their behavior can be overridden by users.
  # * Users who want to create new classes of links should contact the HDF5
  # * development team at hdfhelp@ncsa.uiuc.edu .
  # * These values can never change because they appear in HDF5 files.
  #
  ctypedef enum H5L_type_t:
    H5L_TYPE_ERROR = (-1),      #  Invalid link type id
    H5L_TYPE_HARD = 0,          #  Hard link id
    H5L_TYPE_SOFT = 1,          #  Soft link id
    H5L_TYPE_EXTERNAL = 64,     #  External link id
    H5L_TYPE_MAX = 255          #  Maximum link type id

  #  Information struct for link (for H5Lget_info/H5Lget_info_by_idx)
  cdef union _add_u:
    haddr_t address   #  Address hard link points to
    size_t val_size   #  Size of a soft link or UD link value

  ctypedef struct H5L_info_t:
    H5L_type_t  type            #  Type of link
    hbool_t     corder_valid    #  Indicate if creation order is valid
    int64_t     corder          #  Creation order
    H5T_cset_t  cset            #  Character set of link name
    _add_u u

  #  Prototype for H5Literate/H5Literate_by_name() operator
  ctypedef herr_t (*H5L_iterate_t) (hid_t group, char *name, H5L_info_t *info,
                    void *op_data) except 2

  ctypedef uint32_t H5O_msg_crt_idx_t

  ctypedef enum H5O_type_t:
    H5O_TYPE_UNKNOWN = -1,      # Unknown object type
    H5O_TYPE_GROUP,             # Object is a group
    H5O_TYPE_DATASET,           # Object is a dataset
    H5O_TYPE_NAMED_DATATYPE,    # Object is a named data type
    H5O_TYPE_NTYPES             # Number of different object types (must be last!)

  unsigned int H5O_COPY_SHALLOW_HIERARCHY_FLAG    # (0x0001u) Copy only immediate members
  unsigned int H5O_COPY_EXPAND_SOFT_LINK_FLAG     # (0x0002u) Expand soft links into new objects
  unsigned int H5O_COPY_EXPAND_EXT_LINK_FLAG      # (0x0004u) Expand external links into new objects
  unsigned int H5O_COPY_EXPAND_REFERENCE_FLAG     # (0x0008u) Copy objects that are pointed by references
  unsigned int H5O_COPY_WITHOUT_ATTR_FLAG         # (0x0010u) Copy object without copying attributes
  unsigned int H5O_COPY_PRESERVE_NULL_FLAG        # (0x0020u) Copy NULL messages (empty space)
  unsigned int H5O_COPY_ALL                       # (0x003Fu) All object copying flags (for internal checking)

  # --- Components for the H5O_info_t struct ----------------------------------

  ctypedef struct space:
    hsize_t total           #  Total space for storing object header in file
    hsize_t meta            #  Space within header for object header metadata information
    hsize_t mesg            #  Space within header for actual message information
    hsize_t free            #  Free space within object header

  ctypedef struct mesg:
    unsigned long present   #  Flags to indicate presence of message type in header
    unsigned long shared    #  Flags to indicate message type is shared in header

  ctypedef struct hdr:
    unsigned version        #  Version number of header format in file
    unsigned nmesgs         #  Number of object header messages
    unsigned nchunks        #  Number of object header chunks
    unsigned flags          #  Object header status flags
    space space
    mesg mesg

  ctypedef struct H5_ih_info_t:
    hsize_t     index_size,  # btree and/or list
    hsize_t     heap_size

  cdef struct meta_size:
    H5_ih_info_t   obj,    #        v1/v2 B-tree & local/fractal heap for groups, B-tree for chunked datasets
    H5_ih_info_t   attr    #        v2 B-tree & heap for attributes

  ctypedef struct H5O_info_t:
    unsigned long   fileno      #  File number that object is located in
    haddr_t         addr        #  Object address in file
    H5O_type_t      type        #  Basic object type (group, dataset, etc.)
    unsigned        rc          #  Reference count of object
    time_t          atime       #  Access time
    time_t          mtime       #  Modification time
    time_t          ctime       #  Change time
    time_t          btime       #  Birth time
    hsize_t         num_attrs   #  # of attributes attached to object
    hdr             hdr
    meta_size       meta_size

  ctypedef herr_t (*H5O_iterate_t)(hid_t obj, char *name, H5O_info_t *info,
                    void *op_data) except -1

# === H5P - Property list API =================================================

  int H5P_DEFAULT

  # Property list classes
  hid_t H5P_NO_CLASS
  hid_t H5P_FILE_CREATE
  hid_t H5P_FILE_ACCESS
  hid_t H5P_DATASET_CREATE
  hid_t H5P_DATASET_ACCESS
  hid_t H5P_DATASET_XFER

  hid_t H5P_OBJECT_CREATE
  hid_t H5P_OBJECT_COPY
  hid_t H5P_LINK_CREATE
  hid_t H5P_LINK_ACCESS
  hid_t H5P_GROUP_CREATE
  hid_t H5P_DATATYPE_CREATE
  hid_t H5P_CRT_ORDER_TRACKED
  hid_t H5P_CRT_ORDER_INDEXED

# === H5R - Reference API =====================================================

  size_t H5R_DSET_REG_REF_BUF_SIZE
  size_t H5R_OBJ_REF_BUF_SIZE

  ctypedef enum H5R_type_t:
    H5R_BADTYPE = (-1),
    H5R_OBJECT,
    H5R_DATASET_REGION,
    H5R_INTERNAL,
    H5R_MAXTYPE

# === H5S - Dataspaces ========================================================

  int H5S_ALL, H5S_MAX_RANK
  hsize_t H5S_UNLIMITED

  # Codes for defining selections
  ctypedef enum H5S_seloper_t:
    H5S_SELECT_NOOP      = -1,
    H5S_SELECT_SET       = 0,
    H5S_SELECT_OR,
    H5S_SELECT_AND,
    H5S_SELECT_XOR,
    H5S_SELECT_NOTB,
    H5S_SELECT_NOTA,
    H5S_SELECT_APPEND,
    H5S_SELECT_PREPEND,
    H5S_SELECT_INVALID    # Must be the last one

  ctypedef enum H5S_class_t:
    H5S_NO_CLASS         = -1,  #/*error
    H5S_SCALAR           = 0,   #/*scalar variable
    H5S_SIMPLE           = 1,   #/*simple data space
    H5S_NULL             = 2,   # NULL data space

  ctypedef enum H5S_sel_type:
    H5S_SEL_ERROR    = -1,         #Error
    H5S_SEL_NONE    = 0,        #Nothing selected
    H5S_SEL_POINTS    = 1,        #Sequence of points selected
    H5S_SEL_HYPERSLABS  = 2,    #"New-style" hyperslab selection defined
    H5S_SEL_ALL        = 3,        #Entire extent selected
    H5S_SEL_N        = 4            #/*THIS MUST BE LAST

# === H5T - Datatypes =========================================================

  # --- Enumerated constants --------------------------------------------------

  # Byte orders
  ctypedef enum H5T_order_t:
    H5T_ORDER_ERROR      = -1,  # error
    H5T_ORDER_LE         = 0,   # little endian
    H5T_ORDER_BE         = 1,   # big endian
    H5T_ORDER_VAX        = 2,   # VAX mixed endian
    H5T_ORDER_NONE       = 3    # no particular order (strings, bits,..)

  # HDF5 signed enums
  ctypedef enum H5T_sign_t:
    H5T_SGN_ERROR        = -1,  # error
    H5T_SGN_NONE         = 0,   # this is an unsigned type
    H5T_SGN_2            = 1,   # two's complement
    H5T_NSGN             = 2    # this must be last!

  ctypedef enum H5T_norm_t:
    H5T_NORM_ERROR       = -1,
    H5T_NORM_IMPLIED     = 0,
    H5T_NORM_MSBSET      = 1,
    H5T_NORM_NONE        = 2

  ctypedef enum H5T_cset_t:
    H5T_CSET_ERROR       = -1,  # error
    H5T_CSET_ASCII       = 0,   # US ASCII
    H5T_CSET_UTF8        = 1,   # UTF-8 Unicode encoding

  ctypedef enum H5T_str_t:
    H5T_STR_ERROR        = -1,
    H5T_STR_NULLTERM     = 0,
    H5T_STR_NULLPAD      = 1,
    H5T_STR_SPACEPAD     = 2

  # Atomic datatype padding
  ctypedef enum H5T_pad_t:
    H5T_PAD_ZERO        = 0,
    H5T_PAD_ONE         = 1,
    H5T_PAD_BACKGROUND  = 2

  # HDF5 type classes
  cdef enum H5T_class_t:
    H5T_NO_CLASS         = -1,  # error
    H5T_INTEGER          = 0,   # integer types
    H5T_FLOAT            = 1,   # floating-point types
    H5T_TIME             = 2,   # date and time types
    H5T_STRING           = 3,   # character string types
    H5T_BITFIELD         = 4,   # bit field types
    H5T_OPAQUE           = 5,   # opaque types
    H5T_COMPOUND         = 6,   # compound types
    H5T_REFERENCE        = 7,   # reference types
    H5T_ENUM             = 8,   # enumeration types
    H5T_VLEN             = 9,   # variable-length types
    H5T_ARRAY            = 10,  # array types
    H5T_NCLASSES                # this must be last

  # Native search direction
  cdef enum H5T_direction_t:
    H5T_DIR_DEFAULT,
    H5T_DIR_ASCEND,
    H5T_DIR_DESCEND

  # For vlen strings
  cdef size_t H5T_VARIABLE

  # --- Predefined datatypes --------------------------------------------------

  cdef hid_t H5T_NATIVE_B8
  cdef hid_t H5T_NATIVE_B16
  cdef hid_t H5T_NATIVE_B32
  cdef hid_t H5T_NATIVE_B64
  cdef hid_t H5T_NATIVE_CHAR
  cdef hid_t H5T_NATIVE_SCHAR
  cdef hid_t H5T_NATIVE_UCHAR
  cdef hid_t H5T_NATIVE_SHORT
  cdef hid_t H5T_NATIVE_USHORT
  cdef hid_t H5T_NATIVE_INT
  cdef hid_t H5T_NATIVE_UINT
  cdef hid_t H5T_NATIVE_LONG
  cdef hid_t H5T_NATIVE_ULONG
  cdef hid_t H5T_NATIVE_LLONG
  cdef hid_t H5T_NATIVE_ULLONG
  cdef hid_t H5T_NATIVE_FLOAT
  cdef hid_t H5T_NATIVE_DOUBLE
  cdef hid_t H5T_NATIVE_LDOUBLE
  IF HDF5_VERSION > (1, 14, 3):
    cdef hid_t H5T_NATIVE_FLOAT16

  # "Standard" types
  cdef hid_t H5T_STD_I8LE
  cdef hid_t H5T_STD_I16LE
  cdef hid_t H5T_STD_I32LE
  cdef hid_t H5T_STD_I64LE
  cdef hid_t H5T_STD_U8LE
  cdef hid_t H5T_STD_U16LE
  cdef hid_t H5T_STD_U32LE
  cdef hid_t H5T_STD_U64LE
  cdef hid_t H5T_STD_B8LE
  cdef hid_t H5T_STD_B16LE
  cdef hid_t H5T_STD_B32LE
  cdef hid_t H5T_STD_B64LE
  cdef hid_t H5T_IEEE_F32LE
  cdef hid_t H5T_IEEE_F64LE
  cdef hid_t H5T_STD_I8BE
  cdef hid_t H5T_STD_I16BE
  cdef hid_t H5T_STD_I32BE
  cdef hid_t H5T_STD_I64BE
  cdef hid_t H5T_STD_U8BE
  cdef hid_t H5T_STD_U16BE
  cdef hid_t H5T_STD_U32BE
  cdef hid_t H5T_STD_U64BE
  cdef hid_t H5T_STD_B8BE
  cdef hid_t H5T_STD_B16BE
  cdef hid_t H5T_STD_B32BE
  cdef hid_t H5T_STD_B64BE
  cdef hid_t H5T_IEEE_F32BE
  cdef hid_t H5T_IEEE_F64BE
  IF HDF5_VERSION > (1, 14, 3):
    cdef hid_t H5T_IEEE_F16BE
    cdef hid_t H5T_IEEE_F16LE


  cdef hid_t H5T_NATIVE_INT8
  cdef hid_t H5T_NATIVE_UINT8
  cdef hid_t H5T_NATIVE_INT16
  cdef hid_t H5T_NATIVE_UINT16
  cdef hid_t H5T_NATIVE_INT32
  cdef hid_t H5T_NATIVE_UINT32
  cdef hid_t H5T_NATIVE_INT64
  cdef hid_t H5T_NATIVE_UINT64

  # Unix time types
  cdef hid_t H5T_UNIX_D32LE
  cdef hid_t H5T_UNIX_D64LE
  cdef hid_t H5T_UNIX_D32BE
  cdef hid_t H5T_UNIX_D64BE

  # String types
  cdef hid_t H5T_FORTRAN_S1
  cdef hid_t H5T_C_S1

  # References
  cdef hid_t H5T_STD_REF_OBJ
  cdef hid_t H5T_STD_REF_DSETREG

  # Type-conversion infrastructure

  ctypedef enum H5T_pers_t:
    H5T_PERS_DONTCARE	= -1,
    H5T_PERS_HARD	= 0,	    # /*hard conversion function		     */
    H5T_PERS_SOFT	= 1 	    # /*soft conversion function		     */

  ctypedef enum H5T_cmd_t:
    H5T_CONV_INIT	= 0,	#/*query and/or initialize private data	     */
    H5T_CONV_CONV	= 1, 	#/*convert data from source to dest datatype */
    H5T_CONV_FREE	= 2	    #/*function is being removed from path	     */

  ctypedef enum H5T_bkg_t:
    H5T_BKG_NO		= 0, 	#/*background buffer is not needed, send NULL */
    H5T_BKG_TEMP	= 1,	#/*bkg buffer used as temp storage only       */
    H5T_BKG_YES		= 2	    #/*init bkg buf with data before conversion   */

  ctypedef struct H5T_cdata_t:
    H5T_cmd_t		command     # /*what should the conversion function do?    */
    H5T_bkg_t		need_bkg   #/*is the background buffer needed?	     */
    hbool_t		recalc	        # /*recalculate private data		     */
    void		*priv	        # /*private data				     */

  ctypedef struct hvl_t:
      size_t len # /* Length of VL data (in base type units) */
      void *p    #/* Pointer to VL data */

  ctypedef herr_t (*H5T_conv_t)(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata,
      size_t nelmts, size_t buf_stride, size_t bkg_stride, void *buf,
      void *bkg, hid_t dset_xfer_plist) except -1

# === H5Z - Filters ===========================================================

  ctypedef int H5Z_filter_t

  int H5Z_CLASS_T_VERS
  int H5Z_FILTER_ERROR
  int H5Z_FILTER_NONE
  int H5Z_FILTER_ALL
  int H5Z_FILTER_DEFLATE
  int H5Z_FILTER_SHUFFLE
  int H5Z_FILTER_FLETCHER32
  int H5Z_FILTER_SZIP
  int H5Z_FILTER_NBIT
  int H5Z_FILTER_SCALEOFFSET
  int H5Z_FILTER_RESERVED
  int H5Z_FILTER_MAX
  int H5Z_MAX_NFILTERS

  int H5Z_FLAG_DEFMASK
  int H5Z_FLAG_MANDATORY
  int H5Z_FLAG_OPTIONAL

  int H5Z_FLAG_INVMASK
  int H5Z_FLAG_REVERSE
  int H5Z_FLAG_SKIP_EDC

  int H5_SZIP_ALLOW_K13_OPTION_MASK   #1
  int H5_SZIP_CHIP_OPTION_MASK        #2
  int H5_SZIP_EC_OPTION_MASK          #4
  int H5_SZIP_NN_OPTION_MASK          #32
  int H5_SZIP_MAX_PIXELS_PER_BLOCK    #32

  int H5Z_SO_INT_MINBITS_DEFAULT

  int H5Z_FILTER_CONFIG_ENCODE_ENABLED #(0x0001)
  int H5Z_FILTER_CONFIG_DECODE_ENABLED #(0x0002)

  cdef enum H5Z_EDC_t:
      H5Z_ERROR_EDC       = -1,
      H5Z_DISABLE_EDC     = 0,
      H5Z_ENABLE_EDC      = 1,
      H5Z_NO_EDC          = 2

  cdef enum H5Z_SO_scale_type_t:
      H5Z_SO_FLOAT_DSCALE = 0,
      H5Z_SO_FLOAT_ESCALE = 1,
      H5Z_SO_INT          = 2

# === H5A - Attributes API ====================================================

  ctypedef herr_t (*H5A_operator_t)(hid_t loc_id, char *attr_name, void* operator_data) except 2

  ctypedef struct H5A_info_t:
    hbool_t corder_valid          # Indicate if creation order is valid
    H5O_msg_crt_idx_t corder      # Creation order
    H5T_cset_t        cset        # Character set of attribute name
    hsize_t           data_size   # Size of raw data

  ctypedef herr_t (*H5A_operator2_t)(hid_t location_id, char *attr_name,
          H5A_info_t *ainfo, void *op_data) except 2



#  === H5AC - Attribute Cache configuration API ================================


  unsigned int H5AC__CURR_CACHE_CONFIG_VERSION  # 	1
  # I don't really understand why this works, but
  # https://groups.google.com/forum/?fromgroups#!topic/cython-users/-fLG08E5lYM
  # suggests it and it _does_ work
  enum: H5AC__MAX_TRACE_FILE_NAME_LEN	#	1024
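  # Illustrative sketch of the anonymous-enum trick mentioned above (names are
  # hypothetical): exposing a C #define this way lets Cython treat it as a
  # compile-time constant, e.g. usable as a static array length:
  #
  #     cdef extern from "some_header.h":
  #         enum: MY_BUFFER_LEN          # hypothetical "#define MY_BUFFER_LEN 256"
  #
  #     cdef char buf[MY_BUFFER_LEN]     # requires a constant expression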

  unsigned int H5AC_METADATA_WRITE_STRATEGY__PROCESS_0_ONLY   # 0
  unsigned int H5AC_METADATA_WRITE_STRATEGY__DISTRIBUTED      # 1


  cdef extern from "H5Cpublic.h":
  # === H5C - Cache configuration API ================================
    cdef enum H5C_cache_incr_mode:
      H5C_incr__off,
      H5C_incr__threshold,


    cdef enum H5C_cache_flash_incr_mode:
      H5C_flash_incr__off,
      H5C_flash_incr__add_space


    cdef enum H5C_cache_decr_mode:
      H5C_decr__off,
      H5C_decr__threshold,
      H5C_decr__age_out,
      H5C_decr__age_out_with_threshold

    ctypedef struct H5AC_cache_config_t:
      #     /* general configuration fields: */
      int version
      hbool_t rpt_fcn_enabled
      hbool_t evictions_enabled
      hbool_t set_initial_size
      size_t initial_size
      double min_clean_fraction
      size_t max_size
      size_t min_size
      long int epoch_length
      #    /* size increase control fields: */
      H5C_cache_incr_mode incr_mode
      double lower_hr_threshold
      double increment
      hbool_t apply_max_increment
      size_t max_increment
      H5C_cache_flash_incr_mode flash_incr_mode
      double flash_multiple
      double flash_threshold
      # /* size decrease control fields: */
      H5C_cache_decr_mode decr_mode
      double upper_hr_threshold
      double decrement
      hbool_t apply_max_decrement
      size_t max_decrement
      int epochs_before_eviction
      hbool_t apply_empty_reserve
      double empty_reserve
      # /* parallel configuration fields: */
      int dirty_bytes_threshold
      #  int metadata_write_strategy # present in 1.8.6 and higher




cdef extern from "hdf5_hl.h":
# === H5DS - Dimension Scales API =============================================

  ctypedef herr_t  (*H5DS_iterate_t)(hid_t dset, unsigned dim, hid_t scale, void *visitor_data) except 2
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5.pxd0000644000175000017500000000126014045746670015313 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

cdef class H5PYConfig:

    cdef readonly object _r_name
    cdef readonly object _i_name
    cdef readonly object _f_name
    cdef readonly object _t_name
    cdef readonly object API_16
    cdef readonly object API_18
    cdef readonly object _bytestrings
    cdef readonly object _track_order
    cdef readonly object _default_file_mode

cpdef H5PYConfig get_config()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5.pyx0000644000175000017500000001520114675110407015330 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

include "config.pxi"

from warnings import warn
from .defs cimport *
from ._objects import phil, with_phil
from .h5py_warnings import H5pyDeprecationWarning

ITER_INC    = H5_ITER_INC     # Increasing order
ITER_DEC    = H5_ITER_DEC     # Decreasing order
ITER_NATIVE = H5_ITER_NATIVE  # No particular order, whatever is fastest

INDEX_NAME      = H5_INDEX_NAME       # Index on names
INDEX_CRT_ORDER = H5_INDEX_CRT_ORDER  # Index on creation order

HDF5_VERSION_COMPILED_AGAINST = HDF5_VERSION
NUMPY_VERSION_COMPILED_AGAINST = NUMPY_BUILD_VERSION
CYTHON_VERSION_COMPILED_WITH = CYTHON_BUILD_VERSION


class ByteStringContext:

    def __init__(self):
        self._readbytes = False

    def __bool__(self):
        return self._readbytes

    def __nonzero__(self):
        return self.__bool__()

    def __enter__(self):
        self._readbytes = True

    def __exit__(self, *args):
        self._readbytes = False


cdef class H5PYConfig:

    """
        Provides runtime access to global library settings.  You retrieve the
        master copy of this object by calling h5py.get_config().

        complex_names (tuple, r/w)
            Settable 2-tuple controlling how complex numbers are saved.
            Defaults to ('r','i').

        bool_names (tuple, r/w)
            Settable 2-tuple controlling the HDF5 enum names used for boolean
            values.  Defaults to ('FALSE', 'TRUE') for values 0 and 1.
    """

    def __init__(self):
        self._r_name = b'r'
        self._i_name = b'i'
        self._f_name = b'FALSE'
        self._t_name = b'TRUE'
        self._bytestrings = ByteStringContext()
        self._track_order = False

    @property
    def complex_names(self):
        """ Settable 2-tuple controlling how complex numbers are saved.

        Format is (real_name, imag_name), defaulting to ('r','i').
        """
        with phil:
            def handle_val(val):
                return val.decode('utf8')
            return (handle_val(self._r_name), handle_val(self._i_name))

    @complex_names.setter
    def complex_names(self, val):
        with phil:
            def handle_val(val):
                if isinstance(val, unicode):
                    return val.encode('utf8')
                elif isinstance(val, bytes):
                    return val
                else:
                    return bytes(val)
            try:
                if len(val) != 2:
                    raise TypeError()
                r = handle_val(val[0])
                i = handle_val(val[1])
            except Exception:
                raise TypeError("complex_names must be a length-2 sequence of strings (real, img)")
            self._r_name = r
            self._i_name = i
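
    # Illustrative usage sketch (not part of this module): complex_names and
    # bool_names are configured on the object returned by h5py.get_config(),
    # e.g.:
    #
    #     import h5py
    #     cfg = h5py.get_config()
    #     cfg.complex_names = ('re', 'im')    # field names used for complex data
    #     cfg.bool_names = ('NO', 'YES')      # HDF5 enum names for False/True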

    @property
    def bool_names(self):
        """ Settable 2-tuple controlling HDF5 ENUM names for boolean types.

        Format is (false_name, true_name), defaulting to ('FALSE', 'TRUE').
        """

        with phil:
            return (self._f_name, self._t_name)

    @bool_names.setter
    def bool_names(self, val):
        with phil:
            try:
                if len(val) != 2: raise TypeError()
                f = str(val[0])
                t = str(val[1])
            except Exception:
                raise TypeError("bool_names must be a length-2 sequence of of names (false, true)")
            self._f_name = f
            self._t_name = t

    @property
    def read_byte_strings(self):
        """ Returns a context manager which forces all strings to be returned
        as byte strings. """
        with phil:
            return self._bytestrings
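
    # Illustrative usage sketch (assumption: string data read inside the block
    # comes back as bytes where h5py honours this flag):
    #
    #     import h5py
    #     with h5py.get_config().read_byte_strings:
    #         title = some_object.attrs['title']   # 'some_object' is hypothetical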

    @property
    def mpi(self):
        """ Boolean indicating if Parallel HDF5 is available """
        return MPI

    @property
    def ros3(self):
        """ Boolean indicating if ROS3 VDS is available """
        return ROS3

    @property
    def direct_vfd(self):
        """ Boolean indicating if DIRECT VFD is available """
        return DIRECT_VFD

    @property
    def swmr_min_hdf5_version(self):
        """ Tuple indicating the minimum HDF5 version required for SWMR features"""
        warn(
            "h5py.get_config().swmr_min_hdf5_version is deprecated. "
            "This version of h5py does not support older HDF5 without SWMR.",
            category=H5pyDeprecationWarning,
        )
        return (1, 9, 178)

    @property
    def vds_min_hdf5_version(self):
        """Tuple indicating the minimum HDF5 version required for virtual dataset (VDS) features"""
        warn(
            "h5py.get_config().vds_min_hdf5_version is deprecated. "
            "This version of h5py does not support older HDF5 without VDS.",
            category=H5pyDeprecationWarning,
        )
        return (1, 9, 233)

    @property
    def track_order(self):
        """ Default value for track_order argument of
        File.open()/Group.create_group()/Group.create_dataset() """
        return self._track_order

    @track_order.setter
    def track_order(self, val):
        self._track_order = val
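
    # Illustrative usage sketch (not part of this module):
    #
    #     import h5py
    #     h5py.get_config().track_order = True   # new files/groups/datasets will
    #                                            # track attribute/link creation order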

    @property
    def default_file_mode(self):
        """Default mode for h5py.File()"""
        warn(
            "h5py.get_config().default_file_mode is deprecated. "
            "The default mode is now always 'r' (read-only).",
            category=H5pyDeprecationWarning,
        )
        return 'r'

    @default_file_mode.setter
    def default_file_mode(self, val):
        if val == 'r':
            warn(
                "Setting h5py.default_file_mode is deprecated. "
                "'r' (read-only) is the default from h5py 3.0.",
                category=H5pyDeprecationWarning,
            )
        elif val in {'r+', 'x', 'w-', 'w', 'a'}:
            raise ValueError(
                "Using default_file_mode other than 'r' is no longer "
                "supported. Pass the mode to h5py.File() instead."

            )
        else:
            raise ValueError("Invalid mode; must be one of r, r+, w, w-, x, a")

cdef H5PYConfig cfg = H5PYConfig()

cpdef H5PYConfig get_config():
    """() => H5PYConfig

    Get a reference to the global library configuration object.
    """
    return cfg

@with_phil
def get_libversion():
    """ () => TUPLE (major, minor, release)

        Retrieve the HDF5 library version as a 3-tuple.
    """
    cdef unsigned int major
    cdef unsigned int minor
    cdef unsigned int release
    cdef herr_t retval

    H5get_libversion(&major, &minor, &release)

    return (major, minor, release)
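
# Illustrative usage sketch (not part of this module): the runtime HDF5 version
# can be checked against the version h5py was compiled with, e.g.:
#
#     from h5py import h5
#     runtime  = h5.get_libversion()                 # e.g. (1, 14, 4)
#     compiled = h5.HDF5_VERSION_COMPILED_AGAINST
#     if runtime[:2] != compiled[:2]:
#         print("HDF5 major/minor mismatch:", runtime, compiled)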
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5a.pxd0000644000175000017500000000060414045746670015455 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cdef class AttrID(ObjectID):
    pass
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5a.pyx0000644000175000017500000003061014675110407015472 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Provides access to the low-level HDF5 "H5A" attribute interface.
"""

# C-level imports
from ._objects cimport pdefault
from .h5t cimport TypeID, typewrap, py_create
from .h5s cimport SpaceID
from .h5p cimport PropID
from numpy cimport import_array, ndarray, PyArray_DATA
from .utils cimport check_numpy_read, check_numpy_write, emalloc, efree
from ._proxy cimport attr_rw

cimport cython
#Python level imports
from ._objects import phil, with_phil

# Initialization
import_array()

# === General attribute operations ============================================

# --- create, create_by_name ---

@cython.binding(False)
@with_phil
def create(ObjectID loc not None, char* name, TypeID tid not None,
    SpaceID space not None, *, char* obj_name='.', PropID lapl=None):
    """(ObjectID loc, STRING name, TypeID tid, SpaceID space, **kwds) => AttrID

    Create a new attribute, attached to an existing object.

    STRING obj_name (".")
        Attach attribute to this group member instead

    PropID lapl
        Link access property list for obj_name
    """

    return AttrID(H5Acreate_by_name(loc.id, obj_name, name, tid.id,
            space.id, H5P_DEFAULT, H5P_DEFAULT, pdefault(lapl)))
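
# Illustrative sketch (not part of this module) of creating and writing an
# attribute through the low-level API; the file name and values are hypothetical:
#
#     import numpy as np
#     import h5py
#     from h5py import h5a, h5s, h5t
#
#     f = h5py.File("demo.h5", "w")
#     space = h5s.create_simple((3,))
#     aid = h5a.create(f.id, b"offsets", h5t.NATIVE_INT32, space)
#     aid.write(np.array([1, 2, 3], dtype=np.int32))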


# --- open, open_by_name, open_by_idx ---
@cython.binding(False)
@with_phil
def open(ObjectID loc not None, char* name=NULL, int index=-1, *,
    char* obj_name='.', int index_type=H5_INDEX_NAME, int order=H5_ITER_INC,
    PropID lapl=None):
    """(ObjectID loc, STRING name=, INT index=, **kwds) => AttrID

    Open an attribute attached to an existing object.  You must specify
    exactly one of either name or idx.  Keywords are:

    STRING obj_name (".")
        Attribute is attached to this group member

    PropID lapl (None)
        Link access property list for obj_name

    INT index_type (h5.INDEX_NAME)

    INT order (h5.ITER_INC)

    """
    if (name == NULL and index < 0) or (name != NULL and index >= 0):
        raise TypeError("Exactly one of name or idx must be specified")

    if name != NULL:
        return AttrID(H5Aopen_by_name(loc.id, obj_name, name,
                        H5P_DEFAULT, pdefault(lapl)))
    else:
        return AttrID(H5Aopen_by_idx(loc.id, obj_name,
            index_type, order, index,
            H5P_DEFAULT, pdefault(lapl)))


# --- exists, exists_by_name ---

@with_phil
def exists(ObjectID loc not None, char* name, *,
            char* obj_name=".", PropID lapl=None):
    """(ObjectID loc, STRING name, **kwds) => BOOL

    Determine if an attribute is attached to this object.  Keywords:

    STRING obj_name (".")
        Look for attributes attached to this group member

    PropID lapl (None):
        Link access property list for obj_name
    """
    return H5Aexists_by_name(loc.id, obj_name, name, pdefault(lapl))


# --- rename, rename_by_name ---

@with_phil
def rename(ObjectID loc not None, char* name, char* new_name, *,
    char* obj_name='.', PropID lapl=None):
    """(ObjectID loc, STRING name, STRING new_name, **kwds)

    Rename an attribute.  Keywords:

    STRING obj_name (".")
        Attribute is attached to this group member

    PropID lapl (None)
        Link access property list for obj_name
    """
    H5Arename_by_name(loc.id, obj_name, name, new_name, pdefault(lapl))


@cython.binding(False)
@with_phil
def delete(ObjectID loc not None, char* name=NULL, int index=-1, *,
    char* obj_name='.', int index_type=H5_INDEX_NAME, int order=H5_ITER_INC,
    PropID lapl=None):
    """(ObjectID loc, STRING name=, INT index=, **kwds)

    Remove an attribute from an object.  Specify exactly one of "name"
    or "index". Keyword-only arguments:

    STRING obj_name (".")
        Attribute is attached to this group member

    PropID lapl (None)
        Link access property list for obj_name

    INT index_type (h5.INDEX_NAME)

    INT order (h5.ITER_INC)
    """
    if name != NULL and index < 0:
        H5Adelete_by_name(loc.id, obj_name, name, pdefault(lapl))
    elif name == NULL and index >= 0:
        H5Adelete_by_idx(loc.id, obj_name, index_type,
            order, index, pdefault(lapl))
    else:
        raise TypeError("Exactly one of index or name must be specified.")


@with_phil
def get_num_attrs(ObjectID loc not None):
    """(ObjectID loc) => INT

    Determine the number of attributes attached to an HDF5 object.
    """
    return H5Aget_num_attrs(loc.id)


cdef class AttrInfo:

    cdef H5A_info_t info

    @property
    def corder_valid(self):
        """Indicates if the creation order is valid"""
        return self.info.corder_valid

    @property
    def corder(self):
        """Creation order"""
        return self.info.corder

    @property
    def cset(self):
        """Character set of attribute name (integer typecode from h5t)"""
        return self.info.cset

    @property
    def data_size(self):
        """Size of raw data"""
        return self.info.data_size

    def _hash(self):
        return hash((self.corder_valid, self.corder, self.cset, self.data_size))


@cython.binding(False)
@with_phil
def get_info(ObjectID loc not None, char* name=NULL, int index=-1, *,
            char* obj_name='.', PropID lapl=None,
            int index_type=H5_INDEX_NAME, int order=H5_ITER_INC):
    """(ObjectID loc, STRING name=, INT index=, **kwds) => AttrInfo

    Get information about an attribute, in one of two ways:

    1. If you have the attribute identifier, just pass it in
    2. If you have the parent object, supply it and exactly one of
       either name or index.

    STRING obj_name (".")
        Use this group member instead

    PropID lapl (None)
        Link access property list for obj_name

    INT index_type (h5.INDEX_NAME)
        Which index to use

    INT order (h5.ITER_INC)
        What order the index is in
    """
    cdef AttrInfo info = AttrInfo()

    if name == NULL and index < 0:
        H5Aget_info(loc.id, &info.info)
    elif name != NULL and index >= 0:
        raise TypeError("At most one of name and index may be specified")
    elif name != NULL:
        H5Aget_info_by_name(loc.id, obj_name, name, &info.info, pdefault(lapl))
    elif index >= 0:
        H5Aget_info_by_idx(loc.id, obj_name, index_type,
            order, index, &info.info, pdefault(lapl))

    return info

# === Iteration routines ======================================================

cdef class _AttrVisitor:
    cdef object func
    cdef object retval
    def __init__(self, func):
        self.func = func
        self.retval = None


cdef herr_t cb_attr_iter(hid_t loc_id, const char* attr_name, const H5A_info_t *ainfo, void* vis_in) except 2 with gil:
    cdef _AttrVisitor vis = <_AttrVisitor>vis_in
    cdef AttrInfo info = AttrInfo()
    info.info = ainfo[0]
    vis.retval = vis.func(attr_name, info)
    if vis.retval is not None:
        return 1
    return 0


cdef herr_t cb_attr_simple(hid_t loc_id, const char* attr_name, const H5A_info_t *ainfo, void* vis_in) except 2 with gil:
    cdef _AttrVisitor vis = <_AttrVisitor>vis_in
    vis.retval = vis.func(attr_name)
    if vis.retval is not None:
        return 1
    return 0


@with_phil
def iterate(ObjectID loc not None, object func, int index=0, *,
    int index_type=H5_INDEX_NAME, int order=H5_ITER_INC, bint info=0):
    """(ObjectID loc, CALLABLE func, INT index=0, **kwds) => 

    Iterate a callable (function, method or callable object) over the
    attributes attached to this object.  Your callable should have the
    signature::

        func(STRING name) => Result

    or if the keyword argument "info" is True::

        func(STRING name, AttrInfo info) => Result

    Returning None continues iteration; returning anything else aborts
    iteration and returns that value.  Keywords:

    BOOL info (False)
        Callback is func(STRING name, AttrInfo info), not func(STRING name)

    INT index_type (h5.INDEX_NAME)
        Which index to use

    INT order (h5.ITER_INC)
        Index order to use
    """
    if index < 0:
        raise ValueError("Starting index must be a non-negative integer.")

    cdef hsize_t i = index
    cdef _AttrVisitor vis = _AttrVisitor(func)
    cdef H5A_operator2_t cfunc

    if info:
        cfunc = cb_attr_iter
    else:
        cfunc = cb_attr_simple

    H5Aiterate(loc.id, <H5_index_t>index_type, <H5_iter_order_t>order,
        &i, cfunc, <void*>vis)

    return vis.retval
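
# Illustrative sketch (not part of this module): collecting attribute names with
# iterate(); 'dset' stands for a hypothetical open high-level Dataset:
#
#     from h5py import h5a
#     names = []
#     h5a.iterate(dset.id, lambda name: names.append(name))
#     # the callback receives each name as bytes and returns None, so every
#     # attribute is visited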



# === Attribute class & methods ===============================================

cdef class AttrID(ObjectID):

    """
        Logical representation of an HDF5 attribute identifier.

        Objects of this class can be used in any HDF5 function call
        which expects an attribute identifier.  Additionally, all ``H5A*``
        functions which always take an attribute instance as the first
        argument are presented as methods of this class.

        * Hashable: No
        * Equality: Identifier comparison
    """

    @property
    def name(self):
        """The attribute's name"""
        with phil:
            return self.get_name()

    @property
    def shape(self):
        """A Numpy-style shape tuple representing the attribute's dataspace"""
        cdef SpaceID space
        with phil:
            space = self.get_space()
            return space.get_simple_extent_dims()

    @property
    def dtype(self):
        """A Numpy-stype dtype object representing the attribute's datatype"""
        cdef TypeID tid
        with phil:
            tid = self.get_type()
            return tid.py_dtype()


    @with_phil
    def read(self, ndarray arr not None, TypeID mtype=None):
        """(NDARRAY arr, TypeID mtype=None)

        Read the attribute data into the given Numpy array.  Note that the
        Numpy array must have the same shape as the HDF5 attribute, and a
        conversion-compatible datatype.

        The Numpy array must be writable and C-contiguous.  If this is not
        the case, the read will fail with an exception.

        If provided, the HDF5 TypeID mtype will override the array's dtype.
        """
        cdef hid_t space_id
        space_id = 0

        try:
            space_id = H5Aget_space(self.id)
            check_numpy_write(arr, space_id)

            if mtype is None:
                mtype = py_create(arr.dtype)

            attr_rw(self.id, mtype.id, PyArray_DATA(arr), 1)

        finally:
            if space_id:
                H5Sclose(space_id)


    @with_phil
    def write(self, ndarray arr not None, TypeID mtype=None):
        """(NDARRAY arr)

        Write the contents of a Numpy array to the attribute.  Note that
        the Numpy array must have the same shape as the HDF5 attribute, and
        a conversion-compatible datatype.

        The Numpy array must be C-contiguous.  If this is not the case,
        the write will fail with an exception.
        """
        cdef hid_t space_id
        space_id = 0

        try:
            space_id = H5Aget_space(self.id)
            check_numpy_read(arr, space_id)

            if mtype is None:
                mtype = py_create(arr.dtype)

            attr_rw(self.id, mtype.id, PyArray_DATA(arr), 0)

        finally:
            if space_id:
                H5Sclose(space_id)


    @with_phil
    def get_name(self):
        """() => STRING name

        Determine the name of this attribute.
        """
        cdef int blen
        cdef char* buf
        buf = NULL

        try:
            blen = H5Aget_name(self.id, 0, NULL)
            assert blen >= 0
            buf = <char*>emalloc(sizeof(char)*blen+1)
            blen = H5Aget_name(self.id, blen+1, buf)
            strout = buf
        finally:
            efree(buf)

        return strout


    @with_phil
    def get_space(self):
        """() => SpaceID

        Create and return a copy of the attribute's dataspace.
        """
        return SpaceID(H5Aget_space(self.id))


    @with_phil
    def get_type(self):
        """() => TypeID

        Create and return a copy of the attribute's datatype.
        """
        return typewrap(H5Aget_type(self.id))


    @with_phil
    def get_storage_size(self):
        """() => INT

        Get the amount of storage required for this attribute.
        """
        return H5Aget_storage_size(self.id)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5ac.pxd0000644000175000017500000000057714045746670015631 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

cdef class CacheConfig:
    cdef H5AC_cache_config_t cache_config
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5ac.pyx0000644000175000017500000001415714675110407015645 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.


"""
    Low-level HDF5 "H5AC" cache configuration interface.
"""


cdef class CacheConfig:
    """Represents H5AC_cache_config_t objects

    """

    #cdef H5AC_cache_config_t cache_config
    #     /* general configuration fields: */
    def __cinit__(self):
        self.cache_config.version = H5AC__CURR_CACHE_CONFIG_VERSION
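
    # Illustrative usage sketch (not part of this module): a CacheConfig is
    # normally obtained from, and handed back to, a low-level FileID, e.g.:
    #
    #     from h5py import h5f
    #     fid = h5f.open(b"example.h5", h5f.ACC_RDWR)   # file name is hypothetical
    #     conf = fid.get_mdc_config()
    #     conf.max_size = 64 * 1024 * 1024              # raise metadata cache ceiling
    #     fid.set_mdc_config(conf)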

    @property
    def version(self):
        return self.cache_config.version

    @version.setter
    def version(self, int val):
        self.cache_config.version = val

    @property
    def rpt_fcn_enabled(self):
        return self.cache_config.rpt_fcn_enabled

    @rpt_fcn_enabled.setter
    def rpt_fcn_enabled(self, hbool_t val):
        self.cache_config.rpt_fcn_enabled = val

    @property
    def evictions_enabled(self):
        return self.cache_config.evictions_enabled

    @evictions_enabled.setter
    def evictions_enabled(self, hbool_t val):
        self.cache_config.evictions_enabled = val

    @property
    def set_initial_size(self):
        return self.cache_config.set_initial_size

    @set_initial_size.setter
    def set_initial_size(self, hbool_t val):
        self.cache_config.set_initial_size = val

    @property
    def initial_size(self):
        return self.cache_config.initial_size

    @initial_size.setter
    def initial_size(self, size_t val):
        self.cache_config.initial_size = val

    @property
    def min_clean_fraction(self):
        return self.cache_config.min_clean_fraction

    @min_clean_fraction.setter
    def min_clean_fraction(self, double val):
        self.cache_config.min_clean_fraction = val

    @property
    def max_size(self):
        return self.cache_config.max_size

    @max_size.setter
    def max_size(self, size_t val):
        self.cache_config.max_size = val

    @property
    def min_size(self):
        return self.cache_config.min_size

    @min_size.setter
    def min_size(self, size_t val):
        self.cache_config.min_size = val

    @property
    def epoch_length(self):
        return self.cache_config.epoch_length

    @epoch_length.setter
    def epoch_length(self, long int val):
        self.cache_config.epoch_length = val

    #    /* size increase control fields: */
    @property
    def incr_mode(self):
        return self.cache_config.incr_mode

    @incr_mode.setter
    def incr_mode(self, H5C_cache_incr_mode val):
        self.cache_config.incr_mode = val

    @property
    def lower_hr_threshold(self):
        return self.cache_config.lower_hr_threshold

    @lower_hr_threshold.setter
    def lower_hr_threshold(self, double val):
        self.cache_config.lower_hr_threshold = val

    @property
    def increment(self):
        return self.cache_config.increment

    @increment.setter
    def increment(self, double val):
        self.cache_config.increment = val

    @property
    def apply_max_increment(self):
        return self.cache_config.apply_max_increment

    @apply_max_increment.setter
    def apply_max_increment(self, hbool_t val):
        self.cache_config.apply_max_increment = val

    @property
    def max_increment(self):
        return self.cache_config.max_increment

    @max_increment.setter
    def max_increment(self, size_t val):
        self.cache_config.max_increment = val

    @property
    def flash_incr_mode(self):
        return self.cache_config.flash_incr_mode

    @flash_incr_mode.setter
    def flash_incr_mode(self, H5C_cache_flash_incr_mode val):
        self.cache_config.flash_incr_mode = val

    @property
    def flash_multiple(self):
        return self.cache_config.flash_multiple

    @flash_multiple.setter
    def flash_multiple(self, double val):
        self.cache_config.flash_multiple = val

    @property
    def flash_threshold(self):
        return self.cache_config.flash_threshold

    @flash_threshold.setter
    def flash_threshold(self, double val):
        self.cache_config.flash_threshold = val

    # /* size decrease control fields: */
    @property
    def decr_mode(self):
        return self.cache_config.decr_mode

    @decr_mode.setter
    def decr_mode(self, H5C_cache_decr_mode val):
        self.cache_config.decr_mode = val

    @property
    def upper_hr_threshold(self):
        return self.cache_config.upper_hr_threshold

    @upper_hr_threshold.setter
    def upper_hr_threshold(self, double val):
        self.cache_config.upper_hr_threshold = val

    @property
    def decrements(self):
        return self.cache_config.decrement

    @decrements.setter
    def decrements(self, double val):
        self.cache_config.decrement = val

    @property
    def apply_max_decrement(self):
        return self.cache_config.apply_max_decrement

    @apply_max_decrement.setter
    def apply_max_decrement(self, hbool_t val):
        self.cache_config.apply_max_decrement = val

    @property
    def max_decrement(self):
        return self.cache_config.max_decrement

    @max_decrement.setter
    def max_decrement(self, size_t val):
        self.cache_config.max_decrement = val

    @property
    def epochs_before_eviction(self):
        return self.cache_config.epochs_before_eviction

    @epochs_before_eviction.setter
    def epochs_before_eviction(self, int val):
        self.cache_config.epochs_before_eviction = val

    @property
    def apply_empty_reserve(self):
        return self.cache_config.apply_empty_reserve

    @apply_empty_reserve.setter
    def apply_empty_reserve(self, hbool_t val):
        self.cache_config.apply_empty_reserve = val

    @property
    def empty_reserve(self):
        return self.cache_config.empty_reserve

    @empty_reserve.setter
    def empty_reserve(self, double val):
        self.cache_config.empty_reserve = val

    # /* parallel configuration fields: */
    @property
    def dirty_bytes_threshold(self):
        return self.cache_config.dirty_bytes_threshold

    @dirty_bytes_threshold.setter
    def dirty_bytes_threshold(self, int val):
        self.cache_config.dirty_bytes_threshold = val
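
    # Example (illustrative sketch, not part of the library): these properties are
    # normally used by fetching a CacheConfig from an open file, adjusting a few
    # fields, and applying it back; the file name "example.h5" is a placeholder.
    #
    #     from h5py import h5f
    #     fid = h5f.open(b"example.h5")
    #     conf = fid.get_mdc_config()
    #     conf.set_initial_size = True
    #     conf.initial_size = 8 * 1024 * 1024     # start with an 8 MiB cache
    #     conf.max_size = 64 * 1024 * 1024        # allow growth up to 64 MiB
    #     fid.set_mdc_config(conf)
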
h5py-3.13.0/h5py/h5d.pxd
# cython: language_level=3
## This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cdef class DatasetID(ObjectID):
    cdef object _dtype
h5py-3.13.0/h5py/h5d.pyx
# cython: language_level=3
## This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.
"""
    Provides access to the low-level HDF5 "H5D" dataset interface.
"""

include "config.pxi"

# Compile-time imports

from collections import namedtuple
cimport cython
from ._objects cimport pdefault
from numpy cimport ndarray, import_array, PyArray_DATA
from .utils cimport  check_numpy_read, check_numpy_write, \
                     convert_tuple, convert_dims, emalloc, efree
from .h5t cimport TypeID, typewrap, py_create
from .h5s cimport SpaceID
from .h5p cimport PropID, propwrap
from ._proxy cimport dset_rw

from ._objects import phil, with_phil
from cpython cimport PyBUF_ANY_CONTIGUOUS, \
                     PyBuffer_Release, \
                     PyBytes_AsString, \
                     PyBytes_FromStringAndSize, \
                     PyObject_GetBuffer


# Initialization
import_array()

# === Public constants and data structures ====================================

COMPACT     = H5D_COMPACT
CONTIGUOUS  = H5D_CONTIGUOUS
CHUNKED     = H5D_CHUNKED

ALLOC_TIME_DEFAULT  = H5D_ALLOC_TIME_DEFAULT
ALLOC_TIME_LATE     = H5D_ALLOC_TIME_LATE
ALLOC_TIME_EARLY    = H5D_ALLOC_TIME_EARLY
ALLOC_TIME_INCR     = H5D_ALLOC_TIME_INCR

SPACE_STATUS_NOT_ALLOCATED  = H5D_SPACE_STATUS_NOT_ALLOCATED
SPACE_STATUS_PART_ALLOCATED = H5D_SPACE_STATUS_PART_ALLOCATED
SPACE_STATUS_ALLOCATED      = H5D_SPACE_STATUS_ALLOCATED

FILL_TIME_ALLOC = H5D_FILL_TIME_ALLOC
FILL_TIME_NEVER = H5D_FILL_TIME_NEVER
FILL_TIME_IFSET = H5D_FILL_TIME_IFSET

FILL_VALUE_UNDEFINED    = H5D_FILL_VALUE_UNDEFINED
FILL_VALUE_DEFAULT      = H5D_FILL_VALUE_DEFAULT
FILL_VALUE_USER_DEFINED = H5D_FILL_VALUE_USER_DEFINED

VIRTUAL = H5D_VIRTUAL
VDS_FIRST_MISSING   = H5D_VDS_FIRST_MISSING
VDS_LAST_AVAILABLE  = H5D_VDS_LAST_AVAILABLE

StoreInfo = namedtuple('StoreInfo',
                       'chunk_offset, filter_mask, byte_offset, size')

# === Dataset chunk iterator ==================================================

IF HDF5_VERSION >= (1, 12, 3) or (HDF5_VERSION >= (1, 10, 10) and HDF5_VERSION < (1, 10, 99)):

    cdef class _ChunkVisitor:
        cdef int rank
        cdef object func
        cdef object retval

        def __init__(self, rank, func):
            self.rank = rank
            self.func = func
            self.retval = None


    cdef int _cb_chunk_info(const hsize_t *offset, unsigned filter_mask, haddr_t addr, hsize_t size, void *op_data) except -1 with gil:
        """Callback function for chunk iteration. (Not to be used directly.)

        This function is called for every chunk with the following information:
            * offset - Logical position of the chunk’s first element in units of dataset elements
            * filter_mask - Bitmask indicating the filters used when the chunk was written
            * addr - Chunk file address
            * size - Chunk size in bytes, 0 if the chunk does not exist

        Chunk information is converted to a StoreInfo namedtuple and passed as input
        to the user-supplied callback object in the "op_data".

        Feature requires: HDF5 1.10.10 or any later 1.10
                          HDF5 1.12.3 or later

        .. versionadded:: 3.8
        """
        cdef _ChunkVisitor visit
        cdef object chunk_info
        cdef tuple cot

        visit = <_ChunkVisitor>op_data
        if addr != HADDR_UNDEF:
            cot = convert_dims(offset, visit.rank)
            chunk_info = StoreInfo(cot, filter_mask, addr, size)
        else:
            chunk_info = StoreInfo(None, filter_mask, None, size)

        visit.retval = visit.func(chunk_info)
        if visit.retval is not None:
            return 1
        return 0

# === Dataset operations ======================================================

@with_phil
def create(ObjectID loc not None, object name, TypeID tid not None,
           SpaceID space not None, PropID dcpl=None, PropID lcpl=None,
           PropID dapl = None):
    """ (objectID loc, STRING name or None, TypeID tid, SpaceID space,
         PropDCID dcpl=None, PropID lcpl=None) => DatasetID

    Create a new dataset.  If "name" is None, the dataset will be
    anonymous.
    """
    cdef hid_t dsid
    cdef char* cname = NULL
    if name is not None:
        cname = name

    if cname != NULL:
        dsid = H5Dcreate(loc.id, cname, tid.id, space.id,
                 pdefault(lcpl), pdefault(dcpl), pdefault(dapl))
    else:
        dsid = H5Dcreate_anon(loc.id, tid.id, space.id,
                 pdefault(dcpl), pdefault(dapl))
    return DatasetID(dsid)

@with_phil
def open(ObjectID loc not None, char* name, PropID dapl=None):
    """ (ObjectID loc, STRING name, PropID dapl=None) => DatasetID

    Open an existing dataset attached to a group or file object, by name.

    If specified, dapl may be a dataset access property list.
    """
    return DatasetID(H5Dopen(loc.id, name, pdefault(dapl)))
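
# Example (illustrative sketch, not part of the library): creating and re-opening
# a dataset through this low-level interface.  The file name "example.h5" and the
# dataset name b"data" are placeholders.
#
#     import numpy as np
#     from h5py import h5f, h5s, h5t, h5d
#
#     fid = h5f.create(b"example.h5")                 # new file (ACC_TRUNC)
#     space = h5s.create_simple((100,))               # 1-D dataspace of 100 elements
#     tid = h5t.py_create(np.dtype('f8'))             # HDF5 type from a NumPy dtype
#     dsid = h5d.create(fid, b"data", tid, space)     # create the dataset
#     dsid2 = h5d.open(fid, b"data")                  # open it again by name
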

# --- Proxy functions for safe(r) threading -----------------------------------


cdef class DatasetID(ObjectID):

    """
        Represents an HDF5 dataset identifier.

        Objects of this class may be used in any HDF5 function which expects
        a dataset identifier.  Also, all H5D* functions which take a dataset
        instance as their first argument are presented as methods of this
        class.

        Properties:
        dtype:  Numpy dtype representing the dataset type
        shape:  Numpy-style shape tuple representing the dataspace
        rank:   Integer giving dataset rank

        * Hashable: Yes, unless anonymous
        * Equality: True HDF5 identity unless anonymous
    """

    @property
    def dtype(self):
        """ Numpy dtype object representing the dataset type """
        # Dataset type can't change
        cdef TypeID tid
        with phil:
            if self._dtype is None:
                tid = self.get_type()
                self._dtype = tid.dtype
            return self._dtype

    @property
    def shape(self):
        """ Numpy-style shape tuple representing the dataspace """
        # Shape can change (DatasetID.extend), so don't cache it
        cdef SpaceID sid
        with phil:
            sid = self.get_space()
            return sid.get_simple_extent_dims()

    @property
    def rank(self):
        """ Integer giving the dataset rank (0 = scalar) """
        cdef SpaceID sid
        with phil:
            sid = self.get_space()
            return sid.get_simple_extent_ndims()


    @with_phil
    def read(self, SpaceID mspace not None, SpaceID fspace not None,
             ndarray arr_obj not None, TypeID mtype=None,
             PropID dxpl=None):
        """ (SpaceID mspace, SpaceID fspace, NDARRAY arr_obj,
             TypeID mtype=None, PropDXID dxpl=None)

            Read data from an HDF5 dataset into a Numpy array.

            It is your responsibility to ensure that the memory dataspace
            provided is compatible with the shape of the Numpy array.  Since a
            wide variety of dataspace configurations are possible, this is not
            checked.  You can easily crash Python by reading in data from too
            large a dataspace.

            If a memory datatype is not specified, one will be auto-created
            based on the array's dtype.

            The provided Numpy array must be writable and C-contiguous.  If
            this is not the case, ValueError will be raised and the read will
            fail.  Keyword dxpl may be a dataset transfer property list.
        """
        cdef hid_t self_id, mtype_id, mspace_id, fspace_id, plist_id
        cdef void* data
        cdef int oldflags

        if mtype is None:
            mtype = py_create(arr_obj.dtype)
        check_numpy_write(arr_obj, -1)

        self_id = self.id
        mtype_id = mtype.id
        mspace_id = mspace.id
        fspace_id = fspace.id
        plist_id = pdefault(dxpl)
        data = PyArray_DATA(arr_obj)

        dset_rw(self_id, mtype_id, mspace_id, fspace_id, plist_id, data, 1)


    @with_phil
    def write(self, SpaceID mspace not None, SpaceID fspace not None,
              ndarray arr_obj not None, TypeID mtype=None,
              PropID dxpl=None):
        """ (SpaceID mspace, SpaceID fspace, NDARRAY arr_obj,
             TypeID mtype=None, PropDXID dxpl=None)

            Write data from a Numpy array to an HDF5 dataset. Keyword dxpl may
            be a dataset transfer property list.

            It is your responsibility to ensure that the memory dataspace
            provided is compatible with the shape of the Numpy array.  Since a
            wide variety of dataspace configurations are possible, this is not
            checked.  You can easily crash Python by writing data from too
            large a dataspace.

            If a memory datatype is not specified, one will be auto-created
            based on the array's dtype.

            The provided Numpy array must be C-contiguous.  If this is not the
            case, ValueError will be raised and the read will fail.
        """
        cdef hid_t self_id, mtype_id, mspace_id, fspace_id, plist_id
        cdef void* data
        cdef int oldflags

        if mtype is None:
            mtype = py_create(arr_obj.dtype)
        check_numpy_read(arr_obj, -1)

        self_id = self.id
        mtype_id = mtype.id
        mspace_id = mspace.id
        fspace_id = fspace.id
        plist_id = pdefault(dxpl)
        data = PyArray_DATA(arr_obj)

        dset_rw(self_id, mtype_id, mspace_id, fspace_id, plist_id, data, 0)
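
    # Example (illustrative sketch, not part of the library): a full-dataset
    # round trip with read()/write(), assuming "dsid" is a DatasetID for a 1-D
    # dataset of 100 float64 elements (see h5d.create above).
    #
    #     import numpy as np
    #     from h5py import h5s
    #
    #     arr = np.arange(100, dtype='f8')
    #     mspace = h5s.create_simple((100,))    # memory dataspace matching arr
    #     fspace = dsid.get_space()             # file dataspace of the dataset
    #     dsid.write(mspace, fspace, arr)
    #
    #     out = np.empty(100, dtype='f8')
    #     dsid.read(mspace, fspace, out)        # out now holds the written data
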


    @with_phil
    def extend(self, tuple shape):
        """ (TUPLE shape)

            Extend the given dataset so it's at least as big as "shape".  Note
            that a dataset may only be extended up to the maximum dimensions of
            its dataspace, which are fixed when the dataset is created.
        """
        cdef int rank
        cdef hid_t space_id = 0
        cdef hsize_t* dims = NULL

        try:
            space_id = H5Dget_space(self.id)
            rank = H5Sget_simple_extent_ndims(space_id)

            if len(shape) != rank:
                raise TypeError("New shape length (%d) must match dataset rank (%d)" % (len(shape), rank))

            dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(shape, dims, rank)
            H5Dextend(self.id, dims)

        finally:
            efree(dims)
            if space_id:
                H5Sclose(space_id)


    @with_phil
    def set_extent(self, tuple shape):
        """ (TUPLE shape)

            Set the size of the dataspace to match the given shape.  If the new
            size is larger in any dimension, it must be compatible with the
            maximum dataspace size.
        """
        cdef int rank
        cdef hid_t space_id = 0
        cdef hsize_t* dims = NULL

        try:
            space_id = H5Dget_space(self.id)
            rank = H5Sget_simple_extent_ndims(space_id)

            if len(shape) != rank:
                raise TypeError("New shape length (%d) must match dataset rank (%d)" % (len(shape), rank))

            dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(shape, dims, rank)
            H5Dset_extent(self.id, dims)

        finally:
            efree(dims)
            if space_id:
                H5Sclose(space_id)


    @with_phil
    def get_space(self):
        """ () => SpaceID

            Create and return a new copy of the dataspace for this dataset.
        """
        return SpaceID(H5Dget_space(self.id))


    @with_phil
    def get_space_status(self):
        """ () => INT space_status_code

            Determine if space has been allocated for a dataset.
            Return value is one of:

            * SPACE_STATUS_NOT_ALLOCATED
            * SPACE_STATUS_PART_ALLOCATED
            * SPACE_STATUS_ALLOCATED
        """
        cdef H5D_space_status_t status
        H5Dget_space_status(self.id, &status)
        return status


    @with_phil
    def get_type(self):
        """ () => TypeID

            Create and return a new copy of the datatype for this dataset.
        """
        return typewrap(H5Dget_type(self.id))


    @with_phil
    def get_create_plist(self):
        """ () => PropDCID

            Create and return a new copy of the dataset creation property list
            used when this dataset was created.
        """
        return propwrap(H5Dget_create_plist(self.id))


    @with_phil
    def get_access_plist(self):
        """ () => PropDAID

            Create and return a new copy of the dataset access property list.
        """
        return propwrap(H5Dget_access_plist(self.id))


    @with_phil
    def get_offset(self):
        """ () => LONG offset or None

            Get the offset of this dataset in the file, in bytes, or None if
            it doesn't have one.  This is always the case for datasets which
            use chunked storage, compact datasets, and datasets for which space
            has not yet been allocated in the file.
        """
        cdef haddr_t offset
        offset = H5Dget_offset(self.id)
        if offset == HADDR_UNDEF:
            return None
        return offset


    @with_phil
    def get_storage_size(self):
        """ () => LONG storage_size

            Report the size of storage, in bytes, that is allocated in the
            file for the dataset's raw data. The reported amount is the storage
            allocated in the written file, which will typically differ from the
            space required to hold a dataset in working memory (any associated
            HDF5 metadata is excluded).

            For contiguous datasets, the returned size equals the current
            allocated size of the raw data. For unfiltered chunked datasets, the
            returned size is the number of allocated chunks times the chunk
            size. For filtered chunked datasets, the returned size is the space
            required to store the filtered data.
        """
        return H5Dget_storage_size(self.id)


    @with_phil
    def flush(self):
        """ no return

        Flushes all buffers associated with a dataset to disk.

        This function causes all buffers associated with a dataset to be
        immediately flushed to disk without removing the data from the cache.

        Use this in SWMR write mode to allow readers to be updated with the
        dataset changes.

        Feature requires: HDF5 1.9.178
        """
        H5Dflush(self.id)

    @with_phil
    def refresh(self):
        """ no return

        Refreshes all buffers associated with a dataset.

        This function causes all buffers associated with a dataset to be
        cleared and immediately re-loaded with updated contents from disk.

        This function essentially closes the dataset, evicts all metadata
        associated with it from the cache, and then re-opens the dataset.
        The reopened dataset is automatically re-registered with the same ID.

        Use this in SWMR read mode to poll for dataset changes.

        Feature requires: HDF5 1.9.178
        """
        H5Drefresh(self.id)

    def write_direct_chunk(self, offsets, data, filter_mask=0x00000000, PropID dxpl=None):
        """ (offsets, data, uint32_t filter_mask=0x00000000, PropID dxpl=None)

        This function bypasses any filters HDF5 would normally apply to
        written data. However, calling code may apply filters (e.g. gzip
        compression) itself before writing the data.

        `data` is a Python object that implements the Py_buffer interface.
        In the case of an ndarray, the shape and dtype are ignored; it is the
        user's responsibility to make sure they are compatible with the
        dataset.

        `filter_mask` is a bit field of up to 32 values. It records which
        filters have been applied to this chunk, of the filter pipeline
        defined for that dataset. Each bit set to `1` means that the filter
        in the corresponding position in the pipeline was not applied.
        So the default value of `0` means that all defined filters have
        been applied to the data before calling this function.
        """

        cdef hid_t dset_id
        cdef hid_t dxpl_id
        cdef hid_t space_id = 0
        cdef hsize_t *offset = NULL
        cdef size_t data_size
        cdef int rank
        cdef Py_buffer view

        dset_id = self.id
        dxpl_id = pdefault(dxpl)
        space_id = H5Dget_space(self.id)
        rank = H5Sget_simple_extent_ndims(space_id)

        if len(offsets) != rank:
            raise TypeError("offset length (%d) must match dataset rank (%d)" % (len(offsets), rank))

        try:
            offset = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(offsets, offset, rank)
            PyObject_GetBuffer(data, &view, PyBUF_ANY_CONTIGUOUS)
            H5DOwrite_chunk(dset_id, dxpl_id, filter_mask, offset, view.len, view.buf)
        finally:
            efree(offset)
            PyBuffer_Release(&view)
            if space_id:
                H5Sclose(space_id)
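
    # Example (illustrative sketch, not part of the library): writing one chunk of
    # an uncompressed chunked dataset directly, assuming "dsid" refers to a 2-D
    # dataset stored with 16x16 chunks.
    #
    #     import numpy as np
    #     chunk = np.arange(256, dtype='f4').reshape(16, 16)
    #     dsid.write_direct_chunk((0, 0), chunk.tobytes())   # chunk at logical offset (0, 0)
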


    @cython.boundscheck(False)
    @cython.wraparound(False)
    def read_direct_chunk(self, offsets, PropID dxpl=None, unsigned char[::1] out=None):
        """ (offsets, PropID dxpl=None, out=None)

        Read a chunk's stored data directly from the file, at the position
        given by the `offsets` argument, bypassing any filters HDF5 would
        normally apply when reading.  The returned data is therefore still in
        its stored (possibly compressed/filtered) form.

        Returns a tuple containing the `filter_mask` and the raw data
        storing this chunk as bytes if `out` is None, else as a memoryview.

        `filter_mask` is a bit field of up to 32 values. It records which
        filters have been applied to this chunk, of the filter pipeline
        defined for that dataset. Each bit set to `1` means that the filter
        in the corresponding position in the pipeline was not applied to
        compute the raw data. So the default value of `0` means that all
        defined filters have been applied to the raw data.

        If the `out` argument is not None, it must be a writeable
        contiguous 1D array-like of bytes (e.g., `bytearray` or
        `numpy.ndarray`) and large enough to contain the whole chunk.
        """
        cdef hid_t dset_id
        cdef hid_t dxpl_id
        cdef hid_t space_id
        cdef hsize_t *offset = NULL
        cdef int rank
        cdef uint32_t filters
        cdef hsize_t chunk_bytes, out_bytes
        cdef int nb_offsets = len(offsets)
        cdef void * chunk_buffer

        dset_id = self.id
        dxpl_id = pdefault(dxpl)
        space_id = H5Dget_space(dset_id)
        rank = H5Sget_simple_extent_ndims(space_id)
        H5Sclose(space_id)

        if nb_offsets != rank:
            raise TypeError(
                f"offsets length ({nb_offsets}) must match dataset rank ({rank})"
            )

        offset = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        try:
            convert_tuple(offsets, offset, rank)
            H5Dget_chunk_storage_size(dset_id, offset, &chunk_bytes)

            if out is None:
                retval = PyBytes_FromStringAndSize(NULL, chunk_bytes)
                chunk_buffer = PyBytes_AsString(retval)
            else:
                out_bytes = out.shape[0]  # Fast way to get out length
                if out_bytes < chunk_bytes:
                    raise ValueError(
                        f"out buffer is only {out_bytes} bytes, {chunk_bytes} bytes required"
                    )
                retval = memoryview(out[:chunk_bytes])
                chunk_buffer = &out[0]

            H5Dread_chunk(dset_id, dxpl_id, offset, &filters, chunk_buffer)
        finally:
            efree(offset)

        return filters, retval
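
    # Example (illustrative sketch, not part of the library): reading back the raw
    # (still filtered/compressed, if applicable) bytes of the chunk whose first
    # element sits at logical offset (0, 0).
    #
    #     filter_mask, raw = dsid.read_direct_chunk((0, 0))
    #     # "raw" is a bytes object; with no filters applied it can be
    #     # reinterpreted directly, e.g. np.frombuffer(raw, dtype='f4')
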

    @with_phil
    def get_num_chunks(self, SpaceID space=None):
        """ (SpaceID space=None) => INT num_chunks

        Retrieve the number of chunks that have nonempty intersection with a
        specified dataspace. Currently, this function only gets the number
        of all written chunks, regardless of the dataspace.

        .. versionadded:: 3.0
        """
        cdef hsize_t num_chunks

        if space is None:
            space = self.get_space()
        H5Dget_num_chunks(self.id, space.id, &num_chunks)
        return num_chunks

    @with_phil
    def get_chunk_info(self, hsize_t index, SpaceID space=None):
        """ (hsize_t index, SpaceID space=None) => StoreInfo

        Retrieve storage information about a chunk specified by its index.

        .. versionadded:: 3.0
        """
        cdef haddr_t byte_offset
        cdef hsize_t size
        cdef hsize_t *chunk_offset
        cdef unsigned filter_mask
        cdef hid_t space_id = 0
        cdef int rank

        if space is None:
            space_id = H5Dget_space(self.id)
        else:
            space_id = space.id

        rank = H5Sget_simple_extent_ndims(space_id)
        chunk_offset = <hsize_t*>emalloc(sizeof(hsize_t) * rank)
        H5Dget_chunk_info(self.id, space_id, index, chunk_offset,
                          &filter_mask, &byte_offset, &size)
        cdef tuple cot = convert_dims(chunk_offset, rank)
        efree(chunk_offset)

        if space is None:
            H5Sclose(space_id)

        return StoreInfo(cot if byte_offset != HADDR_UNDEF else None,
                         filter_mask,
                         byte_offset if byte_offset != HADDR_UNDEF else None,
                         size)


    @with_phil
    def get_chunk_info_by_coord(self, tuple chunk_offset not None):
        """ (TUPLE chunk_offset) => StoreInfo

        Retrieve information about a chunk specified by the array
        address of the chunk’s first element in each dimension.

        .. versionadded:: 3.0
        """
        cdef haddr_t byte_offset
        cdef hsize_t size
        cdef unsigned filter_mask
        cdef hid_t space_id = 0
        cdef int rank
        cdef hsize_t *co = NULL

        space_id = H5Dget_space(self.id)
        rank = H5Sget_simple_extent_ndims(space_id)
        H5Sclose(space_id)
        co = <hsize_t*>emalloc(sizeof(hsize_t) * rank)
        convert_tuple(chunk_offset, co, rank)
        H5Dget_chunk_info_by_coord(self.id, co,
                                   &filter_mask, &byte_offset,
                                   &size)
        efree(co)

        return StoreInfo(chunk_offset if byte_offset != HADDR_UNDEF else None,
                         filter_mask,
                         byte_offset if byte_offset != HADDR_UNDEF else None,
                         size)

    IF HDF5_VERSION >= (1, 12, 3) or (HDF5_VERSION >= (1, 10, 10) and HDF5_VERSION < (1, 10, 99)):

        @with_phil
        def chunk_iter(self, object func, PropID dxpl=None):
            """(CALLABLE func, PropDXID dxpl=None) => 

            Iterate over each chunk and invoke user-supplied "func" callable object.
            The "func" receives chunk information: logical offset, filter mask,
            file location, and size. Any not-None return value from "func" ends iteration.

            Feature requires: HDF5 1.10.10 or any later 1.10
                            HDF5 1.12.3 or later

            .. versionadded:: 3.8
            """
            cdef int rank
            cdef hid_t space_id
            cdef _ChunkVisitor visit

            space_id = H5Dget_space(self.id)
            rank = H5Sget_simple_extent_ndims(space_id)
            H5Sclose(space_id)
            visit = _ChunkVisitor(rank, func)
            H5Dchunk_iter(self.id, pdefault(dxpl), _cb_chunk_info, visit)

            return visit.retval
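
        # Example (illustrative sketch, not part of the library): collecting a
        # StoreInfo record for every written chunk of a chunked dataset "dsid".
        #
        #     infos = []
        #     dsid.chunk_iter(infos.append)   # append() returns None, so all chunks are visited
        #     for si in infos:
        #         print(si.chunk_offset, si.byte_offset, si.size)
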
h5py-3.13.0/h5py/h5ds.pxd
# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *
h5py-3.13.0/h5py/h5ds.pyx
# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Low-level HDF5 "H5DS" Dimension Scale interface.
"""

# Compile-time imports
from .h5d cimport DatasetID
from .utils cimport emalloc, efree

from ._objects import phil, with_phil

@with_phil
def set_scale(DatasetID dset not None, char* dimname=''):
    """(DatasetID dset, STRING dimname)

    Convert dataset dset to a dimension scale, with optional name dimname.
    """
    H5DSset_scale(dset.id, dimname)

@with_phil
def is_scale(DatasetID dset not None):
    """(DatasetID dset) => BOOL

    Determines whether dset is a dimension scale.
    """
    return <bint>(H5DSis_scale(dset.id))

@with_phil
def attach_scale(DatasetID dset not None, DatasetID dscale not None, unsigned
                 int idx):
    """(DatasetID dset, DatasetID dscale, UINT idx)

    Attach Dimension Scale dscale to Dimension idx of Dataset dset.
    """
    H5DSattach_scale(dset.id, dscale.id, idx)
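
# Example (illustrative sketch, not part of the library): turning a 1-D dataset
# into a Dimension Scale and attaching it to dimension 0 of another dataset.
# "dset_id" and "coord_id" are assumed to be DatasetID objects.
#
#     from h5py import h5ds
#     h5ds.set_scale(coord_id, b'x')            # coord_id becomes a scale named "x"
#     h5ds.attach_scale(dset_id, coord_id, 0)   # attach it to dimension 0
#     assert h5ds.is_attached(dset_id, coord_id, 0)
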

@with_phil
def is_attached(DatasetID dset not None, DatasetID dscale not None,
                unsigned int idx):
    """(DatasetID dset, DatasetID dscale, UINT idx) => BOOL

    Report if Dimension Scale dscale is currently attached to Dimension
    idx of Dataset dset.
    """
    return <bint>(H5DSis_attached(dset.id, dscale.id, idx))

@with_phil
def detach_scale(DatasetID dset not None, DatasetID dscale not None,
                 unsigned int idx):
    """(DatasetID dset, DatasetID dscale, UINT idx)

    Detach Dimension Scale dscale from the Dimension idx of Dataset dset.
    """
    H5DSdetach_scale(dset.id, dscale.id, idx)

@with_phil
def get_num_scales(DatasetID dset not None, unsigned int dim):
    """(DatasetID dset, UINT dim) => INT number_of_scales

    Determines how many Dimension Scales are attached to Dimension dim
    of Dataset dset.
    """
    return H5DSget_num_scales(dset.id, dim)

@with_phil
def set_label(DatasetID dset not None, unsigned int idx, char* label):
    """(DatasetID dset, UINT idx, STRING label)

    Set label for the Dimension idx of Dataset dset to the value label.
    """
    H5DSset_label(dset.id, idx, label)

@with_phil
def get_label(DatasetID dset not None, unsigned int idx):
    """(DatasetID dset, UINT idx) => STRING name_of_label

    Read and return the label for Dimension idx of Dataset dset.
    """
    cdef ssize_t size
    cdef char* label
    label = NULL

    size = H5DSget_label(dset.id, idx, NULL, 0)
    if size <= 0:
        return b''
    label = <char*>emalloc(sizeof(char)*(size+1))
    try:
        H5DSget_label(dset.id, idx, label, size+1)
        plabel = label
        return plabel
    finally:
        efree(label)

@with_phil
def get_scale_name(DatasetID dscale not None):
    """(DatasetID dscale) => STRING name_of_scale

    Retrieves name of Dimension Scale dscale.
    """
    cdef ssize_t namelen
    cdef char* name = NULL

    namelen = H5DSget_scale_name(dscale.id, NULL, 0)
    if namelen <= 0:
        return b''
    name = <char*>emalloc(sizeof(char)*(namelen+1))
    try:
        H5DSget_scale_name(dscale.id, name, namelen+1)
        pname = name
        return pname
    finally:
        efree(name)


cdef class _DimensionScaleVisitor:

    cdef object func
    cdef object retval

    def __init__(self, func):
        self.func = func
        self.retval = None


cdef herr_t cb_ds_iter(hid_t dset, unsigned int dim, hid_t scale, void* vis_in) except 2 with gil:

    cdef _DimensionScaleVisitor vis = <_DimensionScaleVisitor>vis_in

    # we did not retrieve the scale identifier using the normal machinery,
    # so we need to inc_ref it before using it to create a DatasetID.
    H5Iinc_ref(scale)
    vis.retval = vis.func(DatasetID(scale))

    if vis.retval is not None:
        return 1
    return 0

@with_phil
def iterate(DatasetID dset not None, unsigned int dim, object func,
            int startidx=0):
    """ (DatasetID loc, UINT dim, CALLABLE func, UINT startidx=0)
    => Return value from func

    Iterate a callable (function, method or callable object) over the
    members of a group.  Your callable should have the signature::

        func(STRING name) => Result

    Returning None continues iteration; returning anything else aborts
    iteration and returns that value. Keywords:
    """
    if startidx < 0:
        raise ValueError("Starting index must be non-negative")

    cdef int i = startidx
    cdef _DimensionScaleVisitor vis = _DimensionScaleVisitor(func)

    H5DSiterate_scales(dset.id, dim, &i, cb_ds_iter, vis)

    return vis.retval
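
# Example (illustrative sketch, not part of the library): collecting the names of
# all Dimension Scales attached to dimension 0 of a dataset "dset_id".
#
#     from h5py import h5ds
#     names = []
#     h5ds.iterate(dset_id, 0, lambda scale_id: names.append(h5ds.get_scale_name(scale_id)))
#     # names now holds one bytes object per attached scale
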
h5py-3.13.0/h5py/h5f.pxd
# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID
from .h5g cimport GroupID

cdef class FileID(GroupID):
    pass
h5py-3.13.0/h5py/h5f.pyx
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Low-level operations on HDF5 file objects.
"""

include "config.pxi"

# C level imports
from cpython.buffer cimport PyObject_CheckBuffer, \
                            PyObject_GetBuffer, PyBuffer_Release, \
                            PyBUF_SIMPLE
from ._objects cimport pdefault
from .h5p cimport propwrap, PropFAID, PropFCID
from .h5i cimport wrap_identifier
from .h5ac cimport CacheConfig
from .utils cimport emalloc, efree

# Python level imports
from collections import namedtuple
import gc
from . import _objects, h5o
from ._objects import phil, with_phil

from cpython.bytes cimport PyBytes_FromStringAndSize, PyBytes_AsString

# Initialization

# === Public constants and data structures ====================================

ACC_TRUNC   = H5F_ACC_TRUNC
ACC_EXCL    = H5F_ACC_EXCL
ACC_RDWR    = H5F_ACC_RDWR
ACC_RDONLY  = H5F_ACC_RDONLY
ACC_SWMR_WRITE = H5F_ACC_SWMR_WRITE
ACC_SWMR_READ  = H5F_ACC_SWMR_READ


SCOPE_LOCAL     = H5F_SCOPE_LOCAL
SCOPE_GLOBAL    = H5F_SCOPE_GLOBAL

CLOSE_DEFAULT = H5F_CLOSE_DEFAULT
CLOSE_WEAK  = H5F_CLOSE_WEAK
CLOSE_SEMI  = H5F_CLOSE_SEMI
CLOSE_STRONG = H5F_CLOSE_STRONG

OBJ_FILE    = H5F_OBJ_FILE
OBJ_DATASET = H5F_OBJ_DATASET
OBJ_GROUP   = H5F_OBJ_GROUP
OBJ_DATATYPE = H5F_OBJ_DATATYPE
OBJ_ATTR    = H5F_OBJ_ATTR
OBJ_ALL     = H5F_OBJ_ALL
OBJ_LOCAL   = H5F_OBJ_LOCAL
UNLIMITED   = H5F_UNLIMITED

LIBVER_EARLIEST = H5F_LIBVER_EARLIEST
LIBVER_LATEST = H5F_LIBVER_LATEST
LIBVER_V18 = H5F_LIBVER_V18
LIBVER_V110 = H5F_LIBVER_V110

IF HDF5_VERSION >= VOL_MIN_HDF5_VERSION:
    LIBVER_V112 = H5F_LIBVER_V112

IF HDF5_VERSION >= (1, 13, 0):
    LIBVER_V114 = H5F_LIBVER_V114

FILE_IMAGE_OPEN_RW = H5LT_FILE_IMAGE_OPEN_RW

FSPACE_STRATEGY_FSM_AGGR = H5F_FSPACE_STRATEGY_FSM_AGGR
FSPACE_STRATEGY_PAGE = H5F_FSPACE_STRATEGY_PAGE
FSPACE_STRATEGY_AGGR = H5F_FSPACE_STRATEGY_AGGR
FSPACE_STRATEGY_NONE = H5F_FSPACE_STRATEGY_NONE

# Used in FileID.get_page_buffering_stats()
PageBufStats = namedtuple('PageBufferStats', ['meta', 'raw'])
PageStats = namedtuple('PageStats', ['accesses', 'hits', 'misses', 'evictions', 'bypasses'])


# === File operations =========================================================

@with_phil
def open(char* name, unsigned int flags=H5F_ACC_RDWR, PropFAID fapl=None):
    """(STRING name, UINT flags=ACC_RDWR, PropFAID fapl=None) => FileID

    Open an existing HDF5 file.  Keyword "flags" may be:

    ACC_RDWR
        Open in read-write mode

    ACC_RDONLY
        Open in readonly mode

    Keyword fapl may be a file access property list.
    """
    return FileID(H5Fopen(name, flags, pdefault(fapl)))


@with_phil
def create(char* name, int flags=H5F_ACC_TRUNC, PropFCID fcpl=None,
                                                PropFAID fapl=None):
    """(STRING name, INT flags=ACC_TRUNC, PropFCID fcpl=None,
    PropFAID fapl=None) => FileID

    Create a new HDF5 file.  Keyword "flags" may be:

    ACC_TRUNC
        Truncate an existing file, discarding its data

    ACC_EXCL
        Fail if a conflicting file exists

    To keep the behavior in line with that of Python's built-in functions,
    the default is ACC_TRUNC.  Be careful!
    """
    return FileID(H5Fcreate(name, flags, pdefault(fcpl), pdefault(fapl)))
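
# Example (illustrative sketch, not part of the library): creating a file and then
# re-opening it read-only; "example.h5" is a placeholder name.
#
#     from h5py import h5f
#     fid = h5f.create(b"example.h5", h5f.ACC_EXCL)    # fail if the file already exists
#     fid.close()
#     fid = h5f.open(b"example.h5", h5f.ACC_RDONLY)
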

@with_phil
def open_file_image(image, flags=0):
    """(STRING image, INT flags=0) => FileID

    Load a new HDF5 file into memory.  Keyword "flags" may be:

    FILE_IMAGE_OPEN_RW
        Specifies opening the file image in read/write mode.
    """
    cdef Py_buffer buf

    if not PyObject_CheckBuffer(image):
        raise TypeError("image must support the buffer protocol")

    PyObject_GetBuffer(image, &buf, PyBUF_SIMPLE)
    try:
        return FileID(H5LTopen_file_image(buf.buf, buf.len, flags))
    finally:
        PyBuffer_Release(&buf)


@with_phil
def flush(ObjectID obj not None, int scope=H5F_SCOPE_LOCAL):
    """(ObjectID obj, INT scope=SCOPE_LOCAL)

    Tell the HDF5 library to flush file buffers to disk.  "obj" may
    be the file identifier, or the identifier of any object residing in
    the file.  Keyword "scope" may be:

    SCOPE_LOCAL
        Flush only the given file

    SCOPE_GLOBAL
        Flush the entire virtual file
    """
    H5Fflush(obj.id, scope)


@with_phil
def is_hdf5(char* name):
    """(STRING name) => BOOL

    Determine if a given file is an HDF5 file.  Note this raises an
    exception if the file doesn't exist.
    """
    return <bint>(H5Fis_hdf5(name))


@with_phil
def mount(ObjectID loc not None, char* name, FileID fid not None):
    """(ObjectID loc, STRING name, FileID fid)

    Mount an open file on the group "name" under group loc_id.  Note that
    "name" must already exist.
    """
    H5Fmount(loc.id, name, fid.id, H5P_DEFAULT)


@with_phil
def unmount(ObjectID loc not None, char* name):
    """(ObjectID loc, STRING name)

    Unmount a file, mounted at "name" under group loc_id.
    """
    H5Funmount(loc.id, name)


@with_phil
def get_name(ObjectID obj not None):
    """(ObjectID obj) => STRING

    Determine the name of the file in which the specified object resides.
    """
    cdef ssize_t size
    cdef char* name
    name = NULL

    size = H5Fget_name(obj.id, NULL, 0)
    assert size >= 0
    name = <char*>emalloc(sizeof(char)*(size+1))
    try:
        H5Fget_name(obj.id, name, size+1)
        pname = name
        return pname
    finally:
        efree(name)


@with_phil
def get_obj_count(object where=OBJ_ALL, int types=H5F_OBJ_ALL):
    """(OBJECT where=OBJ_ALL, types=OBJ_ALL) => INT

    Get the number of open objects.

    where
        Either a FileID instance representing an HDF5 file, or the
        special constant OBJ_ALL, to count objects in all files.

    types
        Specify what kinds of object to include.  May be one of OBJ*,
        or any bitwise combination (e.g. ``OBJ_FILE | OBJ_ATTR``).

        The special value OBJ_ALL matches all object types, and
        OBJ_LOCAL will only match objects opened through a specific
        identifier.
    """
    cdef hid_t where_id
    if isinstance(where, FileID):
        where_id = where.id
    elif isinstance(where, int):
        where_id = where
    else:
        raise TypeError("Location must be a FileID or OBJ_ALL.")

    return H5Fget_obj_count(where_id, types)


@with_phil
def get_obj_ids(object where=OBJ_ALL, int types=H5F_OBJ_ALL):
    """(OBJECT where=OBJ_ALL, types=OBJ_ALL) => LIST

    Get a list of identifier instances for open objects.

    where
        Either a FileID instance representing an HDF5 file, or the
        special constant OBJ_ALL, to list objects in all files.

    types
        Specify what kinds of object to include.  May be one of OBJ*,
        or any bitwise combination (e.g. ``OBJ_FILE | OBJ_ATTR``).

        The special value OBJ_ALL matches all object types, and
        OBJ_LOCAL will only match objects opened through a specific
        identifier.
    """
    cdef int count
    cdef int i
    cdef hid_t where_id
    cdef hid_t *obj_list = NULL
    cdef list py_obj_list = []

    if isinstance(where, FileID):
        where_id = where.id
    else:
        try:
            where_id = int(where)
        except TypeError:
            raise TypeError("Location must be a FileID or OBJ_ALL.")

    try:
        count = H5Fget_obj_count(where_id, types)
        obj_list = <hid_t*>emalloc(sizeof(hid_t)*count)

        if count > 0: # HDF5 complains that obj_list is NULL, even if count==0
            # Garbage collection might dealloc a Python object & call H5Idec_ref
            # between getting an HDF5 ID and calling H5Iinc_ref, breaking it.
            # Disable GC until we have inc_ref'd the IDs to keep them alive.
            gc_was_enabled = gc.isenabled()
            gc.disable()
            try:
                H5Fget_obj_ids(where_id, types, count, obj_list)
                for i in range(count):
                    py_obj_list.append(wrap_identifier(obj_list[i]))
                    # The HDF5 function returns a borrowed reference for each hid_t.
                    H5Iinc_ref(obj_list[i])
            finally:
                if gc_was_enabled:
                    gc.enable()

        return py_obj_list

    finally:
        efree(obj_list)
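
# Example (illustrative sketch, not part of the library): checking what is still
# open in a particular file "fid" before closing it.
#
#     from h5py import h5f
#     n = h5f.get_obj_count(fid, h5f.OBJ_DATASET | h5f.OBJ_GROUP)
#     for obj in h5f.get_obj_ids(fid, h5f.OBJ_ALL):
#         print(obj)    # identifier instances wrapping each open object
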


# === FileID implementation ===================================================

cdef class FileID(GroupID):

    """
        Represents an HDF5 file identifier.

        These objects wrap a small portion of the H5F interface; all the
        H5F functions which can take arbitrary objects in addition to
        file identifiers are provided as functions in the h5f module.

        Properties:

        * name:   File name on disk

        Behavior:

        * Hashable: Yes, unique to the file (but not the access mode)
        * Equality: Hash comparison
    """

    @property
    def name(self):
        """ File name on disk (according to h5f.get_name()) """
        with phil:
            return get_name(self)


    @with_phil
    def close(self):
        """()

        Terminate access through this identifier.  Note that depending on
        what property list settings were used to open the file, the
        physical file might not be closed until all remaining open
        identifiers are freed.
        """
        self._close()
        _objects.nonlocal_close()

    @with_phil
    def _close_open_objects(self, int types):
        # Used by File.close(). This avoids the get_obj_ids wrapper, which
        # creates Python objects and increments HDF5 ref counts while we're
        # trying to clean up. E.g. that can be problematic at Python shutdown.
        cdef int count, i
        cdef hid_t *obj_list = NULL

        count = H5Fget_obj_count(self.id, types)
        if count == 0:
            return
        obj_list = <hid_t*>emalloc(sizeof(hid_t) * count)
        try:
            H5Fget_obj_ids(self.id, types, count, obj_list)
            for i in range(count):
                while H5Iis_valid(obj_list[i]):
                    H5Idec_ref(obj_list[i])
        finally:
            efree(obj_list)

    @with_phil
    def reopen(self):
        """() => FileID

        Retrieve another identifier for a file (which must still be open).
        The new identifier is guaranteed to neither be mounted nor contain
        a mounted file.
        """
        return FileID(H5Freopen(self.id))


    @with_phil
    def get_filesize(self):
        """() => LONG size

        Determine the total size (in bytes) of the HDF5 file,
        including any user block.
        """
        cdef hsize_t size
        H5Fget_filesize(self.id, &size)
        return size


    @with_phil
    def get_create_plist(self):
        """() => PropFCID

        Retrieve a copy of the file creation property list used to
        create this file.
        """
        return propwrap(H5Fget_create_plist(self.id))


    @with_phil
    def get_access_plist(self):
        """() => PropFAID

        Retrieve a copy of the file access property list which manages access
        to this file.
        """
        return propwrap(H5Fget_access_plist(self.id))


    @with_phil
    def get_freespace(self):
        """() => LONG freespace

        Determine the amount of free space in this file.  Note that this
        only tracks free space until the file is closed.
        """
        return H5Fget_freespace(self.id)


    @with_phil
    def get_intent(self):
        """ () => INT

        Determine the file's write intent, either of:
        - H5F_ACC_RDONLY
        - H5F_ACC_RDWR
        """
        cdef unsigned int mode
        H5Fget_intent(self.id, &mode)
        return mode


    @with_phil
    def get_vfd_handle(self, fapl=None):
        """ (PropFAID) => INT

        Retrieve the file handle used by the virtual file driver.

        This may not be supported for all file drivers, and the meaning of the
        return value may depend on the file driver.

        The 'family' and 'multi' drivers access multiple files, and a file
        access property list (fapl) can be used to indicate which to access,
        with H5Pset_family_offset or H5Pset_multi_type.
        """
        cdef int *handle
        H5Fget_vfd_handle(self.id, pdefault(fapl), <void**>&handle)
        return handle[0]

    @with_phil
    def get_file_image(self):
        """ () => BYTES

        Retrieves a copy of the image of an existing, open file.
        """

        cdef ssize_t size

        size = H5Fget_file_image(self.id, NULL, 0)
        image = PyBytes_FromStringAndSize(NULL, size)

        H5Fget_file_image(self.id, PyBytes_AsString(image), size)

        return image
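
    # Example (illustrative sketch, not part of the library): snapshotting an open
    # file "fid" to bytes and re-opening that snapshot entirely in memory.
    #
    #     from h5py import h5f
    #     image = fid.get_file_image()          # bytes of the whole HDF5 file
    #     fid2 = h5f.open_file_image(image)     # independent in-memory copy
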

    IF MPI:

        @with_phil
        def set_mpi_atomicity(self, bint atomicity):
            """ (BOOL atomicity)

            For MPI-IO driver, set to atomic (True), which guarantees sequential
            I/O semantics, or non-atomic (False), which improves  performance.

            Default is False.

            Feature requires: Parallel HDF5
            """
            H5Fset_mpi_atomicity(self.id, atomicity)


        @with_phil
        def get_mpi_atomicity(self):
            """ () => BOOL

            Return atomicity setting for MPI-IO driver.

            Feature requires: Parallel HDF5
            """
            cdef hbool_t atom

            H5Fget_mpi_atomicity(self.id, &atom)
            return atom


    @with_phil
    def get_mdc_hit_rate(self):
        """() => DOUBLE

        Retrieve the cache hit rate

        """
        cdef double hit_rate
        H5Fget_mdc_hit_rate(self.id, &hit_rate)
        return hit_rate


    @with_phil
    def get_mdc_size(self):
        """() => (max_size, min_clean_size, cur_size, cur_num_entries) [SIZE_T, SIZE_T, SIZE_T, INT]

        Obtain current metadata cache size data for specified file.

        """
        cdef size_t max_size
        cdef size_t min_clean_size
        cdef size_t cur_size
        cdef int cur_num_entries


        H5Fget_mdc_size(self.id, &max_size, &min_clean_size, &cur_size, &cur_num_entries)

        return (max_size, min_clean_size, cur_size, cur_num_entries)


    @with_phil
    def reset_mdc_hit_rate_stats(self):
        """no return

        Reset the hit-rate statistics.

        """
        H5Freset_mdc_hit_rate_stats(self.id)


    @with_phil
    def get_mdc_config(self):
        """() => CacheConfig
        Return an object holding the metadata cache configuration of this file.
        The configuration is created in memory for every open file with the
        default cache config values; it is not saved to the HDF5 file.
        """

        cdef CacheConfig config = CacheConfig()

        H5Fget_mdc_config(self.id, &config.cache_config)

        return config

    @with_phil
    def set_mdc_config(self, CacheConfig config not None):
        """(CacheConfig) => None
        Set the metadata cache configuration for a file.  The configuration is
        held in memory with the default values for every open file; it is not
        saved to the HDF5 file, and any change lives only until the file is closed.
        """
        # I feel this should have some sanity checking to make sure that
        H5Fset_mdc_config(self.id, &config.cache_config)

    @with_phil
    def start_swmr_write(self):
        """ no return

        Enables SWMR writing mode for a file.

        This function will activate SWMR writing mode for a file associated
        with file_id. This routine will prepare and ensure the file is safe
        for SWMR writing as follows:

            * Check that the file is opened with write access (H5F_ACC_RDWR).
            * Check that the file is opened with the latest library format
              to ensure data structures with check-summed metadata are used.
            * Check that the file is not already marked in SWMR writing mode.
            * Enable reading retries for check-summed metadata to remedy
              possible checksum failures from reading inconsistent metadata
              on a system that is not atomic.
            * Turn off usage of the library’s accumulator to avoid possible
              ordering problem on a system that is not atomic.
            * Perform a flush of the file’s data buffers and metadata to set
              a consistent state for starting SWMR write operations.

        Library objects are groups, datasets, and committed datatypes. For
        the current implementation, groups and datasets can remain open when
        activating SWMR writing mode, but not committed datatypes. Attributes
        attached to objects cannot remain open.

        Feature requires: HDF5 1.9.178
        """
        H5Fstart_swmr_write(self.id)
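
    # Example (illustrative sketch, not part of the library): a minimal SWMR writer
    # loop.  The file "fid" is assumed to be opened ACC_RDWR with the latest library
    # version bound, and "dsid" is a chunked, extendable dataset inside it.
    #
    #     fid.start_swmr_write()
    #     for i in range(10):
    #         # ... extend dsid and write new data ...
    #         dsid.flush()          # make the new data visible to SWMR readers
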

    @with_phil
    def reset_page_buffering_stats(self):
        """ ()

        Reset page buffer statistics for the file.
        """
        H5Freset_page_buffering_stats(self.id)

    @with_phil
    def get_page_buffering_stats(self):
        """ () -> NAMEDTUPLE PageBufStats(NAMEDTUPLE meta=PageStats, NAMEDTUPLE raw=PageStats)

        Retrieve page buffering statistics for the file as the number of
        metadata and raw data accesses, hits, misses, evictions, and
        accesses that bypass the page buffer (bypasses).
        """
        cdef:
            unsigned int accesses[2]
            unsigned int hits[2]
            unsigned int misses[2]
            unsigned int evictions[2]
            unsigned int bypasses[2]

        H5Fget_page_buffering_stats(self.id, &accesses[0], &hits[0],
                                    &misses[0], &evictions[0], &bypasses[0])
        meta = PageStats(int(accesses[0]), int(hits[0]), int(misses[0]),
                         int(evictions[0]), int(bypasses[0]))
        raw = PageStats(int(accesses[1]), int(hits[1]), int(misses[1]),
                        int(evictions[1]), int(bypasses[1]))

        return PageBufStats(meta, raw)

    # === Special methods =====================================================

    # Redefine these methods to use the root group explicitly, because otherwise
    # the setting for link order tracking can be missed.
    @with_phil
    def __iter__(self):
        """ Return an iterator over the names of group members. """
        return iter(h5o.open(self, b'/'))

    @with_phil
    def __reversed__(self):
        """ Return an iterator over group member names in reverse order. """
        return reversed(h5o.open(self, b'/'))
h5py-3.13.0/h5py/h5fd.pxd
# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

# This file contains code or comments from the HDF5 library.  See the file
# licenses/hdf5.txt for the full HDF5 software license.

from .defs cimport *
h5py-3.13.0/h5py/h5fd.pyx
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

# This file contains code or comments from the HDF5 library.  See the file
# licenses/hdf5.txt for the full HDF5 software license.

include "config.pxi"

"""
    File driver constants (H5FD*).
"""

# === Multi-file driver =======================================================

MEM_DEFAULT = H5FD_MEM_DEFAULT
MEM_SUPER = H5FD_MEM_SUPER
MEM_BTREE = H5FD_MEM_BTREE
MEM_DRAW = H5FD_MEM_DRAW
MEM_GHEAP = H5FD_MEM_GHEAP
MEM_LHEAP = H5FD_MEM_LHEAP
MEM_OHDR = H5FD_MEM_OHDR
MEM_NTYPES = H5FD_MEM_NTYPES

# === MPI driver ==============================================================

MPIO_INDEPENDENT = H5FD_MPIO_INDEPENDENT
MPIO_COLLECTIVE = H5FD_MPIO_COLLECTIVE

# === Driver types ============================================================

CORE = H5FD_CORE
FAMILY = H5FD_FAMILY
LOG = H5FD_LOG
MPIO = H5FD_MPIO
MPIPOSIX = -1
MULTI = H5FD_MULTI
SEC2 = H5FD_SEC2
DIRECT = H5FD_DIRECT
STDIO = H5FD_STDIO
ROS3D = H5FD_ROS3
IF UNAME_SYSNAME == "Windows":
    WINDOWS = H5FD_WINDOWS
ELSE:
    WINDOWS = -1

# === Logging driver ==========================================================

LOG_LOC_READ  = H5FD_LOG_LOC_READ   # 0x0001
LOG_LOC_WRITE = H5FD_LOG_LOC_WRITE  # 0x0002
LOG_LOC_SEEK  = H5FD_LOG_LOC_SEEK   # 0x0004
LOG_LOC_IO    = H5FD_LOG_LOC_IO     # (H5FD_LOG_LOC_READ|H5FD_LOG_LOC_WRITE|H5FD_LOG_LOC_SEEK)

# Flags for tracking number of times each byte is read/written
LOG_FILE_READ = H5FD_LOG_FILE_READ  # 0x0008
LOG_FILE_WRITE= H5FD_LOG_FILE_WRITE # 0x0010
LOG_FILE_IO   = H5FD_LOG_FILE_IO    # (H5FD_LOG_FILE_READ|H5FD_LOG_FILE_WRITE)

# Flag for tracking "flavor" (type) of information stored at each byte
LOG_FLAVOR    = H5FD_LOG_FLAVOR     # 0x0020

# Flags for tracking total number of reads/writes/seeks
LOG_NUM_READ  = H5FD_LOG_NUM_READ   # 0x0040
LOG_NUM_WRITE = H5FD_LOG_NUM_WRITE  # 0x0080
LOG_NUM_SEEK  = H5FD_LOG_NUM_SEEK   # 0x0100
LOG_NUM_IO    = H5FD_LOG_NUM_IO     # (H5FD_LOG_NUM_READ|H5FD_LOG_NUM_WRITE|H5FD_LOG_NUM_SEEK)

# Flags for tracking time spent in open/read/write/seek/close
LOG_TIME_OPEN = H5FD_LOG_TIME_OPEN  # 0x0200        # Not implemented yet
LOG_TIME_READ = H5FD_LOG_TIME_READ  # 0x0400        # Not implemented yet
LOG_TIME_WRITE= H5FD_LOG_TIME_WRITE # 0x0800        # Partially implemented (need to track total time)
LOG_TIME_SEEK = H5FD_LOG_TIME_SEEK  # 0x1000        # Partially implemented (need to track total time & track time for seeks during reading)
LOG_TIME_CLOSE= H5FD_LOG_TIME_CLOSE # 0x2000        # Fully implemented
LOG_TIME_IO   = H5FD_LOG_TIME_IO    # (H5FD_LOG_TIME_OPEN|H5FD_LOG_TIME_READ|H5FD_LOG_TIME_WRITE|H5FD_LOG_TIME_SEEK|H5FD_LOG_TIME_CLOSE)

# Flag for tracking allocation of space in file
LOG_ALLOC     = H5FD_LOG_ALLOC      # 0x4000
LOG_ALL       = H5FD_LOG_ALL        # (H5FD_LOG_ALLOC|H5FD_LOG_TIME_IO|H5FD_LOG_NUM_IO|H5FD_LOG_FLAVOR|H5FD_LOG_FILE_IO|H5FD_LOG_LOC_IO)


# Implementation of 'fileobj' Virtual File Driver: HDF5 Virtual File
# Layer wrapper over Python file-like object.
# https://support.hdfgroup.org/HDF5/doc1.8/TechNotes/VFL.html

# HDF5 events (read, write, flush, ...) are dispatched via
# H5FD_class_t (struct of callback pointers, H5FD_fileobj_*). This is
# registered as the handler for 'fileobj' driver via H5FDregister.

# File-like object is passed from Python side via FAPL with
# PropFAID.set_fileobj_driver. Then H5FD_fileobj_open callback acts,
# taking file-like object from FAPL and returning struct
# H5FD_fileobj_t (descendant of base H5FD_t) which will hold file
# state. Other callbacks receive H5FD_fileobj_t and operate on
# f.fileobj. If successful, callbacks must return zero; otherwise
# non-zero value.
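
# Example (illustrative sketch, not part of the library): the high-level API
# selects this 'fileobj' driver automatically when it is given a Python
# file-like object instead of a file name.
#
#     import io, h5py
#     buf = io.BytesIO()
#     with h5py.File(buf, 'w') as f:        # dispatched through H5FD_fileobj_* callbacks
#         f['x'] = [1, 2, 3]
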


# H5FD_t of file-like object
ctypedef struct H5FD_fileobj_t:
    H5FD_t base  # must be first
    PyObject* fileobj
    haddr_t eoa


# A minimal subset of callbacks is implemented. Non-essential
# parameters (dxpl, type) are ignored.

from cpython cimport Py_INCREF, Py_DECREF
from libc.stdlib cimport malloc as stdlib_malloc
from libc.stdlib cimport free as stdlib_free
cimport libc.stdio
cimport libc.stdint


cdef void *H5FD_fileobj_fapl_get(H5FD_fileobj_t *f) with gil:
    Py_INCREF(<object>f.fileobj)
    return f.fileobj

cdef void *H5FD_fileobj_fapl_copy(PyObject *old_fa) with gil:
    cdef PyObject *new_fa = old_fa
    Py_INCREF(<object>new_fa)
    return new_fa

cdef herr_t H5FD_fileobj_fapl_free(PyObject *fa) except -1 with gil:
    Py_DECREF(fa)
    return 0

cdef H5FD_fileobj_t *H5FD_fileobj_open(const char *name, unsigned flags, hid_t fapl, haddr_t maxaddr) except * with gil:
    cdef PyObject *fileobj = H5Pget_driver_info(fapl)
    f = stdlib_malloc(sizeof(H5FD_fileobj_t))
    f.fileobj = fileobj
    Py_INCREF(f.fileobj)
    f.eoa = 0
    return f

cdef herr_t H5FD_fileobj_close(H5FD_fileobj_t *f) except -1 with gil:
    Py_DECREF(f.fileobj)
    stdlib_free(f)
    return 0

cdef haddr_t H5FD_fileobj_get_eoa(const H5FD_fileobj_t *f, H5FD_mem_t type) noexcept nogil:
    return f.eoa

cdef herr_t H5FD_fileobj_set_eoa(H5FD_fileobj_t *f, H5FD_mem_t type, haddr_t addr) noexcept nogil:
    f.eoa = addr
    return 0

cdef haddr_t H5FD_fileobj_get_eof(const H5FD_fileobj_t *f, H5FD_mem_t type) except -1 with gil:  # HADDR_UNDEF
    (<object>f.fileobj).seek(0, libc.stdio.SEEK_END)
    return (<object>f.fileobj).tell()

cdef herr_t H5FD_fileobj_read(H5FD_fileobj_t *f, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, void *buf) except -1 with gil:
    cdef unsigned char[:] mview
    (<object>f.fileobj).seek(addr)
    if hasattr(<object>f.fileobj, 'readinto'):
        mview = <unsigned char[:size]>(buf)
        (<object>f.fileobj).readinto(mview)
    else:
        b = (<object>f.fileobj).read(size)
        if len(b) == size:
            memcpy(buf, <char*>b, size)
        else:
            return 1
    return 0

cdef herr_t H5FD_fileobj_write(H5FD_fileobj_t *f, H5FD_mem_t type, hid_t dxpl, haddr_t addr, size_t size, void *buf) except -1 with gil:
    cdef unsigned char[:] mview
    (<object>f.fileobj).seek(addr)
    mview = <unsigned char[:size]>buf
    (<object>f.fileobj).write(mview)
    return 0

cdef herr_t H5FD_fileobj_truncate(H5FD_fileobj_t *f, hid_t dxpl, hbool_t closing) except -1 with gil:
    (<object>f.fileobj).truncate(f.eoa)
    return 0

cdef herr_t H5FD_fileobj_flush(H5FD_fileobj_t *f, hid_t dxpl, hbool_t closing) except -1 with gil:
    # TODO: avoid unneeded fileobj.flush() when closing for e.g. TemporaryFile
    (<object>f.fileobj).flush()
    return 0


# Construct H5FD_class_t struct and register 'fileobj' driver.

cdef H5FD_class_t info
memset(&info, 0, sizeof(info))

# Cython doesn't support "except X" in casting definition currently
ctypedef herr_t (*file_free_func_ptr)(void *) except -1

ctypedef herr_t (*file_close_func_ptr)(H5FD_t *) except -1
ctypedef haddr_t (*file_get_eoa_func_ptr)(const H5FD_t *, H5FD_mem_t) noexcept
ctypedef herr_t (*file_set_eof_func_ptr)(H5FD_t *, H5FD_mem_t, haddr_t) noexcept
ctypedef haddr_t (*file_get_eof_func_ptr)(const H5FD_t *, H5FD_mem_t) except -1
ctypedef herr_t (*file_read_func_ptr)(H5FD_t *, H5FD_mem_t, hid_t, haddr_t, size_t, void*) except -1
ctypedef herr_t (*file_write_func_ptr)(H5FD_t *, H5FD_mem_t, hid_t, haddr_t, size_t, const void*) except -1
ctypedef herr_t (*file_truncate_func_ptr)(H5FD_t *, hid_t, hbool_t) except -1
ctypedef herr_t (*file_flush_func_ptr)(H5FD_t *, hid_t, hbool_t) except -1


info.name = 'fileobj'
info.maxaddr = libc.stdint.SIZE_MAX - 1
info.fc_degree = H5F_CLOSE_WEAK
info.fapl_size = sizeof(PyObject *)
info.fapl_get = <void *(*)(H5FD_t *)>H5FD_fileobj_fapl_get
info.fapl_copy = <void *(*)(void *)>H5FD_fileobj_fapl_copy

info.fapl_free = <file_free_func_ptr>H5FD_fileobj_fapl_free

info.open = <H5FD_t *(*)(const char *, unsigned, hid_t, haddr_t)>H5FD_fileobj_open

info.close = <file_close_func_ptr>H5FD_fileobj_close
info.get_eoa = <file_get_eoa_func_ptr>H5FD_fileobj_get_eoa
info.set_eoa = <file_set_eof_func_ptr>H5FD_fileobj_set_eoa
info.get_eof = <file_get_eof_func_ptr>H5FD_fileobj_get_eof
info.read = <file_read_func_ptr>H5FD_fileobj_read
info.write = <file_write_func_ptr>H5FD_fileobj_write
info.truncate = <file_truncate_func_ptr>H5FD_fileobj_truncate
info.flush = <file_flush_func_ptr>H5FD_fileobj_flush
# H5FD_FLMAP_DICHOTOMY
info.fl_map = [H5FD_MEM_SUPER,  # default
               H5FD_MEM_SUPER,  # super
               H5FD_MEM_SUPER,  # btree
               H5FD_MEM_DRAW,   # draw
               H5FD_MEM_DRAW,   # gheap
               H5FD_MEM_SUPER,  # lheap
               H5FD_MEM_SUPER   # ohdr
	       ]
IF HDF5_VERSION >= (1, 14, 0):
    info.version = H5FD_CLASS_VERSION

fileobj_driver = H5FDregister(&info)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5g.pxd0000644000175000017500000000063414045746670015466 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cdef class GroupID(ObjectID):

    cdef readonly object links
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5g.pyx0000644000175000017500000003530014675110407015501 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Low-level HDF5 "H5G" group interface.
"""

include "config.pxi"

import sys

# C-level imports
from ._objects cimport pdefault
from .utils cimport emalloc, efree
from .h5p import CRT_ORDER_TRACKED
from .h5p cimport PropID, PropGCID, propwrap
from . cimport _hdf5 # to implement container testing for 1.6
from ._errors cimport set_error_handler, err_cookie

# Python level imports
from ._objects import phil, with_phil


_IS_WINDOWS = sys.platform.startswith("win")

# === Public constants and data structures ====================================

# Enumerated object types for groups "H5G_obj_t"
UNKNOWN  = H5G_UNKNOWN
LINK     = H5G_LINK
GROUP    = H5G_GROUP
DATASET  = H5G_DATASET
TYPE = H5G_TYPE

# Enumerated link types "H5G_link_t"
LINK_ERROR = H5G_LINK_ERROR
LINK_HARD  = H5G_LINK_HARD
LINK_SOFT  = H5G_LINK_SOFT

cdef class GroupStat:
    """Represents the H5G_stat_t structure containing group member info.

    Fields (read-only):

    * fileno:   2-tuple uniquely identifying the current file
    * objno:    2-tuple uniquely identifying this object
    * nlink:    Number of hard links to this object
    * mtime:    Modification time of this object
    * linklen:  Length of the symbolic link name, or 0 if not a link.

    "Uniquely identifying" means unique among currently open files,
    not universally unique.

    * Hashable: Yes
    * Equality: Yes
    """
    cdef H5G_stat_t infostruct

    @property
    def fileno(self):
        return (self.infostruct.fileno[0], self.infostruct.fileno[1])

    @property
    def objno(self):
        return (self.infostruct.objno[0], self.infostruct.objno[1])

    @property
    def nlink(self):
        return self.infostruct.nlink

    @property
    def type(self):
        return self.infostruct.type

    @property
    def mtime(self):
        return self.infostruct.mtime

    @property
    def linklen(self):
        return self.infostruct.linklen

    def _hash(self):
        return hash((self.fileno, self.objno, self.nlink, self.type, self.mtime, self.linklen))


cdef class GroupIter:

    """
        Iterator over the names of group members.  After this iterator is
        exhausted, it releases its reference to the group ID.
    """

    cdef unsigned long idx
    cdef unsigned long nobjs
    cdef GroupID grp
    cdef list names
    cdef bint reversed


    def __init__(self, GroupID grp not None, bint reversed=False):

        self.idx = 0
        self.grp = grp
        self.nobjs = grp.get_num_objs()
        self.names = []
        self.reversed = reversed


    def __iter__(self):

        return self


    def __next__(self):

        if self.idx == self.nobjs:
            self.grp = None
            self.names = None
            raise StopIteration

        if self.idx == 0:
            cpl = self.grp.get_create_plist()
            crt_order = cpl.get_link_creation_order()
            cpl.close()
            if crt_order & CRT_ORDER_TRACKED:
                idx_type = H5_INDEX_CRT_ORDER
            else:
                idx_type = H5_INDEX_NAME

            self.grp.links.iterate(self.names.append,
                                   idx_type=idx_type)
            if self.reversed:
                self.names.reverse()

        retval = self.names[self.idx]
        self.idx += 1

        return retval


# === Basic group management ==================================================

@with_phil
def open(ObjectID loc not None, char* name):
    """(ObjectID loc, STRING name) => GroupID

    Open an existing HDF5 group, attached to some other group.
    """
    return GroupID(H5Gopen(loc.id, name, H5P_DEFAULT))


@with_phil
def create(ObjectID loc not None, object name, PropID lcpl=None,
           PropID gcpl=None):
    """(ObjectID loc, STRING name or None, PropLCID lcpl=None,
        PropGCID gcpl=None)
    => GroupID

    Create a new group, under a given parent group.  If name is None,
    an anonymous group will be created in the file.
    """
    cdef hid_t gid
    cdef char* cname = NULL
    if name is not None:
        cname = name

    if cname != NULL:
        gid = H5Gcreate(loc.id, cname, pdefault(lcpl), pdefault(gcpl), H5P_DEFAULT)
    else:
        gid = H5Gcreate_anon(loc.id, pdefault(gcpl), H5P_DEFAULT)

    return GroupID(gid)


cdef class _GroupVisitor:

    cdef object func
    cdef object retval

    def __init__(self, func):
        self.func = func
        self.retval = None

cdef herr_t cb_group_iter(hid_t gid, char *name, void* vis_in) except 2:

    cdef _GroupVisitor vis = <_GroupVisitor>vis_in

    vis.retval = vis.func(name)

    if vis.retval is not None:
        return 1
    return 0


@with_phil
def iterate(GroupID loc not None, object func, int startidx=0, *,
            char* obj_name='.'):
    """ (GroupID loc, CALLABLE func, UINT startidx=0, **kwds)
    => Return value from func

    Iterate a callable (function, method or callable object) over the
    members of a group.  Your callable should have the signature::

        func(STRING name) => Result

    Returning None continues iteration; returning anything else aborts
    iteration and returns that value. Keywords:

    STRING obj_name (".")
        Iterate over this subgroup instead
    """
    if startidx < 0:
        raise ValueError("Starting index must be non-negative")

    cdef int i = startidx
    cdef _GroupVisitor vis = _GroupVisitor(func)

    H5Giterate(loc.id, obj_name, &i, cb_group_iter, <void*>vis)

    return vis.retval
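
# Example (illustrative sketch, assuming an existing file 'example.h5'):
# collect all member names of the root group.  The callable returns None,
# so iteration runs to completion and the names (bytes) accumulate.
#
#     from h5py import h5f, h5g
#
#     fid = h5f.open(b"example.h5", h5f.ACC_RDONLY)
#     gid = h5g.open(fid, b"/")
#     names = []
#     h5g.iterate(gid, lambda name: names.append(name))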


@with_phil
def get_objinfo(ObjectID obj not None, object name=b'.', int follow_link=1):
    """(ObjectID obj, STRING name='.', BOOL follow_link=True) => GroupStat object

    Obtain information about a named object.  If "name" is provided,
    "obj" is taken to be a GroupID object containing the target.
    The return value is a GroupStat object; see that class's docstring
    for a description of its attributes.

    If follow_link is True (default) and the object is a symbolic link,
    the information returned describes its target.  Otherwise the
    information describes the link itself.
    """
    cdef GroupStat statobj
    statobj = GroupStat()
    cdef char* _name
    _name = name

    H5Gget_objinfo(obj.id, _name, follow_link, &statobj.infostruct)

    return statobj
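
# Example (illustrative sketch, assuming 'gid' is a GroupID whose group has a
# member named b'data'):
#
#     st = get_objinfo(gid, b"data")
#     st.objno    # 2-tuple identifying the object
#     st.nlink    # number of hard links to it
#     st.mtime    # modification time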


# === Group member management =================================================

cdef class GroupID(ObjectID):

    """
        Represents an HDF5 group identifier

        Python extensions:

        __contains__
            Test for group member ("if name in grpid")

        __iter__
            Get an iterator over member names

        __len__
            Number of members in this group; len(grpid) = N

        The attribute "links" contains a proxy object
        providing access to the H5L family of routines.  See the docs
        for h5py.h5l.LinkProxy for more information.

        * Hashable: Yes, unless anonymous
        * Equality: True HDF5 identity unless anonymous
    """

    def __init__(self, hid_t id_):
        with phil:
            from . import h5l
            self.links = h5l.LinkProxy(id_)


    @with_phil
    def link(self, char* current_name, char* new_name,
             int link_type=H5G_LINK_HARD, GroupID remote=None):
        """(STRING current_name, STRING new_name, INT link_type=LINK_HARD,
        GroupID remote=None)

        Create a new hard or soft link.  current_name identifies
        the link target (object the link will point to).  The new link is
        identified by new_name and (optionally) another group "remote".

        Link types are:

        LINK_HARD
            Hard link to existing object (default)

        LINK_SOFT
            Symbolic link; link target need not exist.
        """
        cdef hid_t remote_id
        if remote is None:
            remote_id = self.id
        else:
            remote_id = remote.id

        H5Glink2(self.id, current_name, link_type, remote_id, new_name)


    @with_phil
    def unlink(self, char* name):
        """(STRING name)

        Remove a link to an object from this group.
        """
        H5Gunlink(self.id, name)


    @with_phil
    def move(self, char* current_name, char* new_name, GroupID remote=None):
        """(STRING current_name, STRING new_name, GroupID remote=None)

        Relink an object.  current_name identifies the object.
        new_name and (optionally) another group "remote" determine
        where it should be moved.
        """
        cdef hid_t remote_id
        if remote is None:
            remote_id = self.id
        else:
            remote_id = remote.id

        H5Gmove2(self.id, current_name, remote_id, new_name)


    @with_phil
    def get_num_objs(self):
        """() => INT number_of_objects

        Get the number of objects directly attached to a given group.
        """
        cdef hsize_t size
        H5Gget_num_objs(self.id, &size)
        return size


    @with_phil
    def get_objname_by_idx(self, hsize_t idx):
        """(INT idx) => STRING

        Get the name of a group member given its zero-based index.
        """
        cdef int size
        cdef char* buf
        buf = NULL

        size = H5Gget_objname_by_idx(self.id, idx, NULL, 0)

        buf = <char*>emalloc(sizeof(char)*(size+1))
        try:
            H5Gget_objname_by_idx(self.id, idx, buf, size+1)
            pystring = buf
            return pystring
        finally:
            efree(buf)


    @with_phil
    def get_objtype_by_idx(self, hsize_t idx):
        """(INT idx) => INT object_type_code

        Get the type of an object attached to a group, given its zero-based
        index.  Possible return values are:

        - LINK
        - GROUP
        - DATASET
        - TYPE
        """
        return H5Gget_objtype_by_idx(self.id, idx)


    @with_phil
    def get_linkval(self, char* name):
        """(STRING name) => STRING link_value

        Retrieve the value (target name) of a symbolic link.
        Limited to 2048 characters on Windows.
        """
        cdef char* value
        cdef H5G_stat_t statbuf
        value = NULL

        H5Gget_objinfo(self.id, name, 0, &statbuf)

        if statbuf.type != H5G_LINK:
            raise ValueError('"%s" is not a symbolic link.' % name)

        if _IS_WINDOWS:
            linklen = 2049  # Windows statbuf.linklen seems broken
        else:
            linklen = statbuf.linklen+1
        value = <char*>emalloc(sizeof(char)*linklen)
        try:
            H5Gget_linkval(self.id, name, linklen, value)
            value[linklen-1] = c'\0'  # in case HDF5 doesn't null terminate on Windows
            pyvalue = value
            return pyvalue
        finally:
            efree(value)


    @with_phil
    def get_create_plist(self):
        """() => PropGCID

        Retrieve a copy of the group creation property list used to
        create this group.
        """
        return propwrap(H5Gget_create_plist(self.id))


    @with_phil
    def set_comment(self, char* name, char* comment):
        """(STRING name, STRING comment)

        Set the comment on a group member.
        """
        H5Gset_comment(self.id, name, comment)


    @with_phil
    def get_comment(self, char* name):
        """(STRING name) => STRING comment

        Retrieve the comment for a group member.
        """
        cdef int cmnt_len
        cdef char* cmnt
        cmnt = NULL

        cmnt_len = H5Gget_comment(self.id, name, 0, NULL)
        assert cmnt_len >= 0

        cmnt = <char*>emalloc(sizeof(char)*(cmnt_len+1))
        try:
            H5Gget_comment(self.id, name, cmnt_len+1, cmnt)
            py_cmnt = cmnt
            return py_cmnt
        finally:
            efree(cmnt)


    # === Special methods =====================================================

    def __contains__(self, name):
        """(STRING name)

        Determine if a group member of the given name is present
        """
        cdef err_cookie old_handler
        cdef err_cookie new_handler
        cdef herr_t retval

        new_handler.func = NULL
        new_handler.data = NULL

        if not self:
            return False

        with phil:
            return _path_valid(self, name)

    def __iter__(self):
        """ Return an iterator over the names of group members. """
        with phil:
            return GroupIter(self)

    def __reversed__(self):
        """ Return an iterator over group member names in reverse order. """
        with phil:
            return GroupIter(self, reversed=True)

    def __len__(self):
        """ Number of group members """
        cdef hsize_t size
        with phil:
            H5Gget_num_objs(self.id, &size)
            return size


@with_phil
def _path_valid(GroupID grp not None, object path not None, PropID lapl=None):
    """ Determine if *path* points to an object in the file.

    If *path* represents an external or soft link, the link's validity is not
    checked.
    """
    from . import h5o

    if isinstance(path, bytes):
        path = path.decode('utf-8')
    else:
        path = unicode(path)

    # Empty names are not allowed by HDF5
    if len(path) == 0:
        return False

    # Note: we cannot use pp.normpath as it resolves ".." components,
    # which don't exist in HDF5

    path_parts = path.split('/')

    # Absolute path (started with slash)
    if path_parts[0] == '':
        current_loc = h5o.open(grp, b'/', lapl=lapl)
    else:
        current_loc = grp

    # HDF5 ignores duplicate or trailing slashes
    path_parts = [x for x in path_parts if x != '']

    # Special case: path was entirely composed of slashes!
    if len(path_parts) == 0:
        path_parts = ['.']  # i.e. the root group

    path_parts = [x.encode('utf-8') for x in path_parts]
    nparts = len(path_parts)

    for idx, p in enumerate(path_parts):

        # Special case; '.' always refers to the present group
        if p == b'.':
            continue

        # Is there any kind of link by that name in this group?
        if not current_loc.links.exists(p, lapl=lapl):
            return False

        # If we're at the last link in the chain, we're done.
        # We don't check to see if the last part points to a valid object;
        # it's enough that it exists.
        if idx == nparts - 1:
            return True

        # Otherwise, does the link point to a real object?
        if not h5o.exists_by_name(current_loc, p, lapl=lapl):
            return False

        # Is that object a group?
        next_loc = h5o.open(current_loc, p, lapl=lapl)
        info = h5o.get_info(next_loc)
        if info.type != H5O_TYPE_GROUP:
            return False

        # Go into that group
        current_loc = next_loc

    return True
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5i.pxd0000644000175000017500000000061214045746670015464 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cpdef ObjectID wrap_identifier(hid_t ident)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5i.pyx0000644000175000017500000000776314675110407015517 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

include "config.pxi"

"""
    Identifier interface for object inspection.
"""

from ._objects import phil, with_phil


# === Public constants and data structures ====================================

INVALID_HID = H5I_INVALID_HID
BADID       = H5I_BADID
FILE        = H5I_FILE
GROUP       = H5I_GROUP
DATASPACE   = H5I_DATASPACE
DATASET     = H5I_DATASET
ATTR        = H5I_ATTR
IF HDF5_VERSION < VOL_MIN_HDF5_VERSION:
  REFERENCE   = H5I_REFERENCE
GENPROP_CLS = H5I_GENPROP_CLS
GENPROP_LST = H5I_GENPROP_LST
DATATYPE    = H5I_DATATYPE

cpdef ObjectID wrap_identifier(hid_t ident):

    cdef H5I_type_t typecode
    cdef ObjectID obj

    typecode = H5Iget_type(ident)
    if typecode == H5I_FILE:
        from . import h5f
        obj = h5f.FileID(ident)
    elif typecode == H5I_DATASET:
        from . import h5d
        obj = h5d.DatasetID(ident)
    elif typecode == H5I_GROUP:
        from . import h5g
        obj = h5g.GroupID(ident)
    elif typecode == H5I_ATTR:
        from . import h5a
        obj = h5a.AttrID(ident)
    elif typecode == H5I_DATATYPE:
        from . import h5t
        obj = h5t.typewrap(ident)
    elif typecode == H5I_GENPROP_LST:
        from . import h5p
        obj = h5p.propwrap(ident)
    else:
        raise ValueError("Unrecognized type code %d" % typecode)

    return obj


# === Identifier API ==========================================================

@with_phil
def get_type(ObjectID obj not None):
    """ (ObjectID obj) => INT type_code

        Determine the HDF5 typecode of an arbitrary HDF5 object.  The return
        value is always one of the type constants defined in this module; if
        the ID is invalid, BADID is returned.
    """
    return H5Iget_type(obj.id)


@with_phil
def get_name(ObjectID obj not None):
    """ (ObjectID obj) => STRING name, or None

        Determine (a) name of an HDF5 object.  Because an object has as many
        names as there are hard links to it, this may not be unique.

        If the identifier is invalid or is not associated with a name
        (in the case of transient datatypes, dataspaces, etc), returns None.

        For some reason, this does not work on dereferenced objects.
    """
    cdef int namelen
    cdef char* name

    try:
        namelen = H5Iget_name(obj.id, NULL, 0)
    except Exception:
        return None

    if namelen == 0:    # 1.6.5 doesn't raise an exception
        return None

    assert namelen > 0
    name = <char*>malloc(sizeof(char)*(namelen+1))
    try:
        H5Iget_name(obj.id, name, namelen+1)
        pystring = name
        return pystring
    finally:
        free(name)


@with_phil
def get_file_id(ObjectID obj not None):
    """ (ObjectID obj) => FileID

        Obtain an identifier for the file in which this object resides.
    """
    from . import h5f
    cdef hid_t fid
    fid = H5Iget_file_id(obj.id)
    return h5f.FileID(fid)


@with_phil
def inc_ref(ObjectID obj not None):
    """ (ObjectID obj)

        Increment the reference count for the given object.

        This function is provided for debugging only.  Reference counting
        is automatically synchronized with Python, and you can easily break
        ObjectID instances by abusing this function.
    """
    H5Iinc_ref(obj.id)


@with_phil
def get_ref(ObjectID obj not None):
    """ (ObjectID obj) => INT

        Retrieve the reference count for the given object.
    """
    return H5Iget_ref(obj.id)


@with_phil
def dec_ref(ObjectID obj not None):
    """ (ObjectID obj)

        Decrement the reference count for the given object.

        This function is provided for debugging only.  Reference counting
        is automatically synchronized with Python, and you can easily break
        ObjectID instances by abusing this function.
    """
    H5Idec_ref(obj.id)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5l.pxd0000644000175000017500000000054614045746670015475 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

cdef class LinkProxy:

    cdef hid_t id
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5l.pyx0000644000175000017500000002332614675110407015513 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    API for the "H5L" family of link-related operations.  Defines the class
    LinkProxy, which comes attached to GroupID objects as .links.
"""

from ._objects cimport pdefault
from .h5p cimport PropID
from .h5g cimport GroupID
from .utils cimport emalloc, efree

from ._objects import phil, with_phil


# === Public constants ========================================================

TYPE_HARD = H5L_TYPE_HARD
TYPE_SOFT = H5L_TYPE_SOFT
TYPE_EXTERNAL = H5L_TYPE_EXTERNAL

cdef class LinkInfo:

    cdef H5L_info_t infostruct

    @property
    def type(self):
        """ Integer type code for link (h5l.TYPE_*) """
        return self.infostruct.type

    @property
    def corder_valid(self):
        """ Indicates if the creation order is valid """
        return self.infostruct.corder_valid

    @property
    def corder(self):
        """ Creation order """
        return self.infostruct.corder

    @property
    def cset(self):
        """ Integer type code for character set (h5t.CSET_*) """
        return self.infostruct.cset

    @property
    def u(self):
        """ Either the address of a hard link or the size of a soft/UD link """
        if self.infostruct.type == H5L_TYPE_HARD:
            return self.infostruct.u.address
        else:
            return self.infostruct.u.val_size

cdef class _LinkVisitor:

    """ Helper class for iteration callback """

    cdef object func
    cdef object retval
    cdef LinkInfo info

    def __init__(self, func):
        self.func = func
        self.retval = None
        self.info = LinkInfo()

cdef herr_t cb_link_iterate(hid_t grp, const char* name, const H5L_info_t *istruct, void* data) except 2 with gil:
    # Standard iteration callback for iterate/visit routines

    cdef _LinkVisitor it = <_LinkVisitor?>data
    it.info.infostruct = istruct[0]
    it.retval = it.func(name, it.info)
    if (it.retval is None) or (not it.retval):
        return 0
    return 1

cdef herr_t cb_link_simple(hid_t grp, const char* name, const H5L_info_t *istruct, void* data) except 2 with gil:
    # Simplified iteration callback which only provides the name

    cdef _LinkVisitor it = <_LinkVisitor?>data
    it.retval = it.func(name)
    if (it.retval is None) or (not it.retval):
        return 0
    return 1


cdef class LinkProxy:

    """
        Proxy class which provides access to the HDF5 "H5L" API.

        These come attached to GroupID objects as "obj.links".  Since every
        H5L function operates on at least one group, the methods provided
        operate on their parent group identifier.  For example::

            >>> g = h5g.open(fid, '/')
            >>> g.links.exists("MyGroup")
            True
            >>> g.links.exists("FooBar")
            False

        * Hashable: No
        * Equality: Undefined

        You will note that this class does *not* inherit from ObjectID.
    """

    def __init__(self, hid_t id_):

        # The identifier in question is the hid_t for the parent GroupID.
        # We "borrow" this reference.
        self.id = id_

    def __richcmp__(self, object other, int how):
        return NotImplemented

    def __hash__(self):
        raise TypeError("Link proxies are unhashable; use the parent group instead.")


    @with_phil
    def create_hard(self, char* new_name, GroupID cur_loc not None,
        char* cur_name, PropID lcpl=None, PropID lapl=None):
        """ (STRING new_name, GroupID cur_loc, STRING cur_name,
        PropID lcpl=None, PropID lapl=None)

        Create a new hard link in this group pointing to an existing link
        in another group.
        """
        H5Lcreate_hard(cur_loc.id, cur_name, self.id, new_name,
            pdefault(lcpl), pdefault(lapl))


    @with_phil
    def create_soft(self, char* new_name, char* target,
        PropID lcpl=None, PropID lapl=None):
        """(STRING new_name, STRING target, PropID lcpl=None, PropID lapl=None)

        Create a new soft link in this group, with the given string value.
        The link target does not need to exist.
        """
        H5Lcreate_soft(target, self.id, new_name,
            pdefault(lcpl), pdefault(lapl))


    @with_phil
    def create_external(self, char* link_name, char* file_name, char* obj_name,
        PropID lcpl=None, PropID lapl=None):
        """(STRING link_name, STRING file_name, STRING obj_name,
        PropLCID lcpl=None, PropLAID lapl=None)

        Create a new external link, pointing to an object in another file.
        """
        H5Lcreate_external(file_name, obj_name, self.id, link_name,
            pdefault(lcpl), pdefault(lapl))


    @with_phil
    def get_val(self, char* name, PropID lapl=None):
        """(STRING name, PropLAID lapl=None) => STRING or TUPLE(file, obj)

        Get the string value of a soft link, or a 2-tuple representing
        the contents of an external link.
        """
        cdef hid_t plist = pdefault(lapl)
        cdef H5L_info_t info
        cdef size_t buf_size
        cdef char* buf = NULL
        cdef char* ext_file_name = NULL
        cdef char* ext_obj_name = NULL
        cdef unsigned int wtf = 0

        H5Lget_info(self.id, name, &info, plist)
        if info.type != H5L_TYPE_SOFT and info.type != H5L_TYPE_EXTERNAL:
            raise TypeError("Link must be either a soft or external link")

        buf_size = info.u.val_size
        buf = <char*>emalloc(buf_size)
        try:
            H5Lget_val(self.id, name, buf, buf_size, plist)
            if info.type == H5L_TYPE_SOFT:
                py_retval = buf
            else:
                H5Lunpack_elink_val(buf, buf_size, &wtf, &ext_file_name, &ext_obj_name)
                py_retval = (bytes(ext_file_name), bytes(ext_obj_name))
        finally:
            efree(buf)

        return py_retval
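
    # Example (illustrative sketch; the link names are hypothetical):
    #
    #     gid.links.get_val(b"my_soft_link")
    #     # => b'/path/to/target'
    #     gid.links.get_val(b"my_external_link")
    #     # => (b'other_file.h5', b'/path/in/other/file')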


    @with_phil
    def move(self, char* src_name, GroupID dst_loc not None, char* dst_name,
        PropID lcpl=None, PropID lapl=None):
        """ (STRING src_name, GroupID dst_loc, STRING dst_name)

        Move a link to a new location in the file.
        """
        H5Lmove(self.id, src_name, dst_loc.id, dst_name, pdefault(lcpl),
                pdefault(lapl))


    @with_phil
    def exists(self, char* name, PropID lapl=None):
        """ (STRING name, PropID lapl=None) => BOOL

            Check if a link of the specified name exists in this group.
        """
        return <bint>(H5Lexists(self.id, name, pdefault(lapl)))


    @with_phil
    def get_info(self, char* name, int index=-1, *, PropID lapl=None):
        """(STRING name=, INT index=, **kwds) => LinkInfo instance

        Get information about a link, either by name or its index.

        Keywords:
        """
        cdef LinkInfo info = LinkInfo()
        H5Lget_info(self.id, name, &info.infostruct, pdefault(lapl))
        return info


    @with_phil
    def visit(self, object func, *,
              int idx_type=H5_INDEX_NAME, int order=H5_ITER_INC,
              char* obj_name='.', PropID lapl=None, bint info=0):
        """(CALLABLE func, **kwds) => 

        Iterate a function or callable object over all groups below this
        one.  Your callable should conform to the signature::

            func(STRING name) => Result

        or if the keyword argument "info" is True::

            func(STRING name, LinkInfo info) => Result

        Returning None or a logical False continues iteration; returning
        anything else aborts iteration and returns that value.

        BOOL info (False)
            Provide a LinkInfo instance to callback

        STRING obj_name (".")
            Visit this subgroup instead

        PropLAID lapl (None)
            Link access property list for "obj_name"

        INT idx_type (h5.INDEX_NAME)

        INT order (h5.ITER_INC)
        """
        cdef _LinkVisitor it = _LinkVisitor(func)
        cdef H5L_iterate_t cfunc

        if info:
            cfunc = cb_link_iterate
        else:
            cfunc = cb_link_simple

        H5Lvisit_by_name(self.id, obj_name, <H5_index_t>idx_type,
            <H5_iter_order_t>order, cfunc, <void*>it, pdefault(lapl))

        return it.retval
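
    # Example (illustrative sketch, assuming 'gid' is a GroupID): record the
    # name of every link below this group.  Returning None from the callback
    # keeps the traversal going.
    #
    #     seen = []
    #     gid.links.visit(lambda name: seen.append(name))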


    @with_phil
    def iterate(self, object func, *,
              int idx_type=H5_INDEX_NAME, int order=H5_ITER_INC,
              char* obj_name='.', PropID lapl=None, bint info=0,
              hsize_t idx=0):
        """(CALLABLE func, **kwds) => , 

        Iterate a function or callable object over all groups in this
        one.  Your callable should conform to the signature::

            func(STRING name) => Result

        or if the keyword argument "info" is True::

            func(STRING name, LinkInfo info) => Result

        Returning None or a logical False continues iteration; returning
        anything else aborts iteration and returns that value.

        BOOL info (False)
            Provide a LinkInfo instance to callback

        STRING obj_name (".")
            Visit this subgroup instead

        PropLAID lapl (None)
            Link access property list for "obj_name"

        INT idx_type (h5.INDEX_NAME)

        INT order (h5.ITER_INC)

        hsize_t idx (0)
            The index to start iterating at
        """
        cdef _LinkVisitor it = _LinkVisitor(func)
        cdef H5L_iterate_t cfunc

        if info:
            cfunc = cb_link_iterate
        else:
            cfunc = cb_link_simple

        H5Literate_by_name(self.id, obj_name, <H5_index_t>idx_type,
            <H5_iter_order_t>order, &idx,
            cfunc, <void*>it, pdefault(lapl))

        return it.retval, idx
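
    # Example (illustrative sketch, assuming 'gid' is a GroupID): stop at the
    # first link whose callback returns a true value, then resume iteration
    # from the returned index.
    #
    #     def first_data_link(name):
    #         return name if name.startswith(b"data") else None
    #
    #     retval, idx = gid.links.iterate(first_data_link)
    #     retval2, idx2 = gid.links.iterate(first_data_link, idx=idx)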
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5o.pxd0000644000175000017500000000047414045746670015500 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5o.pyx0000644000175000017500000002643614675110407015523 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Module for HDF5 "H5O" functions.
"""

include 'config.pxi'

# Cython C-level imports
from ._objects cimport ObjectID, pdefault
from .h5g cimport GroupID
from .h5i cimport wrap_identifier
from .h5p cimport PropID
from .utils cimport emalloc, efree
cimport cython
# Python level imports:
from ._objects import phil, with_phil


# === Public constants ========================================================

TYPE_GROUP = H5O_TYPE_GROUP
TYPE_DATASET = H5O_TYPE_DATASET
TYPE_NAMED_DATATYPE = H5O_TYPE_NAMED_DATATYPE

COPY_SHALLOW_HIERARCHY_FLAG = H5O_COPY_SHALLOW_HIERARCHY_FLAG
COPY_EXPAND_SOFT_LINK_FLAG  = H5O_COPY_EXPAND_SOFT_LINK_FLAG
COPY_EXPAND_EXT_LINK_FLAG   = H5O_COPY_EXPAND_EXT_LINK_FLAG
COPY_EXPAND_REFERENCE_FLAG  = H5O_COPY_EXPAND_REFERENCE_FLAG
COPY_WITHOUT_ATTR_FLAG      = H5O_COPY_WITHOUT_ATTR_FLAG
COPY_PRESERVE_NULL_FLAG     = H5O_COPY_PRESERVE_NULL_FLAG

# === Giant H5O_info_t structure ==============================================

cdef class _ObjInfoBase:

    cdef H5O_info_t *istr

cdef class _OHdrMesg(_ObjInfoBase):

    @property
    def present(self):
        return self.istr[0].hdr.mesg.present

    @property
    def shared(self):
        return self.istr[0].hdr.mesg.shared

    def _hash(self):
        return hash((self.present, self.shared))

cdef class _OHdrSpace(_ObjInfoBase):

    @property
    def total(self):
        return self.istr[0].hdr.space.total

    @property
    def meta(self):
        return self.istr[0].hdr.space.meta

    @property
    def mesg(self):
        return self.istr[0].hdr.space.mesg

    @property
    def free(self):
        return self.istr[0].hdr.space.free

    def _hash(self):
        return hash((self.total, self.meta, self.mesg, self.free))

cdef class _OHdr(_ObjInfoBase):

    cdef public _OHdrSpace space
    cdef public _OHdrMesg mesg

    @property
    def version(self):
        return self.istr[0].hdr.version

    @property
    def nmesgs(self):
        return self.istr[0].hdr.nmesgs

    @property
    def nchunks(self):
        return self.istr[0].hdr.nchunks

    @property
    def flags(self):
        return self.istr[0].hdr.flags

    def __init__(self):
        self.space = _OHdrSpace()
        self.mesg = _OHdrMesg()

    def _hash(self):
        return hash((self.version, self.nmesgs, self.nchunks, self.flags, self.space, self.mesg))

cdef class _ObjMetaInfo:

    cdef H5_ih_info_t *istr

    @property
    def index_size(self):
        return self.istr[0].index_size

    @property
    def heap_size(self):
        return self.istr[0].heap_size

    def _hash(self):
        return hash((self.index_size, self.heap_size))

cdef class _OMetaSize(_ObjInfoBase):

    cdef public _ObjMetaInfo obj
    cdef public _ObjMetaInfo attr

    def __init__(self):
        self.obj = _ObjMetaInfo()
        self.attr = _ObjMetaInfo()

    def _hash(self):
        return hash((self.obj, self.attr))

cdef class _ObjInfo(_ObjInfoBase):

    @property
    def fileno(self):
        return self.istr[0].fileno

    @property
    def addr(self):
        return self.istr[0].addr

    @property
    def type(self):
        return self.istr[0].type

    @property
    def rc(self):
        return self.istr[0].rc

    @property
    def atime(self):
        return self.istr[0].atime

    @property
    def mtime(self):
        return self.istr[0].mtime

    @property
    def ctime(self):
        return self.istr[0].ctime

    @property
    def btime(self):
        return self.istr[0].btime

    @property
    def num_attrs(self):
        return self.istr[0].num_attrs

    def _hash(self):
        return hash((self.fileno, self.addr, self.type, self.rc, self.atime, self.mtime, self.ctime, self.btime, self.num_attrs))

cdef class ObjInfo(_ObjInfo):

    """
        Represents the H5O_info_t structure
    """

    cdef H5O_info_t infostruct
    cdef public _OHdr hdr
    cdef public _OMetaSize meta_size

    def __init__(self):
        self.hdr = _OHdr()
        self.meta_size = _OMetaSize()

        self.istr = &self.infostruct
        self.hdr.istr = &self.infostruct
        self.hdr.space.istr = &self.infostruct
        self.hdr.mesg.istr = &self.infostruct
        self.meta_size.istr = &self.infostruct
        self.meta_size.obj.istr = &(self.istr[0].meta_size.obj)
        self.meta_size.attr.istr = &(self.istr[0].meta_size.attr)

    def __copy__(self):
        cdef ObjInfo newcopy
        newcopy = ObjInfo()
        newcopy.infostruct = self.infostruct
        return newcopy

@cython.binding(False)
@with_phil
def get_info(ObjectID loc not None, char* name=NULL, int index=-1, *,
        char* obj_name='.', int index_type=H5_INDEX_NAME, int order=H5_ITER_INC,
        PropID lapl=None):
    """(ObjectID loc, STRING name=, INT index=, **kwds) => ObjInfo

    Get information describing an object in an HDF5 file.  Provide the object
    itself, or the containing group and exactly one of "name" or "index".

    STRING obj_name (".")
        When "index" is specified, look in this subgroup instead.
        Otherwise ignored.

    PropID lapl (None)
        Link access property list

    INT index_type (h5.INDEX_NAME)

    INT order (h5.ITER_INC)
    """
    cdef ObjInfo info
    info = ObjInfo()

    if name != NULL and index >= 0:
        raise TypeError("At most one of name or index may be specified")
    elif name != NULL and index < 0:
        H5Oget_info_by_name(loc.id, name, &info.infostruct, pdefault(lapl))
    elif name == NULL and index >= 0:
        H5Oget_info_by_idx(loc.id, obj_name, index_type,
            order, index, &info.infostruct, pdefault(lapl))
    else:
        H5Oget_info(loc.id, &info.infostruct)

    return info
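
# Example (illustrative sketch; 'gid' is a GroupID and 'dsid' a DatasetID):
#
#     info = get_info(dsid)                 # describe the object itself
#     info = get_info(gid, b"data")         # describe a named member
#     info = get_info(gid, index=0)         # describe member 0 of "."
#     info.type == TYPE_DATASET, info.rc    # object type and reference count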

@with_phil
def exists_by_name(ObjectID loc not None, char *name, PropID lapl=None):
    """ (ObjectID loc, STRING name, PropID lapl=None) => BOOL exists

    Determines whether a link resolves to an actual object.
    """
    return H5Oexists_by_name(loc.id, name, pdefault(lapl))


# === General object operations ===============================================

@with_phil
def open(ObjectID loc not None, char* name, PropID lapl=None):
    """(ObjectID loc, STRING name, PropID lapl=None) => ObjectID

    Open a group, dataset, or named datatype attached to an existing group.
    """
    return wrap_identifier(H5Oopen(loc.id, name, pdefault(lapl)))


@with_phil
def link(ObjectID obj not None, GroupID loc not None, char* name,
    PropID lcpl=None, PropID lapl=None):
    """(ObjectID obj, GroupID loc, STRING name, PropID lcpl=None,
    PropID lapl=None)

    Create a new hard link to an object.  Useful for objects created with
    h5g.create_anon() or h5d.create_anon().
    """
    H5Olink(obj.id, loc.id, name, pdefault(lcpl), pdefault(lapl))


@with_phil
def copy(ObjectID src_loc not None, char* src_name, GroupID dst_loc not None,
    char* dst_name, PropID copypl=None, PropID lcpl=None):
    """(ObjectID src_loc, STRING src_name, GroupID dst_loc, STRING dst_name,
    PropID copypl=None, PropID lcpl=None)

    Copy a group, dataset or named datatype from one location to another.  The
    source and destination need not be in the same file.

    The default behavior is a recursive copy of the object and all objects
    below it.  This behavior is modified via the "copypl" property list.
    """
    H5Ocopy(src_loc.id, src_name, dst_loc.id, dst_name, pdefault(copypl),
        pdefault(lcpl))


@with_phil
def set_comment(ObjectID loc not None, char* comment, *, char* obj_name=".",
    PropID lapl=None):
    """(ObjectID loc, STRING comment, **kwds)

    Set the comment for any file-resident object.  Keywords:

    STRING obj_name (".")
        Set comment on this group member instead

    PropID lapl (None)
        Link access property list
    """
    H5Oset_comment_by_name(loc.id, obj_name, comment, pdefault(lapl))


@with_phil
def get_comment(ObjectID loc not None, char* comment, *, char* obj_name=".",
    PropID lapl=None):
    """(ObjectID loc, STRING comment, **kwds)

    Get the comment for any file-resident object.  Keywords:

    STRING obj_name (".")
        Get the comment of this group member instead

    PropID lapl (None)
        Link access property list
    """
    cdef ssize_t size
    cdef char* buf

    size = H5Oget_comment_by_name(loc.id, obj_name, NULL, 0, pdefault(lapl))
    buf = <char*>emalloc(size+1)
    try:
        H5Oget_comment_by_name(loc.id, obj_name, buf, size+1, pdefault(lapl))
        pstring = buf
    finally:
        efree(buf)

    return pstring


# === Visit routines ==========================================================

cdef class _ObjectVisitor:

    cdef object func
    cdef object retval
    cdef ObjInfo objinfo

    def __init__(self, func):
        self.func = func
        self.retval = None
        self.objinfo = ObjInfo()

cdef herr_t cb_obj_iterate(hid_t obj, const char* name, const H5O_info_t *info, void* data) except -1 with gil:

    cdef _ObjectVisitor visit

    # HDF5 doesn't respect callback return for ".", so skip it
    if strcmp(name, ".") == 0:
        return 0

    visit = <_ObjectVisitor>data
    visit.objinfo.infostruct = info[0]
    visit.retval = visit.func(name, visit.objinfo)

    if visit.retval is not None:
        return 1
    return 0

cdef herr_t cb_obj_simple(hid_t obj, const char* name, const H5O_info_t *info, void* data) except -1 with gil:

    cdef _ObjectVisitor visit

    # Not all versions of HDF5 respect callback value for ".", so skip it
    if strcmp(name, ".") == 0:
        return 0

    visit = <_ObjectVisitor>data
    visit.retval = visit.func(name)

    if visit.retval is not None:
        return 1
    return 0


@with_phil
def visit(ObjectID loc not None, object func, *,
          int idx_type=H5_INDEX_NAME, int order=H5_ITER_INC,
          char* obj_name=".", PropID lapl=None, bint info=0):
    """(ObjectID loc, CALLABLE func, **kwds) => 

    Iterate a function or callable object over all objects below the
    specified one.  Your callable should conform to the signature::

        func(STRING name) => Result

    or if the keyword argument "info" is True::

        func(STRING name, ObjInfo info) => Result

    Returning None continues iteration; returning anything else aborts
    iteration and returns that value.  Keywords:

    BOOL info (False)
        Callback is func(STRING, ObjInfo)

    STRING obj_name (".")
        Visit a subgroup of "loc" instead

    PropLAID lapl (None)
        Control how "obj_name" is interpreted

    INT idx_type (h5.INDEX_NAME)
        What indexing strategy to use

    INT order (h5.ITER_INC)
        Order in which iteration occurs

    Compatibility note:  No callback is executed for the starting path ("."),
    as some versions of HDF5 don't correctly handle a return value for this
    case.  This differs from the behavior of the native H5Ovisit, which
    provides a literal "." as the first value.
    """
    cdef _ObjectVisitor visit = _ObjectVisitor(func)
    cdef H5O_iterate_t cfunc

    if info:
        cfunc = cb_obj_iterate
    else:
        cfunc = cb_obj_simple

    H5Ovisit_by_name(loc.id, obj_name, <H5_index_t>idx_type,
        <H5_iter_order_t>order, cfunc, <void*>visit, pdefault(lapl))

    return visit.retval
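
# Example (illustrative sketch, assuming 'fid' is an open FileID): build a
# list of the paths of all objects in the file, then repeat with ObjInfo
# records supplied to the callback.
#
#     paths = []
#     visit(fid, lambda name: paths.append(name))
#
#     types = {}
#     visit(fid, lambda name, info: types.__setitem__(name, info.type),
#           info=True)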
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/h5p.pxd0000644000175000017500000000400214507227212015455 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

# --- Base classes ---

cdef class PropID(ObjectID):
    """ Base class for all property lists """
    pass

cdef class PropClassID(PropID):
    """ Represents an HDF5 property list class.  These can be either (locked)
        library-defined classes or user-created classes.
    """
    pass

cdef class PropInstanceID(PropID):
    """ Represents an instance of a property list class (i.e. an actual list
        which can be passed on to other API functions).
    """
    pass

cdef class PropCreateID(PropInstanceID):
    """ Base class for all object creation lists.

        Also includes string character set methods.
    """
    pass

cdef class PropCopyID(PropInstanceID):
    """ Property list for copying objects (as in h5o.copy) """

# --- Object creation ---

cdef class PropOCID(PropCreateID):
    """ Object creation property list """
    pass

cdef class PropDCID(PropOCID):
    """ Dataset creation property list """
    pass

cdef class PropFCID(PropOCID):
    """ File creation property list """
    pass


# --- Object access ---

cdef class PropFAID(PropInstanceID):
    """ File access property list """
    pass

cdef class PropDXID(PropInstanceID):
    """ Dataset transfer property list """
    pass

cdef class PropDAID(PropInstanceID):
    """ Dataset access property list"""
    cdef char* _virtual_prefix_buf
    cdef char* _efile_prefix_buf

cdef class PropLCID(PropCreateID):
    """ Link creation property list """
    pass

cdef class PropLAID(PropInstanceID):
    """ Link access property list """
    cdef char* _buf

cdef class PropGCID(PropOCID):
    """ Group creation property list """
    pass

cdef hid_t pdefault(PropID pid)
cdef object propwrap(hid_t id_in)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872510.0
h5py-3.13.0/h5py/h5p.pyx0000644000175000017500000016210714755054376015533 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    HDF5 property list interface.
"""

include "config.pxi"

# C-level imports
from cpython.buffer cimport PyObject_CheckBuffer, \
                            PyObject_GetBuffer, PyBuffer_Release, \
                            PyBUF_SIMPLE
from cpython.long cimport PyLong_AsVoidPtr

from .utils cimport  require_tuple, convert_dims, convert_tuple, \
                    emalloc, efree, \
                    check_numpy_write, check_numpy_read
from numpy cimport ndarray, import_array
from .h5t cimport TypeID, py_create
from .h5s cimport SpaceID
from .h5ac cimport CacheConfig

# Python level imports
from ._objects import phil, with_phil

if MPI:
    from mpi4py.libmpi cimport (
        MPI_Comm, MPI_Info, MPI_Comm_dup, MPI_Info_dup,
        MPI_Comm_free, MPI_Info_free)


# Initialization
import_array()

# === C API ===================================================================

cdef hid_t pdefault(PropID pid):

    if pid is None:
        return H5P_DEFAULT
    return pid.id

cdef object propwrap(hid_t id_in):

    clsid = H5Pget_class(id_in)
    try:
        if H5Pequal(clsid, H5P_FILE_CREATE):
            pcls = PropFCID
        elif H5Pequal(clsid, H5P_FILE_ACCESS):
            pcls = PropFAID
        elif H5Pequal(clsid, H5P_DATASET_CREATE):
            pcls = PropDCID
        elif H5Pequal(clsid, H5P_DATASET_XFER):
            pcls = PropDXID
        elif H5Pequal(clsid, H5P_OBJECT_COPY):
            pcls = PropCopyID
        elif H5Pequal(clsid, H5P_LINK_CREATE):
            pcls = PropLCID
        elif H5Pequal(clsid, H5P_LINK_ACCESS):
            pcls = PropLAID
        elif H5Pequal(clsid, H5P_GROUP_CREATE):
            pcls = PropGCID
        elif H5Pequal(clsid, H5P_DATATYPE_CREATE):
            pcls = PropTCID
        elif H5Pequal(clsid, H5P_DATASET_ACCESS):
            pcls = PropDAID
        elif H5Pequal(clsid, H5P_OBJECT_CREATE):
            pcls = PropOCID

        else:
            raise ValueError("No class found for ID %d" % id_in)

        return pcls(id_in)
    finally:
        H5Pclose_class(clsid)

cdef object lockcls(hid_t id_in):
    cdef PropClassID pid
    pid = PropClassID(id_in)
    pid.locked = 1
    return pid


# === Public constants and data structures ====================================

# Property list classes
# These need to be locked, as the library won't let you close them.


NO_CLASS       = lockcls(H5P_NO_CLASS)
FILE_CREATE    = lockcls(H5P_FILE_CREATE)
FILE_ACCESS    = lockcls(H5P_FILE_ACCESS)
DATASET_CREATE = lockcls(H5P_DATASET_CREATE)
DATASET_XFER   = lockcls(H5P_DATASET_XFER)
DATASET_ACCESS = lockcls(H5P_DATASET_ACCESS)

OBJECT_COPY = lockcls(H5P_OBJECT_COPY)

LINK_CREATE = lockcls(H5P_LINK_CREATE)
LINK_ACCESS = lockcls(H5P_LINK_ACCESS)
GROUP_CREATE = lockcls(H5P_GROUP_CREATE)
OBJECT_CREATE = lockcls(H5P_OBJECT_CREATE)

CRT_ORDER_TRACKED = H5P_CRT_ORDER_TRACKED
CRT_ORDER_INDEXED = H5P_CRT_ORDER_INDEXED

DEFAULT = None   # In the HDF5 header files this is actually 0, which is an
                 # invalid identifier.  The new strategy for default options
                 # is to make them all None, to better match the Python style
                 # for keyword arguments.


# === Property list functional API ============================================

@with_phil
def create(PropClassID cls not None):
    """(PropClassID cls) => PropID

    Create a new property list as an instance of a class; classes are:

    - FILE_CREATE
    - FILE_ACCESS
    - DATASET_CREATE
    - DATASET_XFER
    - DATASET_ACCESS
    - LINK_CREATE
    - LINK_ACCESS
    - GROUP_CREATE
    - OBJECT_COPY
    - OBJECT_CREATE
    """
    cdef hid_t newid
    newid = H5Pcreate(cls.id)
    return propwrap(newid)
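
# Example (illustrative sketch): create and configure concrete property list
# instances; the wrapper returns the matching Prop*ID subclass.
#
#     fapl = create(FILE_ACCESS)            # PropFAID instance
#     dcpl = create(DATASET_CREATE)         # PropDCID instance
#     dcpl.set_chunk((64, 64))              # e.g. chunked dataset layout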


# === Class API ===============================================================

cdef class PropID(ObjectID):

    """
        Base class for all property lists and classes
    """


    @with_phil
    def equal(self, PropID plist not None):
        """(PropID plist) => BOOL

        Compare this property list (or class) to another for equality.
        """
        return <bint>(H5Pequal(self.id, plist.id))

    def __richcmp__(self, object other, int how):
        cdef bint truthval = 0

        with phil:
            if how != 2 and how != 3:
                return NotImplemented
            if type(self) == type(other):
                truthval = self.equal(other)

            if how == 2:
                return truthval
            return not truthval

    def __hash__(self):
        raise TypeError("Property lists are unhashable")

cdef class PropClassID(PropID):

    """
        An HDF5 property list class.

        * Hashable: Yes, by identifier
        * Equality: Logical H5P comparison
    """

    def __richcmp__(self, object other, int how):
        return PropID.__richcmp__(self, other, how)

    def __hash__(self):
        """ Since classes are library-created and immutable, they are uniquely
            identified by their HDF5 identifiers.
        """
        return hash(self.id)

cdef class PropInstanceID(PropID):

    """
        Base class for property list instance objects.  Provides methods which
        are common across all HDF5 property list classes.

        * Hashable: No
        * Equality: Logical H5P comparison
    """


    @with_phil
    def copy(self):
        """() => PropList newid

         Create a new copy of an existing property list object.
        """
        return type(self)(H5Pcopy(self.id))


    def get_class(self):
        """() => PropClassID

        Determine the class of a property list object.
        """
        return PropClassID(H5Pget_class(self.id))


cdef class PropCreateID(PropInstanceID):

    """
        Generic object creation property list.
    """
    pass


cdef class PropCopyID(PropInstanceID):

    """
        Generic object copy property list
    """


    @with_phil
    def set_copy_object(self, unsigned int flags):
        """(UINT flags)

        Set flags for object copying process.  Legal flags are
        from the h5o.COPY* family:

        h5o.COPY_SHALLOW_HIERARCHY_FLAG
            Copy only immediate members of a group.

        h5o.COPY_EXPAND_SOFT_LINK_FLAG
            Expand soft links into new objects.

        h5o.COPY_EXPAND_EXT_LINK_FLAG
            Expand external link into new objects.

        h5o.COPY_EXPAND_REFERENCE_FLAG
            Copy objects that are pointed to by references.

        h5o.COPY_WITHOUT_ATTR_FLAG
            Copy object without copying attributes.
        """
        H5Pset_copy_object(self.id, flags)


    @with_phil
    def get_copy_object(self):
        """() => UINT flags

        Get copy process flags. Legal flags are h5o.COPY*.
        """
        cdef unsigned int flags
        H5Pget_copy_object(self.id, &flags)
        return flags
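
    # Example (illustrative sketch; 'src' and 'dst' are hypothetical GroupID
    # objects): copy an object without its attributes, expanding soft links.
    #
    #     from h5py import h5o, h5p
    #
    #     copypl = h5p.create(h5p.OBJECT_COPY)
    #     copypl.set_copy_object(h5o.COPY_WITHOUT_ATTR_FLAG |
    #                            h5o.COPY_EXPAND_SOFT_LINK_FLAG)
    #     h5o.copy(src, b"data", dst, b"data_copy", copypl=copypl)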


# === Concrete list implementations ===========================================

# File creation

cdef class PropFCID(PropOCID):

    """
        File creation property list.
    """


    @with_phil
    def get_version(self):
        """() => TUPLE version_info

        Determine version information of various file attributes.
        Elements are:

        0.  UINT Super block version number
        1.  UINT Freelist version number
        2.  UINT Symbol table version number
        3.  UINT Shared object header version number
        """
        cdef herr_t retval
        cdef unsigned int super_
        cdef unsigned int freelist
        cdef unsigned int stab
        cdef unsigned int shhdr

        H5Pget_version(self.id, &super_, &freelist, &stab, &shhdr)

        return (super_, freelist, stab, shhdr)


    @with_phil
    def set_userblock(self, hsize_t size):
        """(INT/LONG size)

        Set the file user block size, in bytes.
        Must be a power of 2, and at least 512.
        """
        H5Pset_userblock(self.id, size)


    @with_phil
    def get_userblock(self):
        """() => LONG size

        Determine the user block size, in bytes.
        """
        cdef hsize_t size
        H5Pget_userblock(self.id, &size)
        return size
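
    # Hedged usage sketch (not part of the library code): reserve a 512-byte
    # user block at the start of a newly created file.
    #
    #     >>> from h5py import h5p, h5f
    #     >>> fcpl = h5p.create(h5p.FILE_CREATE)
    #     >>> fcpl.set_userblock(512)
    #     >>> fcpl.get_userblock()
    #     512
    #     >>> fid = h5f.create(b"with_userblock.h5", h5f.ACC_TRUNC, fcpl=fcpl)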


    @with_phil
    def set_sizes(self, size_t addr, size_t size):
        """(UINT addr, UINT size)

        Set the addressing offsets and lengths for objects
        in an HDF5 file, in bytes.
        """
        H5Pset_sizes(self.id, addr, size)


    @with_phil
    def get_sizes(self):
        """() => TUPLE sizes

        Determine addressing offsets and lengths for objects in an
        HDF5 file, in bytes.  Return value is a 2-tuple with values:

        0.  UINT Address offsets
        1.  UINT Lengths
        """
        cdef size_t addr
        cdef size_t size
        H5Pget_sizes(self.id, &addr, &size)
        return (addr, size)


    @with_phil
    def set_link_creation_order(self, unsigned int flags):
        """ (UINT flags)

        Set tracking and indexing of creation order for links added to this group

        flags -- h5p.CRT_ORDER_TRACKED, h5p.CRT_ORDER_INDEXED
        """
        H5Pset_link_creation_order(self.id, flags)


    @with_phil
    def get_link_creation_order(self):
        """ () -> UINT flags

        Get tracking and indexing of creation order for links added to this group
        """
        cdef unsigned int flags
        H5Pget_link_creation_order(self.id, &flags)
        return flags

    @with_phil
    def set_file_space_strategy(self, unsigned int strategy, bint persist,
            unsigned long long threshold):
        """ (UINT strategy, BOOL persist, ULONGLONG threshold)

        Set the file space handling strategy and persisting free-space values.
        """
        H5Pset_file_space_strategy(self.id, strategy,
                persist, threshold)

    @with_phil
    def get_file_space_strategy(self):
        """ () => TUPLE(UINT strategy, BOOL persist, ULONGLONG threshold)

        Retrieve the file space handling strategy, persisting free-space
        condition and threshold value for a file creation property list.
        """
        cdef H5F_fspace_strategy_t strategy
        cdef hbool_t persist
        cdef hsize_t threshold

        H5Pget_file_space_strategy(self.id, &strategy, &persist, &threshold)
        return (strategy, persist, threshold)

    @with_phil
    def set_file_space_page_size(self, hsize_t fsp_size):
        """ (LONG fsp_size)

        Set the file space page size used in paged aggregation and paged
        buffering. Minimum page size is 512 bytes. A value less than 512 will raise
        an error. The size set may not be changed for the life of the file.
        """
        H5Pset_file_space_page_size(self.id, fsp_size)

    @with_phil
    def get_file_space_page_size(self):
        """ () -> LONG fsp_size

        Retrieve the file space page size.
        """
        cdef hsize_t fsp_size
        H5Pget_file_space_page_size(self.id, &fsp_size)
        return fsp_size
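
    # Hedged usage sketch (not part of the library code).  The strategy
    # constant is assumed to be exposed as h5f.FSPACE_STRATEGY_PAGE for
    # HDF5 >= 1.10.1; check the h5f module of your build.
    #
    #     >>> from h5py import h5p, h5f
    #     >>> fcpl = h5p.create(h5p.FILE_CREATE)
    #     >>> fcpl.set_file_space_strategy(h5f.FSPACE_STRATEGY_PAGE, False, 1)
    #     >>> fcpl.set_file_space_page_size(4096)
    #     >>> fcpl.get_file_space_page_size()
    #     4096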

# Dataset creation
cdef class PropDCID(PropOCID):

    """
        Dataset creation property list.
    """

    @with_phil
    def set_layout(self, int layout_code):
        """(INT layout_code)

        Set dataset storage strategy; legal values are:

        - h5d.COMPACT
        - h5d.CONTIGUOUS
        - h5d.CHUNKED
        - h5d.VIRTUAL (If using HDF5 library version 1.10 or later)
        """
        H5Pset_layout(self.id, layout_code)


    @with_phil
    def get_layout(self):
        """() => INT layout_code

        Determine the storage strategy of a dataset; legal values are:

        - h5d.COMPACT
        - h5d.CONTIGUOUS
        - h5d.CHUNKED
        - h5d.VIRTUAL (If using HDF5 library version 1.10 or later)
        """
        return H5Pget_layout(self.id)

    @with_phil
    def set_chunk(self, object chunksize):
        """(TUPLE chunksize)

        Set the dataset chunk size.  It's up to you to provide
        values which are compatible with your dataset.
        """
        cdef int rank
        cdef hsize_t* dims
        dims = NULL

        require_tuple(chunksize, 0, -1, b"chunksize")
        rank = len(chunksize)

        dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        try:
            convert_tuple(chunksize, dims, rank)
            H5Pset_chunk(self.id, rank, dims)
        finally:
            efree(dims)


    @with_phil
    def get_chunk(self):
        """() => TUPLE chunk_dimensions

        Obtain the dataset chunk size, as a tuple.
        """
        cdef int rank
        cdef hsize_t *dims

        rank = H5Pget_chunk(self.id, 0, NULL)
        assert rank >= 0
        dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)

        try:
            H5Pget_chunk(self.id, rank, dims)
            tpl = convert_dims(dims, rank)
            return tpl
        finally:
            efree(dims)
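
    # Hedged usage sketch (not part of the library code): chunking is
    # configured on the creation property list before the dataset exists.
    #
    #     >>> from h5py import h5p, h5d
    #     >>> dcpl = h5p.create(h5p.DATASET_CREATE)
    #     >>> dcpl.set_layout(h5d.CHUNKED)
    #     >>> dcpl.set_chunk((64, 64))
    #     >>> dcpl.get_chunk()
    #     (64, 64)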


    @with_phil
    def set_fill_value(self, ndarray value not None):
        """(NDARRAY value)

        Set the dataset fill value.  The object provided should be a
        0-dimensional NumPy array; if it has nonzero rank, the fill value
        is read from its first element.
        """
        from .h5t import check_string_dtype
        cdef TypeID tid
        cdef char * c_ptr

        check_numpy_read(value, -1)

        # check for strings
        # create correct typeID and pointer to c_str
        string_info = check_string_dtype(value.dtype)
        if string_info is not None:
            # if needed encode fill_value
            fill_value = value.item()
            if not isinstance(fill_value, bytes):
                fill_value = fill_value.encode(string_info.encoding)
            c_ptr = fill_value
            tid = py_create(value.dtype, logical=1)
            H5Pset_fill_value(self.id, tid.id, &c_ptr)
            return

        tid = py_create(value.dtype)
        H5Pset_fill_value(self.id, tid.id, value.data)


    @with_phil
    def get_fill_value(self, ndarray value not None):
        """(NDARRAY value)

        Read the dataset fill value into a NumPy array.  It will be
        converted to match the array dtype.  If the array has nonzero
        rank, only the first element will contain the value.
        """
        from .h5t import check_string_dtype
        cdef TypeID tid
        cdef char * c_ptr = NULL

        check_numpy_write(value, -1)

        # check for vlen strings
        # create correct typeID and convert from c_str pointer to string
        string_info = check_string_dtype(value.dtype)
        if string_info is not None and string_info.length is None:
            tid = py_create(value.dtype, logical=1)
            ret = H5Pget_fill_value(self.id, tid.id, &c_ptr)
            if c_ptr == NULL:
                # If the pointer is NULL (either the value was never set, or
                # possibly it represents a zero-length string; it is unclear
                # which), calling PyBytes_FromString on it would segfault.
                # Setting the value to empty bytes instead avoids that.
                value[0] = b""
                return
            fill_value = c_ptr
            value[0] = fill_value
            return

        tid = py_create(value.dtype)
        H5Pget_fill_value(self.id, tid.id, value.data)

    @with_phil
    def fill_value_defined(self):
        """() => INT fill_status

        Determine the status of the dataset fill value.  Return values are:

        - h5d.FILL_VALUE_UNDEFINED
        - h5d.FILL_VALUE_DEFAULT
        - h5d.FILL_VALUE_USER_DEFINED
        """
        cdef H5D_fill_value_t val
        H5Pfill_value_defined(self.id, &val)
        return val
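
    # Hedged usage sketch (not part of the library code): fill values are
    # passed in and read back through 0-dimensional NumPy arrays.
    #
    #     >>> import numpy as np
    #     >>> from h5py import h5p, h5d
    #     >>> dcpl = h5p.create(h5p.DATASET_CREATE)
    #     >>> dcpl.set_fill_value(np.array(-999.0))
    #     >>> out = np.zeros((), dtype='f8')
    #     >>> dcpl.get_fill_value(out)
    #     >>> float(out)
    #     -999.0
    #     >>> dcpl.fill_value_defined() == h5d.FILL_VALUE_USER_DEFINED
    #     True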


    @with_phil
    def set_fill_time(self, int fill_time):
        """(INT fill_time)

        Define when fill values are written to the dataset.  Legal
        values (defined in module h5d) are:

        - h5d.FILL_TIME_ALLOC
        - h5d.FILL_TIME_NEVER
        - h5d.FILL_TIME_IFSET
        """
        H5Pset_fill_time(self.id, fill_time)


    @with_phil
    def get_fill_time(self):
        """ () => INT

        Determine when fill values are written to the dataset.  Legal
        values (defined in module h5d) are:

        - h5d.FILL_TIME_ALLOC
        - h5d.FILL_TIME_NEVER
        - h5d.FILL_TIME_IFSET
        """
        cdef H5D_fill_time_t fill_time
        H5Pget_fill_time(self.id, &fill_time)
        return fill_time


    @with_phil
    def set_alloc_time(self, int alloc_time):
        """(INT alloc_time)

        Set the storage space allocation time.  One of h5d.ALLOC_TIME*.
        """
        H5Pset_alloc_time(self.id, alloc_time)


    @with_phil
    def get_alloc_time(self):
        """() => INT alloc_time

        Get the storage space allocation time.  One of h5d.ALLOC_TIME*.
        """
        cdef H5D_alloc_time_t alloc_time
        H5Pget_alloc_time(self.id, &alloc_time)
        return alloc_time


    # === Filter functions ====================================================

    @with_phil
    def set_filter(self, int filter_code, unsigned int flags=0, object values=None):
        """(INT filter_code, UINT flags=0, TUPLE values=None)

        Set a filter in the pipeline.  Params are:

        filter_code
            One of the following:

            - h5z.FILTER_DEFLATE
            - h5z.FILTER_SHUFFLE
            - h5z.FILTER_FLETCHER32
            - h5z.FILTER_SZIP

        flags
            Bit flags (h5z.FLAG*) setting filter properties

        values
            TUPLE of UINTs giving auxiliary data for the filter
        """
        cdef size_t nelements
        cdef unsigned int *cd_values
        cdef int i
        cd_values = NULL

        require_tuple(values, 1, -1, b"values")

        try:
            if values is None or len(values) == 0:
                nelements = 0
                cd_values = NULL
            else:
                nelements = len(values)
                cd_values = <unsigned int*>emalloc(sizeof(unsigned int)*nelements)

                for i in range(nelements):
                    cd_values[i] = int(values[i])

            H5Pset_filter(self.id, filter_code, flags, nelements, cd_values)
        finally:
            efree(cd_values)


    @with_phil
    def all_filters_avail(self):
        """() => BOOL

        Determine if all the filters in the pipelist are available to
        the library.
        """
        return <bint>(H5Pall_filters_avail(self.id))


    @with_phil
    def get_nfilters(self):
        """() => INT

        Determine the number of filters in the pipeline.
        """
        return H5Pget_nfilters(self.id)


    @with_phil
    def get_filter(self, int filter_idx):
        """(UINT filter_idx) => TUPLE filter_info

        Get information about a filter, identified by its index.  Tuple
        elements are:

        0. INT filter code (h5z.FILTER*)
        1. UINT flags (h5z.FLAG*)
        2. TUPLE of UINT values; filter aux data (16 values max)
        3. STRING name of filter (256 chars max)
        """
        cdef list vlist
        cdef int filter_code
        cdef unsigned int flags
        cdef size_t nelements
        cdef unsigned int cd_values[16]
        cdef char name[257]
        cdef int i
        nelements = 16 # HDF5 library actually complains if this is too big.

        if filter_idx < 0:
            raise ValueError("Filter index must be a non-negative integer")

        filter_code = H5Pget_filter(self.id, filter_idx, &flags,
                                         &nelements, cd_values, 256, name, NULL)
        name[256] = c'\0'  # in case it's > 256 chars

        vlist = []
        for i in range(nelements):
            vlist.append(cd_values[i])

        return (filter_code, flags, tuple(vlist), name)


    @with_phil
    def _has_filter(self, int filter_code):
        """(INT filter_code)

        Slow & stupid method to determine if a filter is used in this
        property list.  Used because the HDF5 function H5Pget_filter_by_id
        is broken.
        """
        cdef int i, nfilters
        nfilters = self.get_nfilters()
        for i in range(nfilters):
            if self.get_filter(i)[0] == filter_code:
                return True
        return False


    @with_phil
    def get_filter_by_id(self, int filter_code):
        """(INT filter_code) => TUPLE filter_info or None

        Get information about a filter, identified by its code (one
        of h5z.FILTER*).  If the filter doesn't exist, returns None.
        Tuple elements are:

        0. UINT flags (h5z.FLAG*)
        1. TUPLE of UINT values; filter aux data (16 values max)
        2. STRING name of filter (256 chars max)
        """
        cdef list vlist
        cdef unsigned int flags
        cdef size_t nelements
        cdef unsigned int cd_values[16]
        cdef char name[257]
        cdef herr_t retval
        cdef int i
        nelements = 16 # HDF5 library actually complains if this is too big.

        if not self._has_filter(filter_code):
            # Avoid library segfault
            return None

        retval = H5Pget_filter_by_id(self.id, filter_code,
                                     &flags, &nelements, cd_values, 256, name, NULL)
        assert nelements <= 16

        name[256] = c'\0'  # In case HDF5 doesn't terminate it properly

        vlist = []
        for i in range(nelements):
            vlist.append(cd_values[i])

        return (flags, tuple(vlist), name)


    @with_phil
    def remove_filter(self, int filter_class):
        """(INT filter_class)

        Remove a filter from the pipeline.  The class code is one of
        h5z.FILTER*.
        """
        H5Premove_filter(self.id, filter_class)


    @with_phil
    def set_deflate(self, unsigned int level=5):
        """(UINT level=5)

        Enable deflate (gzip) compression, at the given level.
        Valid levels are 0-9, default is 5.
        """
        H5Pset_deflate(self.id, level)


    @with_phil
    def set_fletcher32(self):
        """()

        Enable Fletcher32 error correction on this list.
        """
        H5Pset_fletcher32(self.id)


    @with_phil
    def set_shuffle(self):
        """()

        Enable the use of the shuffle filter.  Use this immediately before
        the deflate filter to increase the compression ratio.
        """
        H5Pset_shuffle(self.id)
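
    # Hedged usage sketch (not part of the library code): filters apply only
    # to chunked datasets; shuffle is normally added right before deflate.
    #
    #     >>> from h5py import h5p
    #     >>> dcpl = h5p.create(h5p.DATASET_CREATE)
    #     >>> dcpl.set_chunk((64, 64))
    #     >>> dcpl.set_shuffle()
    #     >>> dcpl.set_deflate(4)
    #     >>> dcpl.get_nfilters()
    #     2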


    @with_phil
    def set_szip(self, unsigned int options, unsigned int pixels_per_block):
        """(UINT options, UINT pixels_per_block)

        Enable SZIP compression.  See the HDF5 docs for argument meanings,
        and general restrictions on use of the SZIP format.
        """
        H5Pset_szip(self.id, options, pixels_per_block)


    @with_phil
    def set_scaleoffset(self, H5Z_SO_scale_type_t scale_type, int scale_factor):
        '''(H5Z_SO_scale_type_t scale_type, INT scale_factor)

        Enable scale/offset (usually lossy) compression; lossless (e.g. gzip)
        compression and other filters may be applied on top of this.

        Note that error detection (i.e. fletcher32) cannot precede this in
        the filter chain, or else all reads on lossily-compressed data will
        fail.'''
        H5Pset_scaleoffset(self.id, scale_type, scale_factor)


    # === External dataset functions ===========================================

    @with_phil
    def set_external(self, name, offset, size):
        '''(STR name, UINT offset, UINT size)

        Adds an external file to the list of external files for the dataset.

        The first call sets the external storage property in the property list,
        thus designating that the dataset will be stored in one or more non-HDF5
        file(s) external to the HDF5 file.'''
        H5Pset_external(self.id, name, offset, size)

    @with_phil
    def get_external_count(self):
        """() => INT

        Returns the number of external files for the dataset.
        """
        return (H5Pget_external_count(self.id))

    @with_phil
    def get_external(self, idx=0):
        """(UINT idx=0) => TUPLE external_file_info

        Returns information about the indexed external file.
        Tuple elements are:

        0. STRING name of file (256 chars max)
        1. UINT offset
        2. UINT size
        """
        cdef char name[257]
        cdef off_t offset
        cdef hsize_t size
        cdef herr_t retval

        retval = H5Pget_external(self.id, idx, 256, name, &offset, &size)
        name[256] = c'\0'  # In case HDF5 doesn't terminate name properly

        result = None
        if retval==0:
            result = (name, offset, size)
        return result
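
    # Hedged usage sketch (not part of the library code): store the raw bytes
    # of a dataset in an external, non-HDF5 file ("raw.bin" is a placeholder).
    #
    #     >>> from h5py import h5p
    #     >>> dcpl = h5p.create(h5p.DATASET_CREATE)
    #     >>> dcpl.set_external(b"raw.bin", 0, 4096)
    #     >>> dcpl.get_external_count()
    #     1
    #     >>> dcpl.get_external(0)
    #     (b'raw.bin', 0, 4096)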

    # === Virtual dataset functions ===========================================
    @with_phil
    def set_virtual(self, SpaceID vspace not None, char* src_file_name,
                    char* src_dset_name, SpaceID src_space not None):
        """(SpaceID vspace, STR src_file_name, STR src_dset_name, SpaceID src_space)

        Set the mapping between virtual and source datasets.

        The virtual dataset is described by its virtual dataspace (vspace),
        which selects the elements to be mapped. The source dataset is
        described by the name of the file where it is located
        (src_file_name), the name of the dataset (src_dset_name) and its
        dataspace (src_space).
        """
        H5Pset_virtual(self.id, vspace.id, src_file_name, src_dset_name, src_space.id)

    @with_phil
    def get_virtual_count(self):
        """() => UINT

        Get the number of mappings for the virtual dataset.
        """
        cdef size_t count
        H5Pget_virtual_count(self.id, &count)
        return count

    @with_phil
    def get_virtual_dsetname(self, size_t index=0):
        """(UINT index=0) => STR

        Get the name of a source dataset used in the mapping of the virtual
        dataset at the position index.
        """
        cdef char* name = NULL
        cdef ssize_t size

        size = H5Pget_virtual_dsetname(self.id, index, NULL, 0)
        name = <char*>emalloc(size+1)
        try:
            # TODO check return size
            H5Pget_virtual_dsetname(self.id, index, name, size+1)
            src_dset_name = bytes(name).decode('utf-8')
        finally:
            efree(name)

        return src_dset_name

    @with_phil
    def get_virtual_filename(self, size_t index=0):
        """(UINT index=0) => STR

        Get the file name of a source dataset used in the mapping of the
        virtual dataset at the position index.
        """
        cdef char* name = NULL
        cdef ssize_t size

        size = H5Pget_virtual_filename(self.id, index, NULL, 0)
        name = <char*>emalloc(size+1)
        try:
            # TODO check return size
            H5Pget_virtual_filename(self.id, index, name, size+1)
            src_fname = bytes(name).decode('utf-8')
        finally:
            efree(name)

        return src_fname

    @with_phil
    def get_virtual_vspace(self, size_t index=0):
        """(UINT index=0) => SpaceID

        Get a dataspace for the selection within the virtual dataset used
        in the mapping.
        """
        return SpaceID(H5Pget_virtual_vspace(self.id, index))

    @with_phil
    def get_virtual_srcspace(self, size_t index=0):
        """(UINT index=0) => SpaceID

        Get a dataspace for the selection within the source dataset used
        in the mapping.
        """
        return SpaceID(H5Pget_virtual_srcspace(self.id, index))
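
    # Hedged usage sketch (not part of the library code): map a 1-D virtual
    # dataset onto a same-shaped source dataset in another file.  The file
    # and dataset names are placeholders.
    #
    #     >>> from h5py import h5p, h5s
    #     >>> dcpl = h5p.create(h5p.DATASET_CREATE)
    #     >>> vspace = h5s.create_simple((100,))
    #     >>> src_space = h5s.create_simple((100,))
    #     >>> dcpl.set_virtual(vspace, b"source.h5", b"data", src_space)
    #     >>> dcpl.get_virtual_count()
    #     1
    #     >>> dcpl.get_virtual_dsetname(0)
    #     'data'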

# File access
cdef class PropFAID(PropInstanceID):

    """
        File access property list
    """


    @with_phil
    def set_fclose_degree(self, int close_degree):
        """(INT close_degree)

        Set the file-close degree, which determines library behavior when
        a file is closed when objects are still open.  Legal values:

        * h5f.CLOSE_DEFAULT
        * h5f.CLOSE_WEAK
        * h5f.CLOSE_SEMI
        * h5f.CLOSE_STRONG
        """
        H5Pset_fclose_degree(self.id, close_degree)


    @with_phil
    def get_fclose_degree(self):
        """() => INT close_degree
        - h5fd.
        Get the file-close degree, which determines library behavior when
        a file is closed when objects are still open.  Legal values:

        * h5f.CLOSE_DEFAULT
        * h5f.CLOSE_WEAK
        * h5f.CLOSE_SEMI
        * h5f.CLOSE_STRONG
        """
        cdef H5F_close_degree_t deg
        H5Pget_fclose_degree(self.id, &deg)
        return deg


    @with_phil
    def set_fapl_core(self, size_t block_size=64*1024, hbool_t backing_store=1):
        """(UINT increment=64k, BOOL backing_store=True)

        Use the h5fd.CORE (memory-resident) file driver.

        block_size
            Chunk size for new memory requests (default 64 KiB)

        backing_store
            If True (default), memory contents are associated with an
            on-disk file, which is updated when the file is closed.
            Set to False for a purely in-memory file.
        """
        H5Pset_fapl_core(self.id, block_size, backing_store)


    @with_phil
    def get_fapl_core(self):
        """() => TUPLE core_settings

        Determine settings for the h5fd.CORE (memory-resident) file driver.
        Tuple elements are:

        0. UINT "increment": Chunk size for new memory requests
        1. BOOL "backing_store": If True, write the memory contents to
           disk when the file is closed.
        """
        cdef size_t increment
        cdef hbool_t backing_store
        H5Pget_fapl_core(self.id, &increment, &backing_store)
        return (increment, <bint>(backing_store))
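
    # Hedged usage sketch (not part of the library code): a purely in-memory
    # file using the CORE driver, never written to disk.
    #
    #     >>> from h5py import h5p, h5f
    #     >>> fapl = h5p.create(h5p.FILE_ACCESS)
    #     >>> fapl.set_fapl_core(block_size=1024**2, backing_store=0)
    #     >>> fapl.get_fapl_core()
    #     (1048576, False)
    #     >>> fid = h5f.create(b"never_written.h5", h5f.ACC_TRUNC, fapl=fapl)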


    @with_phil
    def set_fapl_family(self, hsize_t memb_size=2147483647, PropID memb_fapl=None):
        """(UINT memb_size=2**31-1, PropFAID memb_fapl=None)

        Set up the family driver.

        memb_size
            Member file size

        memb_fapl
            File access property list for each member access
        """
        cdef hid_t plist_id
        plist_id = pdefault(memb_fapl)
        H5Pset_fapl_family(self.id, memb_size, plist_id)


    @with_phil
    def get_fapl_family(self):
        """() => TUPLE info

        Determine family driver settings. Tuple values are:

        0. UINT memb_size
        1. PropFAID memb_fapl or None
        """
        cdef hid_t mfapl_id
        cdef hsize_t msize
        cdef PropFAID plist
        plist = None

        H5Pget_fapl_family(self.id, &msize, &mfapl_id)

        if mfapl_id > 0:
            plist = PropFAID(mfapl_id)

        return (msize, plist)


    if ROS3:
        @with_phil
        def set_fapl_ros3(self, char* aws_region="", char* secret_id="",
                          char* secret_key=""):
            """(STRING aws_region, STRING secret_id, STRING secret_key)

            Set up the ros3 driver.
            """
            cdef H5FD_ros3_fapl_t config
            config.version = H5FD_CURR_ROS3_FAPL_T_VERSION
            if len(aws_region) and len(secret_id) and len(secret_key):
                config.authenticate = 1
            else:
                config.authenticate = 0
            config.aws_region = aws_region
            config.secret_id = secret_id
            config.secret_key = secret_key
            H5Pset_fapl_ros3(self.id, &config)


        @with_phil
        def get_fapl_ros3(self):
            """ () => STRUCT config

            Retrieve the ROS3 config
            """
            cdef H5FD_ros3_fapl_t config

            H5Pget_fapl_ros3(self.id, &config)
            return config

        IF HDF5_VERSION >= (1, 14, 2):
            @with_phil
            def get_fapl_ros3_token(self):
                """ () => BYTES token

                Get session token from the file access property list.
                """
                cdef size_t size = 0
                cdef char *token = NULL

                size = H5FD_ROS3_MAX_SECRET_TOK_LEN + 1
                try:
                    token = <char*>emalloc(size)
                    token[0] = 0
                    H5Pget_fapl_ros3_token(self.id, size, token)
                    pytoken = token
                finally:
                    efree(token)

                return pytoken


            @with_phil
            def set_fapl_ros3_token(self, char *token=""):
                """ (BYTES token="")

                Set session token in the file access property list.
                """
                H5Pset_fapl_ros3_token(self.id, token)


    @with_phil
    def set_fapl_log(self, char* logfile, unsigned int flags, size_t buf_size):
        """(STRING logfile, UINT flags, UINT buf_size)

        Enable the use of the logging driver.  See the HDF5 documentation
        for details.  Flag constants are stored in module h5fd.
        """
        H5Pset_fapl_log(self.id, logfile, flags, buf_size)


    @with_phil
    def set_fapl_sec2(self):
        """()

        Select the "section-2" driver (h5fd.SEC2).
        """
        H5Pset_fapl_sec2(self.id)

    if DIRECT_VFD:
        @with_phil
        def set_fapl_direct(self, size_t alignment=0, size_t block_size=0, size_t cbuf_size=0):
            """(size_t alignment, size_t block_size, size_t cbuf_size)

            Select the "direct" driver (h5fd.DIRECT).

            Parameters:
                hid_t fapl_id       IN: File access property list identifier
                size_t alignment    IN: Required memory alignment boundary
                size_t block_size   IN: File system block size
                size_t cbuf_size    IN: Copy buffer size

            Properties with value of 0 indicate that the HDF5 library should
            choose the value.
            """
            H5Pset_fapl_direct(self.id, alignment, block_size, cbuf_size)

        @with_phil
        def get_fapl_direct(self):
            """ () => (alignment, block_size, cbuf_size)

            Retrieve the DIRECT VFD config
            """
            cdef size_t alignment
            cdef size_t block_size
            cdef size_t cbuf_size

            H5Pget_fapl_direct(self.id, &alignment, &block_size, &cbuf_size)
            return alignment, block_size, cbuf_size


    @with_phil
    def set_fapl_stdio(self):
        """()

        Select the "stdio" driver (h5fd.STDIO)
        """
        H5Pset_fapl_stdio(self.id)

    @with_phil
    def set_fapl_split(self, const char* meta_ext="-m.h5", PropID meta_plist_id=None, const char* raw_ext="-r.h5", PropID raw_plist_id=None):
        """()

        Select the "split" driver (h5fd.SPLIT)
        """
        H5Pset_fapl_split(self.id, meta_ext, pdefault(meta_plist_id), raw_ext, pdefault(raw_plist_id))


    @with_phil
    def set_driver(self, hid_t driver_id):
        """(INT driver_id)

        Sets the file driver identifier for this file access or data
        transfer property list.
        """
        return H5Pset_driver(self.id, driver_id, NULL)


    @with_phil
    def set_fileobj_driver(self, hid_t driver_id, object fileobj):
        """(INT driver_id, OBJECT fileobj)

        Select the "fileobj" file driver (h5py-specific).
        """
        return H5Pset_driver(self.id, driver_id, <PyObject*>fileobj)


    @with_phil
    def get_driver(self):
        """() => INT driver code

        Return an integer identifier for the driver used by this list.
        Although HDF5 implements these as full-fledged objects, they are
        treated as integers by Python.  Built-in drivers identifiers are
        listed in module h5fd; they are:

        - h5fd.CORE
        - h5fd.FAMILY
        - h5fd.LOG
        - h5fd.MPIO
        - h5fd.MULTI
        - h5fd.SEC2
        - h5fd.DIRECT  (if available)
        - h5fd.STDIO
        - h5fd.ROS3    (if available)
        """
        return H5Pget_driver(self.id)


    @with_phil
    def set_cache(self, int mdc, int rdcc, size_t rdcc_nbytes, double rdcc_w0):
        """(INT mdc, INT rdcc, UINT rdcc_nbytes, DOUBLE rdcc_w0)

        Set the metadata (mdc) and raw data chunk (rdcc) cache properties.
        See the HDF5 docs for a full explanation.
        """
        H5Pset_cache(self.id, mdc, rdcc, rdcc_nbytes, rdcc_w0)


    @with_phil
    def get_cache(self):
        """() => TUPLE cache info

        Get the metadata and raw data chunk cache settings.  See the HDF5
        docs for element definitions.  Return is a 4-tuple with entries:

        0. INT mdc:              Number of metadata objects
        1. INT rdcc:             Number of raw data chunks
        2. UINT rdcc_nbytes:     Size of raw data cache
        3. DOUBLE rdcc_w0:       Preemption policy for data cache.
        """
        cdef int mdc
        cdef size_t rdcc, rdcc_nbytes
        cdef double w0

        H5Pget_cache(self.id, &mdc, &rdcc, &rdcc_nbytes, &w0)
        return (mdc, rdcc, rdcc_nbytes, w0)


    @with_phil
    def set_sieve_buf_size(self, size_t size):
        """ (UINT size)

        Set the maximum size of the data sieve buffer (in bytes).  This
        buffer can improve I/O performance for hyperslab I/O, by combining
        reads and writes into blocks of the given size.  The default is 64k.
        """
        H5Pset_sieve_buf_size(self.id, size)


    @with_phil
    def get_sieve_buf_size(self):
        """ () => UINT size

        Get the current maximum size of the data sieve buffer (in bytes).
        """
        cdef size_t size
        H5Pget_sieve_buf_size(self.id, &size)
        return size


    @with_phil
    def set_libver_bounds(self, int low, int high):
        """ (INT low, INT high)

        Set the compatibility level for file format. Legal values are:

        - h5f.LIBVER_EARLIEST
        - h5f.LIBVER_V18
        - h5f.LIBVER_V110
        - h5f.LIBVER_V112 (HDF5 1.11.4 or later)
        - h5f.LIBVER_V114 (HDF5 1.13.0 or later)
        - h5f.LIBVER_LATEST
        """
        H5Pset_libver_bounds(self.id, low, high)

    @with_phil
    def set_meta_block_size(self, size_t size):
        """ (UINT size)

        Set the current minimum size, in bytes, of new metadata block allocations.
        """
        H5Pset_meta_block_size(self.id, size)

    @with_phil
    def get_meta_block_size(self):
        """ () => UINT size

        Get the current minimum size, in bytes, of new metadata block allocations.
        """
        cdef hsize_t size
        H5Pget_meta_block_size(self.id, &size)
        return size

    @with_phil
    def get_libver_bounds(self):
        """ () => (INT low, INT high)

        Get the compatibility level for file format. Returned values are from:

        - h5f.LIBVER_EARLIEST
        - h5f.LIBVER_V18
        - h5f.LIBVER_V110
        - h5f.LIBVER_V112 (HDF5 1.11.4 or later)
        - h5f.LIBVER_V114 (HDF5 1.13.0 or later)
        - h5f.LIBVER_LATEST
        """
        cdef H5F_libver_t low
        cdef H5F_libver_t high
        H5Pget_libver_bounds(self.id, &low, &high)

        return (low, high)
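
    # Hedged usage sketch (not part of the library code): require at least the
    # HDF5 1.8 file format for objects created through this property list.
    #
    #     >>> from h5py import h5p, h5f
    #     >>> fapl = h5p.create(h5p.FILE_ACCESS)
    #     >>> fapl.set_libver_bounds(h5f.LIBVER_V18, h5f.LIBVER_LATEST)
    #     >>> low, high = fapl.get_libver_bounds()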

    IF MPI:
        @with_phil
        def set_fapl_mpio(self, comm, info):
            """ (Comm comm, Info info)

            Set MPI-I/O Parallel HDF5 driver.

            Comm: An mpi4py.MPI.Comm instance
            Info: An mpi4py.MPI.Info instance
            """
            from mpi4py.MPI import Comm, Info, _handleof
            assert isinstance(comm, Comm)
            assert isinstance(info, Info)
            cdef Py_uintptr_t _comm = _handleof(comm)
            cdef Py_uintptr_t _info = _handleof(info)
            H5Pset_fapl_mpio(self.id, _comm, _info)

        @with_phil
        def get_fapl_mpio(self):
            """ () => (mpi4py.MPI.Comm, mpi4py.MPI.Info)

            Determine mpio driver MPI information.

            0. The mpi4py.MPI.Comm Communicator
            1. The mpi4py.MPI.Info object
            """
            cdef MPI_Comm comm
            cdef MPI_Info info
            from mpi4py.MPI import Comm, Info, _addressof

            H5Pget_fapl_mpio(self.id, &comm, &info)

            # TODO: Do we actually need these dup steps? Could we pass the
            # addresses directly to H5Pget_fapl_mpio?
            pycomm = Comm()
            MPI_Comm_dup(comm, <MPI_Comm*>PyLong_AsVoidPtr(_addressof(pycomm)))
            MPI_Comm_free(&comm)

            pyinfo = Info()
            MPI_Info_dup(info, <MPI_Info*>PyLong_AsVoidPtr(_addressof(pyinfo)))
            MPI_Info_free(&info)

            return (pycomm, pyinfo)


        @with_phil
        def set_fapl_mpiposix(self, comm, bint use_gpfs_hints=0):
            """ Obsolete.
            """
            raise RuntimeError("MPI-POSIX driver is broken; removed in h5py 2.3.1")


    @with_phil
    def get_mdc_config(self):
        """() => CacheConfig
        Returns an object that stores all the information about the meta-data cache
        configuration
        """

        cdef CacheConfig config = CacheConfig()

        H5Pget_mdc_config(self.id, &config.cache_config)

        return config


    @with_phil
    def set_mdc_config(self, CacheConfig config not None):
        """(CacheConfig) => None
        Returns an object that stores all the information about the meta-data cache
        configuration
        """
        H5Pset_mdc_config(self.id, &config.cache_config)

    def get_alignment(self):
        """
        Retrieves the current settings for alignment properties from a file access property list.
        """
        cdef hsize_t threshold, alignment
        H5Pget_alignment(self.id, &threshold, &alignment)

        return threshold, alignment

    def set_alignment(self, threshold, alignment):
        """
        Sets alignment properties of a file access property list.
        """
        H5Pset_alignment(self.id, threshold, alignment)

    @with_phil
    def set_file_image(self, image):
        """
        Copy a file image into the property list. Passing None releases
        any image currently loaded. The parameter image must either be
        None or support the buffer protocol.
        """

        cdef Py_buffer buf

        if image is None:
            H5Pset_file_image(self.id, NULL, 0)
            return

        if not PyObject_CheckBuffer(image):
            raise TypeError("image must support the buffer protocol")

        PyObject_GetBuffer(image, &buf, PyBUF_SIMPLE)

        try:
            H5Pset_file_image(self.id, buf.buf, buf.len)
        finally:
            PyBuffer_Release(&buf)
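
    # Hedged usage sketch (not part of the library code): open an HDF5 file
    # entirely from an in-memory bytes image by combining the CORE driver with
    # a file image; the name passed to h5f.open then serves as an identifier.
    #
    #     >>> from h5py import h5p, h5f
    #     >>> with open("existing.h5", "rb") as f:
    #     ...     image = f.read()
    #     >>> fapl = h5p.create(h5p.FILE_ACCESS)
    #     >>> fapl.set_fapl_core(backing_store=0)
    #     >>> fapl.set_file_image(image)
    #     >>> fid = h5f.open(b"existing.h5", h5f.ACC_RDONLY, fapl=fapl)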

    @with_phil
    def set_page_buffer_size(self, size_t buf_size, unsigned int min_meta_per=0,
                             unsigned int min_raw_per=0):
        """ (LONG buf_size, UINT min_meta_per, UINT min_raw_per)

        Set the maximum size in bytes of the page buffer. The default value is
        zero, meaning that page buffering is disabled. When a non-zero page
        buffer size is set, the HDF5 library enables page buffering provided
        that the size is greater than or equal to a single page size and a
        paged file space strategy was set at file creation.

        The function also allows setting the criteria for metadata and raw data
        page eviction from the buffer. The default values for both are zero.
        """
        H5Pset_page_buffer_size(self.id, buf_size, min_meta_per, min_raw_per)

    @with_phil
    def get_page_buffer_size(self):
        """ () -> (LONG buf_size, UINT min_meta_per, UINT min_raw_per)

        Retrieve the maximum size of the page buffer and the minimum
        percentage criteria for metadata and raw data page eviction.
        """
        cdef size_t buf_size
        cdef unsigned int min_meta_per, min_raw_per
        H5Pget_page_buffer_size(self.id, &buf_size, &min_meta_per, &min_raw_per)
        return (buf_size, min_meta_per, min_raw_per)

    IF HDF5_VERSION >= (1, 12, 1) or (HDF5_VERSION[:2] == (1, 10) and HDF5_VERSION[2] >= 7):

        @with_phil
        def get_file_locking(self):
            """ () => (BOOL, BOOL)

            Return file locking information as a 2-tuple of boolean:
            (use_file_locking, ignore_when_disabled)
            """
            cdef hbool_t use_file_locking = 0
            cdef hbool_t ignore_when_disabled = 0

            H5Pget_file_locking(self.id, &use_file_locking, &ignore_when_disabled)
            return use_file_locking, ignore_when_disabled

        @with_phil
        def set_file_locking(self, bint use_file_locking, bint ignore_when_disabled):
            """ (BOOL use_file_locking, BOOL ignore_when_disabled)

            Set HDF5 file locking behavior.
            Warning: This setting is overridden by the HDF5_USE_FILE_LOCKING environment variable.
            """
            H5Pset_file_locking(
                self.id, use_file_locking, ignore_when_disabled)


# Link creation
cdef class PropLCID(PropCreateID):

    """ Link creation property list """

    @with_phil
    def set_char_encoding(self, int encoding):
        """ (INT encoding)

        Set the character encoding for link names.  Legal values are:

        - h5t.CSET_ASCII
        - h5t.CSET_UTF8
        """
        H5Pset_char_encoding(self.id, encoding)


    @with_phil
    def get_char_encoding(self):
        """ () => INT encoding

        Get the character encoding for link names.  Legal values are:

        - h5t.CSET_ASCII
        - h5t.CSET_UTF8
        """
        cdef H5T_cset_t encoding
        H5Pget_char_encoding(self.id, &encoding)
        return encoding


    @with_phil
    def set_create_intermediate_group(self, bint create):
        """(BOOL create)

        Set whether missing intermediate groups are automatically created.
        """
        H5Pset_create_intermediate_group(self.id, create)


    @with_phil
    def get_create_intermediate_group(self):
        """() => BOOL

        Determine if missing intermediate groups are automatically created.
        """
        cdef unsigned int create
        H5Pget_create_intermediate_group(self.id, &create)
        return create
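
    # Hedged usage sketch (not part of the library code): create missing
    # intermediate groups automatically; ``fid`` stands for an open FileID.
    #
    #     >>> from h5py import h5p, h5g
    #     >>> lcpl = h5p.create(h5p.LINK_CREATE)
    #     >>> lcpl.set_create_intermediate_group(True)
    #     >>> gid = h5g.create(fid, b"/a/b/c", lcpl=lcpl)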

# Link access
cdef class PropLAID(PropInstanceID):

    """ Link access property list """

    def __cinit__(self, *args):
        self._buf = NULL

    def __dealloc__(self):
        efree(self._buf)


    @with_phil
    def set_nlinks(self, size_t nlinks):
        """(UINT nlinks)

        Set the maximum traversal depth for soft links
        """
        H5Pset_nlinks(self.id, nlinks)


    @with_phil
    def get_nlinks(self):
        """() => UINT

        Get the maximum traversal depth for soft links
        """
        cdef size_t nlinks
        H5Pget_nlinks(self.id, &nlinks)
        return nlinks


    @with_phil
    def set_elink_prefix(self, char* prefix):
        """(STRING prefix)

        Set the external link prefix.
        """
        cdef size_t size

        # HDF5 requires that we hang on to this buffer
        efree(self._buf)
        size = strlen(prefix)
        self._buf = <char*>emalloc(size+1)
        strcpy(self._buf, prefix)

        H5Pset_elink_prefix(self.id, self._buf)


    @with_phil
    def get_elink_prefix(self):
        """() => STRING prefix

        Get the external link prefix
        """
        cdef char* buf = NULL
        cdef ssize_t size

        size = H5Pget_elink_prefix(self.id, NULL, 0)
        buf = <char*>emalloc(size+1)
        buf[0] = 0
        try:
            H5Pget_elink_prefix(self.id, buf, size+1)
            pstr = buf
        finally:
            efree(buf)

        return pstr


    @with_phil
    def set_elink_fapl(self, PropID fapl not None):
        """ (PropFAID fapl)

        Set the file access property list used when opening external files.
        """
        H5Pset_elink_fapl(self.id, fapl.id)


    @with_phil
    def get_elink_fapl(self):
        """ () => PropFAID fapl

        Get the file access property list used when opening external files.
        """
        cdef hid_t fid
        fid = H5Pget_elink_fapl(self.id)
        if H5Iget_ref(fid) > 1:
            H5Idec_ref(fid)
        return propwrap(fid)


    @with_phil
    def set_elink_acc_flags(self, unsigned int flags):
        """ (UNIT flags)

        Sets the external link traversal file access flag in a link access property list.
        """
        H5Pset_elink_acc_flags(self.id, flags)


    @with_phil
    def get_elink_acc_flags(self):
        """() => UINT

        Retrieves the external link traversal file access flag from the specified link access property list.
        """
        cdef unsigned int flags
        H5Pget_elink_acc_flags(self.id, &flags)
        return flags


# Datatype creation
cdef class PropTCID(PropOCID):
    """ Datatype creation property list

    No methods yet.
    """

    pass

# Group creation
cdef class PropGCID(PropOCID):
    """ Group creation property list """

    @with_phil
    def set_link_creation_order(self, unsigned int flags):
        """ (UINT flags)

        Set tracking and indexing of creation order for links added to this group

        flags -- h5p.CRT_ORDER_TRACKED, h5p.CRT_ORDER_INDEXED
        """
        H5Pset_link_creation_order(self.id, flags)


    @with_phil
    def get_link_creation_order(self):
        """ () -> UINT flags

        Get tracking and indexing of creation order for links added to this group
        """
        cdef unsigned int flags
        H5Pget_link_creation_order(self.id, &flags)
        return flags


# Object creation property list
cdef class PropOCID(PropCreateID):
    """ Object creation property list

    This seems to be a super class for dataset creation property list
    and group creation property list.

    The documentation is somewhat hazy
    """

    @with_phil
    def set_attr_creation_order(self, unsigned int flags):
        """ (UINT flags)

        Set tracking and indexing of creation order for object attributes

        flags -- h5p.CRT_ORDER_TRACKED, h5p.CRT_ORDER_INDEXED
        """
        H5Pset_attr_creation_order(self.id, flags)


    @with_phil
    def get_attr_creation_order(self):
        """ () -> UINT flags

        Get tracking and indexing of creation order for object attributes
        """
        cdef unsigned int flags
        H5Pget_attr_creation_order(self.id, &flags)
        return flags

    @with_phil
    def set_attr_phase_change(self, max_compact=8, min_dense=6):
        """ (UINT max_compact, UINT min_dense)

        Set threshold value for attribute storage on an object

        max_compact -- maximum number of attributes to be stored in compact
        storage (default: 8); must be greater than or equal to min_dense
        min_dense -- minimum number of attributes to be stored in dense
        storage (default: 6)

        """
        H5Pset_attr_phase_change(self.id, max_compact, min_dense)

    @with_phil
    def get_attr_phase_change(self):
        """ () -> (max_compact, min_dense)

        Retrieves threshold values for attribute storage on an object.

        """
        cdef unsigned int max_compact
        cdef unsigned int min_dense
        H5Pget_attr_phase_change(self.id, &max_compact, &min_dense)
        return (max_compact, min_dense)

    @with_phil
    def set_obj_track_times(self, track_times):
        """Sets the recording of times associated with an object."""
        H5Pset_obj_track_times(self.id, track_times)


    @with_phil
    def get_obj_track_times(self):
        """
        Determines whether times associated with an object are being recorded.
        """

        cdef hbool_t track_times

        H5Pget_obj_track_times(self.id, &track_times)

        return track_times


# Dataset access
cdef class PropDAID(PropInstanceID):

    """ Dataset access property list """

    def __cinit__(self, *args):
        self._efile_prefix_buf = NULL
        self._virtual_prefix_buf = NULL

    def __dealloc__(self):
        efree(self._efile_prefix_buf)
        efree(self._virtual_prefix_buf)

    @with_phil
    def set_chunk_cache(self, size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0):
        """(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)

        Sets the raw data chunk cache parameters.
        """
        H5Pset_chunk_cache(self.id, rdcc_nslots, rdcc_nbytes, rdcc_w0)


    @with_phil
    def get_chunk_cache(self):
        """() => TUPLE chunk cache info

        Get the metadata and raw data chunk cache settings.  See the HDF5
        docs for element definitions.  Return is a 3-tuple with entries:

        0. size_t rdcc_nslots: Number of chunk slots in the raw data chunk cache hash table.
        1. size_t rdcc_nbytes: Total size of the raw data chunk cache, in bytes.
        2. DOUBLE rdcc_w0:     Preemption policy.
        """
        cdef size_t rdcc_nslots
        cdef size_t rdcc_nbytes
        cdef double rdcc_w0

        H5Pget_chunk_cache(self.id, &rdcc_nslots, &rdcc_nbytes, &rdcc_w0)
        return (rdcc_nslots, rdcc_nbytes, rdcc_w0)
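
    # Hedged usage sketch (not part of the library code): enlarge the
    # per-dataset chunk cache to 16 MiB before opening a dataset.
    #
    #     >>> from h5py import h5p
    #     >>> dapl = h5p.create(h5p.DATASET_ACCESS)
    #     >>> dapl.set_chunk_cache(12421, 16 * 1024**2, 0.75)
    #     >>> dapl.get_chunk_cache()
    #     (12421, 16777216, 0.75)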

    @with_phil
    def get_efile_prefix(self):
        """() => STR

        Get the filesystem path prefix configured for accessing external
        datasets.
        """
        cdef char* cprefix = NULL
        cdef ssize_t size

        size = H5Pget_efile_prefix(self.id, NULL, 0)
        cprefix = <char*>emalloc(size+1)
        cprefix[0] = 0
        try:
            # TODO check return size
            H5Pget_efile_prefix(self.id, cprefix, size+1)
            prefix = bytes(cprefix)
        finally:
            efree(cprefix)

        return prefix

    @with_phil
    def set_efile_prefix(self, char* prefix):
        """(STR prefix)

        Set a filesystem path prefix for looking up external datasets.
        This is prepended to all filenames specified in the external dataset.
        """
        cdef size_t size

        # HDF5 requires that we hang on to this buffer
        efree(self._efile_prefix_buf)
        size = strlen(prefix)
        self._efile_prefix_buf = <char*>emalloc(size+1)
        strcpy(self._efile_prefix_buf, prefix)

        H5Pset_efile_prefix(self.id, self._efile_prefix_buf)

    # === Virtual dataset functions ===========================================
    @with_phil
    def set_virtual_view(self, unsigned int view):
        """(UINT view)

        Set the view of the virtual dataset (VDS) to include or exclude
        missing mapped elements.

        If view is set to h5d.VDS_FIRST_MISSING, the view includes all data
        before the first missing mapped data. This setting provides a view
        containing only the continuous data starting with the dataset’s
        first data element. Any break in continuity terminates the view.

        If view is set to h5d.VDS_LAST_AVAILABLE, the view includes all
        available mapped data.

        Missing mapped data is filled with the fill value set in the
        virtual dataset's creation property list.
        """
        H5Pset_virtual_view(self.id, view)

    @with_phil
    def get_virtual_view(self):
        """() => UINT view

        Retrieve the view of the virtual dataset.

        Valid values are:

        - h5d.VDS_FIRST_MISSING
        - h5d.VDS_LAST_AVAILABLE
        """
        cdef H5D_vds_view_t view
        H5Pget_virtual_view(self.id, &view)
        return view

    @with_phil
    def set_virtual_printf_gap(self, hsize_t gap_size=0):
        """(LONG gap_size=0)

        Set the maximum number of missing source files and/or datasets
        with the printf-style names when getting the extent of an unlimited
        virtual dataset.

        Instruct the library to stop looking for the mapped data stored in
        the files and/or datasets with the printf-style names after not
        finding gap_size files and/or datasets. The found source files and
        datasets will determine the extent of the unlimited virtual dataset
        with the printf-style mappings. Default value: 0.
        """
        H5Pset_virtual_printf_gap(self.id, gap_size)

    @with_phil
    def get_virtual_printf_gap(self):
        """() => LONG gap_size

        Return the maximum number of missing source files and/or datasets
        with the printf-style names when getting the extent for an
        unlimited virtual dataset.
        """
        cdef hsize_t gap_size
        H5Pget_virtual_printf_gap(self.id, &gap_size)
        return gap_size

    @with_phil
    def get_virtual_prefix(self):
        """() => STR

        Get the filesystem path prefix configured for accessing virtual
        datasets.
        """
        cdef char* cprefix = NULL
        cdef ssize_t size

        size = H5Pget_virtual_prefix(self.id, NULL, 0)
        cprefix = <char*>emalloc(size+1)
        cprefix[0] = 0
        try:
            # TODO check return size
            H5Pget_virtual_prefix(self.id, cprefix, size+1)
            prefix = bytes(cprefix)
        finally:
            efree(cprefix)

        return prefix

    @with_phil
    def set_virtual_prefix(self, char* prefix):
        """(STR prefix)

        Set a filesystem path prefix for looking up virtual datasets.
        This is prepended to all filenames specified in the virtual dataset.
        """
        cdef size_t size

        # HDF5 requires that we hang on to this buffer
        efree(self._virtual_prefix_buf)
        size = strlen(prefix)
        self._virtual_prefix_buf = <char*>emalloc(size+1)
        strcpy(self._virtual_prefix_buf, prefix)

        H5Pset_virtual_prefix(self.id, self._virtual_prefix_buf)

cdef class PropDXID(PropInstanceID):

    """ Data transfer property list """

    IF MPI:
        def set_dxpl_mpio(self, int xfer_mode):
            """ Set the transfer mode for MPI I/O.
            Must be one of:
            - h5fd.MPIO_INDEPENDENT (default)
            - h5fd.MPIO_COLLECTIVE
            """
            H5Pset_dxpl_mpio(self.id, xfer_mode)

        def get_dxpl_mpio(self):
            """ Get the current transfer mode for MPI I/O.
            Will be one of:
            - h5fd.MPIO_INDEPENDENT (default)
            - h5fd.MPIO_COLLECTIVE
            """
            cdef H5FD_mpio_xfer_t mode
            H5Pget_dxpl_mpio(self.id, &mode)
            return mode
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/h5pl.pxd0000644000175000017500000000114014507227212015631 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

include "config.pxi"
from .defs cimport *

cpdef append(const char* search_path)
cpdef prepend(const char* search_path)
cpdef replace(const char* search_path, unsigned int index)
cpdef insert(const char* search_path, unsigned int index)
cpdef remove(unsigned int index)
cpdef get(unsigned int index)
cpdef size()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5pl.pyx0000644000175000017500000000404414675110407015667 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Provides access to the low-level HDF5 "H5PL" plugins interface.

    These functions are probably not thread safe.
"""

include "config.pxi"
from .utils cimport emalloc, efree

# === C API ===================================================================

cpdef append(const char* search_path):
    """(STRING search_path)

    Add a directory to the end of the plugin search path.
    """
    H5PLappend(search_path)

cpdef prepend(const char* search_path):
    """(STRING search_path)

    Add a directory to the start of the plugin search path.
    """
    H5PLprepend(search_path)

cpdef replace(const char* search_path, unsigned int index):
    """(STRING search_path, UINT index)

    Replace the directory at the given index in the plugin search path.
    """
    H5PLreplace(search_path, index)

cpdef insert(const char* search_path, unsigned int index):
    """(STRING search_path, UINT index)

    Insert a directory at the given index in the plugin search path.
    """
    H5PLinsert(search_path, index)

cpdef remove(unsigned int index):
    """(UINT index)

    Remove the specified entry from the plugin search path.
    """
    H5PLremove(index)

cpdef get(unsigned int index):
    """(UINT index) => STRING

    Get the directory path at the given index (starting from 0) in the
    plugin search path. Returns a Python bytes object.
    """
    cdef size_t n
    cdef char* buf = NULL

    n = H5PLget(index, NULL, 0)
    buf = <char*>emalloc(sizeof(char)*(n + 1))
    try:
        H5PLget(index, buf, n + 1)
        return PyBytes_FromStringAndSize(buf, n)
    finally:
        efree(buf)

cpdef size():
    """() => UINT

    Get the number of directories currently in the plugin search path.
    """
    cdef unsigned int n = 0
    H5PLsize(&n)
    return n
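
# Hedged usage sketch (not part of the library code): the search-path
# functions take bytes paths, and get() returns bytes.
#
#     >>> from h5py import h5pl
#     >>> h5pl.append(b"/opt/hdf5/plugins")
#     >>> h5pl.get(h5pl.size() - 1)
#     b'/opt/hdf5/plugins'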
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/h5py_warnings.py0000644000175000017500000000101314350630273017413 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    This module contains the warning classes for h5py. These classes are part of
    the public API of h5py, and should be imported from this module.
"""


class H5pyWarning(UserWarning):
    pass


class H5pyDeprecationWarning(H5pyWarning):
    pass
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5r.pxd0000644000175000017500000000146614045746670015505 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

include "config.pxi"
from .defs cimport *

cdef extern from "hdf5.h":

  ctypedef haddr_t hobj_ref_t
IF HDF5_VERSION >= (1, 12, 0):
  cdef struct hdset_reg_ref_t:
    uint8_t data[12] # sizeof(haddr_t) + 4 == sizeof(signed long long) + 4
ELSE:
  ctypedef unsigned char hdset_reg_ref_t[12]

cdef union ref_u:
    hobj_ref_t         obj_ref
    hdset_reg_ref_t    reg_ref

cdef class Reference:

    cdef ref_u ref
    cdef readonly int typecode
    cdef readonly size_t typesize

cdef class RegionReference(Reference):
    pass
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696522337.0
h5py-3.13.0/h5py/h5r.pyx0000644000175000017500000001355614507560141015523 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    H5R API for object and region references.
"""

# Cython C-level imports
from ._objects cimport ObjectID, pdefault
from .h5p cimport PropID

#Python level imports
from ._objects import phil, with_phil


# === Public constants and data structures ====================================

OBJECT = H5R_OBJECT
DATASET_REGION = H5R_DATASET_REGION


# === Reference API ===========================================================

@with_phil
def create(ObjectID loc not None, char* name, int ref_type, ObjectID space=None):
    """(ObjectID loc, STRING name, INT ref_type, SpaceID space=None)
    => ReferenceObject ref

    Create a new reference. The value of ref_type determines the kind
    of reference created:

    OBJECT
        Reference to an object in an HDF5 file.  Parameters "loc"
        and "name" identify the object; "space" is unused.

    DATASET_REGION
        Reference to a dataset region.  Parameters "loc" and
        "name" identify the dataset; the selection on "space"
        identifies the region.
    """
    cdef hid_t space_id
    cdef Reference ref
    if ref_type == H5R_OBJECT:
        ref = Reference()
    elif ref_type == H5R_DATASET_REGION:
        if space is None:   # work around segfault in HDF5
            raise ValueError("Dataspace required for region reference")
        ref = RegionReference()
    else:
        raise ValueError("Unknown reference typecode")

    if space is None:
        space_id = -1
    else:
        space_id = space.id

    H5Rcreate(&ref.ref, loc.id, name, ref_type, space_id)

    return ref
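
# Illustrative sketch of how object and region references might be created
# through this function.  The file name "demo.h5" and dataset name "data"
# below are hypothetical; DatasetID.get_space() comes from the h5d module.
#
#     >>> import h5py
#     >>> from h5py import h5r
#     >>> f = h5py.File("demo.h5", "w")
#     >>> dset = f.create_dataset("data", (10, 10), dtype="f8")
#     >>> obj_ref = h5r.create(f.id, b"data", h5r.OBJECT)
#     >>> space = dset.id.get_space()
#     >>> space.select_hyperslab((0, 0), (2, 2))
#     >>> reg_ref = h5r.create(f.id, b"data", h5r.DATASET_REGION, space)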


@with_phil
def dereference(Reference ref not None, ObjectID id not None, PropID oapl=None):
    """(Reference ref, ObjectID id) => ObjectID or None

    Open the object pointed to by the reference and return its
    identifier.  The file identifier (or the identifier for any object
    in the file) must also be provided.  Returns None if the reference
    is zero-filled.

    The reference may be either Reference or RegionReference.
    """
    from . import h5i
    if not ref:
        return None
    return h5i.wrap_identifier(H5Rdereference(
        id.id, pdefault(oapl), ref.typecode, &ref.ref
    ))


@with_phil
def get_region(RegionReference ref not None, ObjectID id not None):
    """(Reference ref, ObjectID id) => SpaceID or None

    Retrieve the dataspace selection pointed to by the reference.
    Returns a copy of the dataset's dataspace, with the appropriate
    elements selected.  The file identifier or the identifier of any
    object in the file (including the dataset itself) must also be
    provided.

    The reference object must be a RegionReference.  If it is zero-filled,
    returns None.
    """
    from . import h5s
    if ref.typecode != H5R_DATASET_REGION or not ref:
        return None
    return h5s.SpaceID(H5Rget_region(id.id, ref.typecode, &ref.ref))


@with_phil
def get_obj_type(Reference ref not None, ObjectID id not None):
    """(Reference ref, ObjectID id) => INT obj_code or None

    Determine what type of object the reference points to.  The
    reference may be a Reference or RegionReference.  The file
    identifier or the identifier of any object in the file must also
    be provided.

    The return value is one of:

    - h5o.TYPE_GROUP
    - h5o.TYPE_DATASET
    - h5o.TYPE_NAMED_DATATYPE

    If the reference is zero-filled, returns None.
    """
    cdef H5O_type_t obj_type
    if not ref:
        return None
    H5Rget_obj_type(id.id, ref.typecode, &ref.ref, &obj_type)
    return obj_type


@with_phil
def get_name(Reference ref not None, ObjectID loc not None):
    """(Reference ref, ObjectID loc) => STRING name

    Determine the name of the object pointed to by this reference.
    """
    cdef ssize_t namesize = 0
    cdef char* namebuf = NULL

    namesize = H5Rget_name(loc.id, ref.typecode, &ref.ref, NULL, 0)
    if namesize > 0:
        namebuf = <char*>malloc(namesize+1)
        try:
            namesize = H5Rget_name(loc.id, ref.typecode, &ref.ref, namebuf, namesize+1)
            return namebuf
        finally:
            free(namebuf)
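
# Illustrative sketch: resolving a reference back to a name and an object,
# continuing the hypothetical "demo.h5" example shown after create() above;
# the output is indicative.
#
#     >>> from h5py import h5r
#     >>> h5r.get_name(obj_ref, f.id)
#     b'/data'
#     >>> dsid = h5r.dereference(obj_ref, f.id)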


cdef class Reference:

    """
        Opaque representation of an HDF5 reference.

        Objects of this class are created exclusively by the library and
        cannot be modified.  The read-only attribute "typecode" determines
        whether the reference is to an object in an HDF5 file (OBJECT)
        or a dataset region (DATASET_REGION).

        The object's truth value indicates whether it contains a nonzero
        reference.  This does not guarantee that it is valid, but is useful
        for rejecting "background" elements in a dataset.

        Defined attributes:
          cdef ref_u ref
          cdef readonly int typecode
          cdef readonly size_t typesize
    """

    def __cinit__(self, *args, **kwds):
        self.typecode = H5R_OBJECT
        self.typesize = sizeof(hobj_ref_t)

    def __nonzero__(self):
        cdef int i
        for i in range(self.typesize):
            if (<unsigned char*>&self.ref)[i] != 0: return True
        return False

    def __repr__(self):
        return "" % ("" if self else " (null)")

cdef class RegionReference(Reference):

    """
        Opaque representation of an HDF5 region reference.

        This is a subclass of Reference which exists mainly for programming
        convenience.
    """

    def __cinit__(self, *args, **kwds):
        self.typecode = H5R_DATASET_REGION
        self.typesize = sizeof(hdset_reg_ref_t)

    def __repr__(self):
        return "" % ("" if self else " (null")
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1672911803.0
h5py-3.13.0/h5py/h5s.pxd0000644000175000017500000000060514355515673015501 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cdef class SpaceID(ObjectID):
    pass
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739891098.0
h5py-3.13.0/h5py/h5s.pyx0000644000175000017500000004344714755120632015530 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Low-level interface to the "H5S" family of data-space functions.
"""

include "config.pxi"

# C-level imports
from .utils cimport  require_tuple, convert_dims, convert_tuple, \
                    emalloc, efree, create_numpy_hsize, create_hsize_array
cimport numpy as cnp

#Python level imports
from ._objects import phil, with_phil

cdef object lockid(hid_t id_):
    cdef SpaceID space
    space = SpaceID(id_)
    space.locked = 1
    return space

# === Public constants and data structures ====================================

#enum H5S_seloper_t:
SELECT_NOOP     = H5S_SELECT_NOOP
SELECT_SET      = H5S_SELECT_SET
SELECT_OR       = H5S_SELECT_OR
SELECT_AND      = H5S_SELECT_AND
SELECT_XOR      = H5S_SELECT_XOR
SELECT_NOTB     = H5S_SELECT_NOTB
SELECT_NOTA     = H5S_SELECT_NOTA
SELECT_APPEND   = H5S_SELECT_APPEND
SELECT_PREPEND  = H5S_SELECT_PREPEND
SELECT_INVALID  = H5S_SELECT_INVALID

ALL = lockid(H5S_ALL)   # This is accepted in lieu of an actual identifier
                        # in functions like H5Dread, so wrap it.
UNLIMITED = H5S_UNLIMITED

#enum H5S_class_t
NO_CLASS = H5S_NO_CLASS
SCALAR   = H5S_SCALAR
SIMPLE   = H5S_SIMPLE
globals()["NULL"] = H5S_NULL  # "NULL" is reserved in Cython

#enum H5S_sel_type
SEL_ERROR       = H5S_SEL_ERROR
SEL_NONE        = H5S_SEL_NONE
SEL_POINTS      = H5S_SEL_POINTS
SEL_HYPERSLABS  = H5S_SEL_HYPERSLABS
SEL_ALL         = H5S_SEL_ALL


# === Basic dataspace operations ==============================================

@with_phil
def create(int class_code):
    """(INT class_code) => SpaceID

    Create a new HDF5 dataspace object, of the given class.
    Legal values are SCALAR and SIMPLE.
    """
    return SpaceID(H5Screate(class_code))


@with_phil
def create_simple(object dims_tpl, object max_dims_tpl=None):
    """(TUPLE dims_tpl, TUPLE max_dims_tpl) => SpaceID

    Create a simple (slab) dataspace from a tuple of dimensions.
    Every element of dims_tpl must be a positive integer.

    You can optionally specify the maximum dataspace size. The
    special value UNLIMITED, as an element of max_dims, indicates
    an unlimited dimension.
    """
    cdef int rank
    cdef hsize_t* dims = NULL
    cdef hsize_t* max_dims = NULL

    require_tuple(dims_tpl, 0, -1, b"dims_tpl")
    rank = len(dims_tpl)
    require_tuple(max_dims_tpl, 1, rank, b"max_dims_tpl")

    try:
        dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        convert_tuple(dims_tpl, dims, rank)

        if max_dims_tpl is not None:
            max_dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(max_dims_tpl, max_dims, rank)

        return SpaceID(H5Screate_simple(rank, dims, max_dims))

    finally:
        efree(dims)
        efree(max_dims)
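
# Illustrative sketch: a 2-D simple dataspace whose first axis is resizable;
# the literal shape values are arbitrary.
#
#     >>> from h5py import h5s
#     >>> space = h5s.create_simple((10, 3), (h5s.UNLIMITED, 3))
#     >>> space.shape
#     (10, 3)
#     >>> space.get_simple_extent_ndims()
#     2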


@with_phil
def decode(buf):
    """(STRING buf) => SpaceID

    Unserialize a dataspace.  Bear in mind you can also use the native
    Python pickling machinery to do this.
    """
    cdef char* buf_ = buf
    return SpaceID(H5Sdecode(buf_))


# === H5S class API ===========================================================

cdef class SpaceID(ObjectID):

    """
        Represents a dataspace identifier.

        Properties:

        shape
            Numpy-style shape tuple with dimensions.

        * Hashable: No
        * Equality: Unimplemented
    """

    @property
    def shape(self):
        """ Numpy-style shape tuple representing dimensions.  () == scalar.
        """
        with phil:
            return self.get_simple_extent_dims()


    @with_phil
    def copy(self):
        """() => SpaceID

        Create a new copy of this dataspace.
        """
        return SpaceID(H5Scopy(self.id))


    @with_phil
    def encode(self):
        """() => STRING

        Serialize a dataspace, including its selection.  Bear in mind you
        can also use the native Python pickling machinery to do this.
        """
        cdef void* buf = NULL
        cdef size_t nalloc = 0

        H5Sencode(self.id, NULL, &nalloc)
        buf = emalloc(nalloc)
        try:
            H5Sencode(self.id, buf, &nalloc)
            pystr = PyBytes_FromStringAndSize(<char*>buf, nalloc)
        finally:
            efree(buf)

        return pystr

    def __reduce__(self):
        with phil:
            return (type(self), (-1,), self.encode())


    def __setstate__(self, state):
        cdef char* buf = state
        with phil:
            self.id = H5Sdecode(buf)
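
    # Illustrative sketch: because __reduce__/__setstate__ round-trip through
    # encode()/H5Sdecode, pickling preserves both the extent and the selection.
    # The shape and selection values below are arbitrary.
    #
    #     >>> import pickle
    #     >>> from h5py import h5s
    #     >>> space = h5s.create_simple((4, 5))
    #     >>> space.select_hyperslab((0, 0), (2, 5))
    #     >>> restored = pickle.loads(pickle.dumps(space))
    #     >>> restored.get_select_npoints()
    #     10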


    # === Simple dataspaces ===================================================

    @with_phil
    def is_simple(self):
        """() => BOOL is_simple

        Determine if an existing dataspace is "simple" (including scalar
        dataspaces). Currently all HDF5 dataspaces are simple.
        """
        return <bint>(H5Sis_simple(self.id))


    @with_phil
    def offset_simple(self, object offset=None):
        """(TUPLE offset=None)

        Set the offset of a dataspace.  The length of the given tuple must
        match the rank of the dataspace. If None is provided (default),
        the offsets on all axes will be set to 0.
        """
        cdef int rank
        cdef int i
        cdef hssize_t *dims = NULL

        try:
            if not H5Sis_simple(self.id):
                raise ValueError("%d is not a simple dataspace" % self.id)

            rank = H5Sget_simple_extent_ndims(self.id)

            require_tuple(offset, 1, rank, b"offset")
            dims = <hssize_t*>emalloc(sizeof(hssize_t)*rank)
            if(offset is not None):
                convert_tuple(offset, dims, rank)
            else:
                # The HDF5 docs say passing in NULL resets the offset to 0.
                # Instead it raises an exception.  Imagine my surprise. We'll
                # do this manually.
                for i in range(rank):
                    dims[i] = 0

            H5Soffset_simple(self.id, dims)

        finally:
            efree(dims)


    @with_phil
    def get_simple_extent_ndims(self):
        """() => INT rank

        Determine the rank of a "simple" (slab) dataspace.
        """
        return H5Sget_simple_extent_ndims(self.id)


    @with_phil
    def get_simple_extent_dims(self, int maxdims=0):
        """(BOOL maxdims=False) => TUPLE shape

        Determine the shape of a "simple" (slab) dataspace.  If "maxdims"
        is True, retrieve the maximum dataspace size instead.
        """
        cdef int rank
        cdef hsize_t* dims = NULL

        if self.get_simple_extent_type() == H5S_NULL:
            return None

        rank = H5Sget_simple_extent_dims(self.id, NULL, NULL)

        dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        try:
            if maxdims:
                H5Sget_simple_extent_dims(self.id, NULL, dims)
            else:
                H5Sget_simple_extent_dims(self.id, dims, NULL)

            return convert_dims(dims, rank)

        finally:
            efree(dims)


    @with_phil
    def get_simple_extent_npoints(self):
        """() => LONG npoints

        Determine the total number of elements in a dataspace.
        """
        return H5Sget_simple_extent_npoints(self.id)


    @with_phil
    def get_simple_extent_type(self):
        """() => INT class_code

        Class code is either SCALAR or SIMPLE.
        """
        return H5Sget_simple_extent_type(self.id)


    # === Extents =============================================================

    @with_phil
    def extent_copy(self, SpaceID source not None):
        """(SpaceID source)

        Replace this dataspace's extent with another's, changing its
        typecode if necessary.
        """
        H5Sextent_copy(self.id, source.id)


    @with_phil
    def set_extent_simple(self, object dims_tpl, object max_dims_tpl=None):
        """(TUPLE dims_tpl, TUPLE max_dims_tpl=None)

        Reset the dataspace extent via a tuple of dimensions.
        Every element of dims_tpl must be a positive integer.

        You can optionally specify the maximum dataspace size. The
        special value UNLIMITED, as an element of max_dims, indicates
        an unlimited dimension.
        """
        cdef int rank
        cdef hsize_t* dims = NULL
        cdef hsize_t* max_dims = NULL

        require_tuple(dims_tpl, 0, -1, b"dims_tpl")
        rank = len(dims_tpl)
        require_tuple(max_dims_tpl, 1, rank, b"max_dims_tpl")

        try:
            dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(dims_tpl, dims, rank)

            if max_dims_tpl is not None:
                max_dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
                convert_tuple(max_dims_tpl, max_dims, rank)

            H5Sset_extent_simple(self.id, rank, dims, max_dims)

        finally:
            efree(dims)
            efree(max_dims)


    @with_phil
    def set_extent_none(self):
        """()

        Remove the dataspace extent; typecode changes to NO_CLASS.
        """
        H5Sset_extent_none(self.id)


    # === General selection operations ========================================

    @with_phil
    def get_select_type(self):
        """ () => INT select_code

            Determine selection type.  Return values are:

            - SEL_NONE
            - SEL_ALL
            - SEL_POINTS
            - SEL_HYPERSLABS
        """
        return H5Sget_select_type(self.id)


    @with_phil
    def get_select_npoints(self):
        """() => LONG npoints

        Determine the total number of points currently selected.
        Works for all selection techniques.
        """
        return H5Sget_select_npoints(self.id)


    @with_phil
    def get_select_bounds(self):
        """() => (TUPLE start, TUPLE end)

        Determine the bounding box which exactly contains
        the current selection.
        """
        cdef int rank
        cdef hsize_t *start = NULL
        cdef hsize_t *end = NULL

        rank = H5Sget_simple_extent_ndims(self.id)

        if H5Sget_select_npoints(self.id) == 0:
            return None

        start = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        end = <hsize_t*>emalloc(sizeof(hsize_t)*rank)

        try:
            H5Sget_select_bounds(self.id, start, end)

            start_tpl = convert_dims(start, rank)
            end_tpl = convert_dims(end, rank)
            return (start_tpl, end_tpl)

        finally:
            efree(start)
            efree(end)

    IF HDF5_VERSION >= (1, 10, 7):
        @with_phil
        def select_shape_same(self, SpaceID space2):
            """(SpaceID space2) => BOOL

            Check if two selections are the same shape. HDF5 may read data
            faster if the source & destination selections are the same shape.
            """
            return H5Sselect_shape_same(self.id, space2.id)

    @with_phil
    def select_all(self):
        """()

        Select all points in the dataspace.
        """
        H5Sselect_all(self.id)


    @with_phil
    def select_none(self):
        """()

        Deselect entire dataspace.
        """
        H5Sselect_none(self.id)


    @with_phil
    def select_valid(self):
        """() => BOOL

        Determine if the current selection falls within
        the dataspace extent.
        """
        return <bint>(H5Sselect_valid(self.id))


    # === Point selection functions ===========================================

    @with_phil
    def get_select_elem_npoints(self):
        """() => LONG npoints

        Determine the number of elements selected in point-selection mode.
        """
        return H5Sget_select_elem_npoints(self.id)


    @with_phil
    def get_select_elem_pointlist(self):
        """() => NDARRAY

        Get a list of all selected elements.  Return is a Numpy array of
        unsigned ints, with shape ``(<npoints>, <space rank>)``.
        """
        cdef hsize_t dims[2]
        cdef cnp.ndarray buf

        dims[0] = H5Sget_select_elem_npoints(self.id)
        dims[1] = H5Sget_simple_extent_ndims(self.id)

        buf = create_numpy_hsize(2, dims)

        H5Sget_select_elem_pointlist(self.id, 0, dims[0], <hsize_t*>buf.data)

        return buf


    @with_phil
    def select_elements(self, object coords, int op=H5S_SELECT_SET):
        """(SEQUENCE coords, INT op=SELECT_SET)

        Select elements by specifying coordinates points.  The argument
        "coords" may be an ndarray or any nested sequence which can be
        converted to an array of uints with the shape::

            (<npoints>, <rank>)

        Examples::

            >>> obj.shape
            (10, 10)
            >>> obj.select_elements([(1,2), (3,4), (5,9)])

        A zero-length selection (i.e. shape ``(0, )``) is not allowed
        by the HDF5 library.
        """
        cdef cnp.ndarray hcoords
        cdef size_t nelements

        # The docs say the selection list should be an hsize_t**, but it seems
        # that HDF5 expects the coordinates to be a static, contiguous
        # array.  We simulate that by creating a contiguous NumPy array of
        # a compatible type and initializing it to the input.

        hcoords = create_hsize_array(coords)

        if hcoords.ndim != 2 or hcoords.shape[1] != H5Sget_simple_extent_ndims(self.id):
            raise ValueError("Coordinate array must have shape (, %d)" % self.get_simple_extent_ndims())

        nelements = hcoords.shape[0]

        H5Sselect_elements(self.id, op, nelements, <hsize_t*>hcoords.data)


    # === Hyperslab selection functions =======================================

    @with_phil
    def get_select_hyper_nblocks(self):
        """() => LONG nblocks

        Get the number of hyperslab blocks currently selected.
        """
        return H5Sget_select_hyper_nblocks(self.id)


    @with_phil
    def get_select_hyper_blocklist(self):
        """() => NDARRAY

        Get the current hyperslab selection.  The returned array has shape::

            (<NBLOCKS>, 2, <RANK>)

        and can be interpreted as a nested sequence::

            [ (corner_coordinate_1, opposite_coordinate_1), ... ]

        with length equal to the total number of blocks.
        """
        cdef hsize_t dims[3]  # 0=nblocks 1=(#2), 2=rank
        cdef cnp.ndarray buf

        dims[0] = H5Sget_select_hyper_nblocks(self.id)
        dims[1] = 2
        dims[2] = H5Sget_simple_extent_ndims(self.id)

        buf = create_numpy_hsize(3, dims)

        H5Sget_select_hyper_blocklist(self.id, 0, dims[0], <hsize_t*>buf.data)

        return buf


    @with_phil
    def select_hyperslab(self, object start, object count, object stride=None,
                         object block=None, int op=H5S_SELECT_SET):
        """(TUPLE start, TUPLE count, TUPLE stride=None, TUPLE block=None,
             INT op=SELECT_SET)

        Select a block region from an existing dataspace.  See the HDF5
        documentation for the meaning of the "block" and "op" keywords.
        """
        cdef int rank
        cdef hsize_t* start_array = NULL
        cdef hsize_t* count_array = NULL
        cdef hsize_t* stride_array = NULL
        cdef hsize_t* block_array = NULL

        # Dataspace rank.  All provided tuples must match this.
        rank = H5Sget_simple_extent_ndims(self.id)

        require_tuple(start, 0, rank, b"start")
        require_tuple(count, 0, rank, b"count")
        require_tuple(stride, 1, rank, b"stride")
        require_tuple(block, 1, rank, b"block")

        try:
            start_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            count_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            convert_tuple(start, start_array, rank)
            convert_tuple(count, count_array, rank)

            if stride is not None:
                stride_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
                convert_tuple(stride, stride_array, rank)
            if block is not None:
                block_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
                convert_tuple(block, block_array, rank)

            H5Sselect_hyperslab(self.id, op, start_array,
                                         stride_array, count_array, block_array)

        finally:
            efree(start_array)
            efree(count_array)
            efree(stride_array)
            efree(block_array)
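
    # Illustrative sketch: selecting every other row of a 10x10 dataspace;
    # the shape and stride values are arbitrary.
    #
    #     >>> from h5py import h5s
    #     >>> space = h5s.create_simple((10, 10))
    #     >>> space.select_hyperslab((0, 0), (5, 10), stride=(2, 1))
    #     >>> space.get_select_npoints()
    #     50
    #     >>> space.is_regular_hyperslab()
    #     True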

    @with_phil
    def is_regular_hyperslab(self):
        """() => BOOL

        Determine whether a hyperslab selection is regular.
        """
        return H5Sis_regular_hyperslab(self.id)

    @with_phil
    def get_regular_hyperslab(self):
        """() => (TUPLE start, TUPLE stride, TUPLE count, TUPLE block)

        Retrieve a regular hyperslab selection.
        """
        cdef int rank
        cdef hsize_t* start_array = NULL
        cdef hsize_t* count_array = NULL
        cdef hsize_t* stride_array = NULL
        cdef hsize_t* block_array = NULL
        cdef list start = []
        cdef list stride = []
        cdef list count = []
        cdef list block = []
        cdef int i

        rank = H5Sget_simple_extent_ndims(self.id)
        try:
            start_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            stride_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            count_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            block_array = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
            H5Sget_regular_hyperslab(self.id, start_array, stride_array,
                                     count_array, block_array)

            for i in range(rank):
                start.append(start_array[i])
                stride.append(stride_array[i])
                count.append(count_array[i])
                block.append(block_array[i])

            return (tuple(start), tuple(stride), tuple(count), tuple(block))

        finally:
            efree(start_array)
            efree(stride_array)
            efree(count_array)
            efree(block_array)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1672911803.0
h5py-3.13.0/h5py/h5t.pxd0000644000175000017500000000266414355515673015511 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *

from ._objects cimport ObjectID

cdef class TypeID(ObjectID):

    cdef object py_dtype(self)

# --- Top-level classes ---

cdef class TypeArrayID(TypeID):
    pass

cdef class TypeOpaqueID(TypeID):
    pass

cdef class TypeStringID(TypeID):
    # Both vlen and fixed-len strings
    pass

cdef class TypeVlenID(TypeID):
    # Non-string vlens
    pass

cdef class TypeTimeID(TypeID):
    pass

cdef class TypeBitfieldID(TypeID):
    pass

cdef class TypeReferenceID(TypeID):
    pass

# --- Numeric atomic types ---

cdef class TypeAtomicID(TypeID):
    pass

cdef class TypeIntegerID(TypeAtomicID):
    pass

cdef class TypeFloatID(TypeAtomicID):
    pass

# --- Enums & compound types ---

cdef class TypeCompositeID(TypeID):
    pass

cdef class TypeEnumID(TypeCompositeID):

    cdef int enum_convert(self, long long *buf, int reverse) except -1

cdef class TypeCompoundID(TypeCompositeID):
    pass

# === C API for other modules =================================================

cpdef TypeID typewrap(hid_t id_)
cdef hid_t H5PY_OBJ
cdef char* H5PY_PYTHON_OPAQUE_TAG
cpdef TypeID py_create(object dtype, bint logical=*, bint aligned=*)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/h5t.pyx0000644000175000017500000015770314675110407015532 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    HDF5 "H5T" data-type API

    This module contains the datatype identifier class TypeID, and its
    subclasses which represent things like integer/float/compound identifiers.
    The majority of the H5T API is presented as methods on these identifiers.
"""
# C-level imports
include "config.pxi"
from ._objects cimport pdefault
cimport numpy as cnp
from .h5r cimport Reference, RegionReference
from .h5p cimport PropID, propwrap


from .utils cimport  emalloc, efree, require_tuple, convert_dims,\
                     convert_tuple

# Python imports
import codecs
import platform
import sys
from collections import namedtuple
import numpy as np
from .h5 import get_config

from ._objects import phil, with_phil

cfg = get_config()

_UNAME_MACHINE = platform.uname()[4]
_IS_PPC64 = _UNAME_MACHINE == "ppc64"
_IS_PPC64LE = _UNAME_MACHINE == "ppc64le"

cdef char* H5PY_PYTHON_OPAQUE_TAG = "PYTHON:OBJECT"

# === Custom C API ============================================================

cpdef TypeID typewrap(hid_t id_):

    cdef H5T_class_t cls
    cls = H5Tget_class(id_)

    if cls == H5T_INTEGER:
        pcls = TypeIntegerID
    elif cls == H5T_FLOAT:
        pcls = TypeFloatID
    elif cls == H5T_TIME:
        pcls = TypeTimeID
    elif cls == H5T_STRING:
        pcls = TypeStringID
    elif cls == H5T_BITFIELD:
        pcls = TypeBitfieldID
    elif cls == H5T_OPAQUE:
        pcls = TypeOpaqueID
    elif cls == H5T_COMPOUND:
        pcls = TypeCompoundID
    elif cls == H5T_REFERENCE:
        pcls = TypeReferenceID
    elif cls == H5T_ENUM:
        pcls = TypeEnumID
    elif cls == H5T_VLEN:
        pcls = TypeVlenID
    elif cls == H5T_ARRAY:
        pcls = TypeArrayID
    else:
        pcls = TypeID

    return pcls(id_)

cdef object lockid(hid_t id_in):
    cdef TypeID tid
    tid = typewrap(id_in)
    tid.locked = 1
    return tid

# === Public constants and data structures ====================================


# Enumeration H5T_class_t
NO_CLASS  = H5T_NO_CLASS
INTEGER   = H5T_INTEGER
FLOAT     = H5T_FLOAT
TIME      = H5T_TIME
STRING    = H5T_STRING
BITFIELD  = H5T_BITFIELD
OPAQUE    = H5T_OPAQUE
COMPOUND  = H5T_COMPOUND
REFERENCE = H5T_REFERENCE
ENUM      = H5T_ENUM
VLEN      = H5T_VLEN
ARRAY     = H5T_ARRAY

# Enumeration H5T_sign_t
SGN_NONE   = H5T_SGN_NONE
SGN_2      = H5T_SGN_2

# Enumeration H5T_order_t
ORDER_LE    = H5T_ORDER_LE
ORDER_BE    = H5T_ORDER_BE
ORDER_VAX   = H5T_ORDER_VAX
ORDER_NONE  = H5T_ORDER_NONE

DIR_DEFAULT = H5T_DIR_DEFAULT
DIR_ASCEND  = H5T_DIR_ASCEND
DIR_DESCEND = H5T_DIR_DESCEND

# Enumeration H5T_str_t
STR_NULLTERM = H5T_STR_NULLTERM
STR_NULLPAD  = H5T_STR_NULLPAD
STR_SPACEPAD = H5T_STR_SPACEPAD

# Enumeration H5T_norm_t
NORM_IMPLIED = H5T_NORM_IMPLIED
NORM_MSBSET = H5T_NORM_MSBSET
NORM_NONE = H5T_NORM_NONE

# Enumeration H5T_cset_t:
CSET_ASCII = H5T_CSET_ASCII

# Enumeration H5T_pad_t:
PAD_ZERO = H5T_PAD_ZERO
PAD_ONE = H5T_PAD_ONE
PAD_BACKGROUND = H5T_PAD_BACKGROUND

if sys.byteorder == "little":    # Custom python addition
    ORDER_NATIVE = H5T_ORDER_LE
else:
    ORDER_NATIVE = H5T_ORDER_BE

# For conversion
BKG_NO = H5T_BKG_NO
BKG_TEMP = H5T_BKG_TEMP
BKG_YES = H5T_BKG_YES

# --- Built-in HDF5 datatypes -------------------------------------------------

# IEEE floating-point
IEEE_F32LE = lockid(H5T_IEEE_F32LE)
IEEE_F32BE = lockid(H5T_IEEE_F32BE)
IEEE_F64LE = lockid(H5T_IEEE_F64LE)
IEEE_F64BE = lockid(H5T_IEEE_F64BE)
IF HDF5_VERSION < (1, 14, 4):
    IEEE_F16BE = IEEE_F32BE.copy()
    IEEE_F16BE.set_fields(15, 10, 5, 0, 10)
    IEEE_F16BE.set_size(2)
    IEEE_F16BE.set_ebias(15)
    IEEE_F16BE.lock()

    IEEE_F16LE = IEEE_F16BE.copy()
    IEEE_F16LE.set_order(H5T_ORDER_LE)
    IEEE_F16LE.lock()
ELSE:
    IEEE_F16BE = lockid(H5T_IEEE_F16BE)
    IEEE_F16LE = lockid(H5T_IEEE_F16LE)

# Quad floats
IEEE_F128BE = IEEE_F64BE.copy()
IEEE_F128BE.set_size(16)
IEEE_F128BE.set_precision(128)
IEEE_F128BE.set_fields(127, 112, 15, 0, 112)
IEEE_F128BE.set_ebias(16383)
IEEE_F128BE.lock()

IEEE_F128LE = IEEE_F128BE.copy()
IEEE_F128LE.set_order(H5T_ORDER_LE)
IEEE_F128LE.lock()

# Signed 2's complement integer types
STD_I8LE  = lockid(H5T_STD_I8LE)
STD_I16LE = lockid(H5T_STD_I16LE)
STD_I32LE = lockid(H5T_STD_I32LE)
STD_I64LE = lockid(H5T_STD_I64LE)

STD_I8BE  = lockid(H5T_STD_I8BE)
STD_I16BE = lockid(H5T_STD_I16BE)
STD_I32BE = lockid(H5T_STD_I32BE)
STD_I64BE = lockid(H5T_STD_I64BE)

# Bitfields
STD_B8LE = lockid(H5T_STD_B8LE)
STD_B16LE = lockid(H5T_STD_B16LE)
STD_B32LE = lockid(H5T_STD_B32LE)
STD_B64LE = lockid(H5T_STD_B64LE)

STD_B8BE = lockid(H5T_STD_B8BE)
STD_B16BE = lockid(H5T_STD_B16BE)
STD_B32BE = lockid(H5T_STD_B32BE)
STD_B64BE = lockid(H5T_STD_B64BE)

# Unsigned integers
STD_U8LE  = lockid(H5T_STD_U8LE)
STD_U16LE = lockid(H5T_STD_U16LE)
STD_U32LE = lockid(H5T_STD_U32LE)
STD_U64LE = lockid(H5T_STD_U64LE)

STD_U8BE  = lockid(H5T_STD_U8BE)
STD_U16BE = lockid(H5T_STD_U16BE)
STD_U32BE = lockid(H5T_STD_U32BE)
STD_U64BE = lockid(H5T_STD_U64BE)

# Native types by bytesize
NATIVE_B8 = lockid(H5T_NATIVE_B8)
NATIVE_INT8 = lockid(H5T_NATIVE_INT8)
NATIVE_UINT8 = lockid(H5T_NATIVE_UINT8)
NATIVE_B16 = lockid(H5T_NATIVE_B16)
NATIVE_INT16 = lockid(H5T_NATIVE_INT16)
NATIVE_UINT16 = lockid(H5T_NATIVE_UINT16)
NATIVE_B32 = lockid(H5T_NATIVE_B32)
NATIVE_INT32 = lockid(H5T_NATIVE_INT32)
NATIVE_UINT32 = lockid(H5T_NATIVE_UINT32)
NATIVE_B64 = lockid(H5T_NATIVE_B64)
NATIVE_INT64 = lockid(H5T_NATIVE_INT64)
NATIVE_UINT64 = lockid(H5T_NATIVE_UINT64)
NATIVE_FLOAT = lockid(H5T_NATIVE_FLOAT)
NATIVE_DOUBLE = lockid(H5T_NATIVE_DOUBLE)
NATIVE_LDOUBLE = lockid(H5T_NATIVE_LDOUBLE)

LDOUBLE_LE = NATIVE_LDOUBLE.copy()
LDOUBLE_LE.set_order(H5T_ORDER_LE)
LDOUBLE_LE.lock()

LDOUBLE_BE = NATIVE_LDOUBLE.copy()
LDOUBLE_BE.set_order(H5T_ORDER_BE)
LDOUBLE_BE.lock()

IF HDF5_VERSION > (1, 14, 3):
    if H5T_NATIVE_FLOAT16 != H5I_INVALID_HID:
        NATIVE_FLOAT16 = lockid(H5T_NATIVE_FLOAT16)
    else:
        NATIVE_FLOAT16 = H5I_INVALID_HID

# Unix time types
UNIX_D32LE = lockid(H5T_UNIX_D32LE)
UNIX_D64LE = lockid(H5T_UNIX_D64LE)
UNIX_D32BE = lockid(H5T_UNIX_D32BE)
UNIX_D64BE = lockid(H5T_UNIX_D64BE)

# Reference types
STD_REF_OBJ = lockid(H5T_STD_REF_OBJ)
STD_REF_DSETREG = lockid(H5T_STD_REF_DSETREG)

# Null terminated (C) and Fortran string types
C_S1 = lockid(H5T_C_S1)
FORTRAN_S1 = lockid(H5T_FORTRAN_S1)
VARIABLE = H5T_VARIABLE

# Character sets
CSET_ASCII = H5T_CSET_ASCII
CSET_UTF8 = H5T_CSET_UTF8

# Custom Python object pointer type
cdef hid_t H5PY_OBJ = H5Tcreate(H5T_OPAQUE, sizeof(PyObject*))
H5Tset_tag(H5PY_OBJ, H5PY_PYTHON_OPAQUE_TAG)
H5Tlock(H5PY_OBJ)

PYTHON_OBJECT = lockid(H5PY_OBJ)

# Translation tables for HDF5 -> NumPy dtype conversion
cdef dict _order_map = { H5T_ORDER_NONE: '|', H5T_ORDER_LE: '<', H5T_ORDER_BE: '>'}
cdef dict _sign_map  = { H5T_SGN_NONE: 'u', H5T_SGN_2: 'i' }

# Available floating point types
cdef tuple _get_available_ftypes():
    cdef:
        str floating_typecodes = np.typecodes["Float"]
        str ftc
        cnp.dtype fdtype
        list available_ftypes = []

    for ftc in floating_typecodes:
        fdtype = np.dtype(ftc)
        available_ftypes.append(
            (<object>(fdtype.typeobj), np.finfo(fdtype), fdtype.itemsize)
            )

    return tuple(available_ftypes)

cdef tuple _available_ftypes = _get_available_ftypes()


cdef (int, int, int) _correct_float_info(ftype_, finfo):
    nmant = finfo.nmant
    maxexp = finfo.maxexp
    minexp = finfo.minexp
    # workaround for numpy's buggy finfo on float128 on ppc64 archs
    if ftype_ == np.longdouble and _IS_PPC64:
        # values reported by hdf5
        nmant = 116
        maxexp = 1024
        minexp = -1022
    elif ftype_ == np.longdouble and _IS_PPC64LE:
        # values reported by hdf5
        nmant = 52
        maxexp = 1024
        minexp = -1022
    elif nmant == 63 and finfo.nexp == 15:
        # This is an 80-bit float, correct mantissa size
        nmant += 1

    return nmant, maxexp, minexp


# === General datatype operations =============================================

@with_phil
def create(int classtype, size_t size):
    """(INT classtype, UINT size) => TypeID

    Create a new HDF5 type object.  Legal class values are
    COMPOUND and OPAQUE.  Use enum_create for enums.
    """

    # HDF5 versions 1.6.X segfault with anything else
    if classtype != H5T_COMPOUND and classtype != H5T_OPAQUE:
        raise ValueError("Class must be COMPOUND or OPAQUE.")

    return typewrap(H5Tcreate(classtype, size))
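
# Illustrative sketch: building a 16-byte compound type with two float64
# members; the member names and layout are arbitrary, and the dtype repr
# shown is indicative.
#
#     >>> from h5py import h5t
#     >>> tid = h5t.create(h5t.COMPOUND, 16)
#     >>> tid.insert(b"x", 0, h5t.IEEE_F64LE)
#     >>> tid.insert(b"y", 8, h5t.IEEE_F64LE)
#     >>> tid.get_nmembers()
#     2
#     >>> tid.dtype
#     dtype([('x', '<f8'), ('y', '<f8')])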


@with_phil
def open(ObjectID group not None, char* name, ObjectID tapl=None):
    """(ObjectID group, STRING name) => TypeID

    Open a named datatype from a file.
    If present, tapl must be a datatype access property list.
    """
    return typewrap(H5Topen(group.id, name, pdefault(tapl)))


@with_phil
def array_create(TypeID base not None, object dims_tpl):
    """(TypeID base, TUPLE dimensions) => TypeArrayID

    Create a new array datatype, using an HDF5 parent type and
    dimensions given via a tuple of positive integers.  "Unlimited"
    dimensions are not allowed.
    """
    cdef hsize_t rank
    cdef hsize_t *dims = NULL

    require_tuple(dims_tpl, 0, -1, b"dims_tpl")
    rank = len(dims_tpl)
    dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)

    try:
        convert_tuple(dims_tpl, dims, rank)
        return TypeArrayID(H5Tarray_create(base.id, rank, dims))
    finally:
        efree(dims)
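
# Illustrative sketch: a 3x2 array datatype on a 32-bit little-endian base;
# the dimensions are arbitrary and the dtype repr shown is indicative.
#
#     >>> from h5py import h5t
#     >>> atid = h5t.array_create(h5t.STD_I32LE, (3, 2))
#     >>> atid.get_array_dims()
#     (3, 2)
#     >>> atid.dtype
#     dtype(('<i4', (3, 2)))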


@with_phil
def enum_create(TypeID base not None):
    """(TypeID base) => TypeID

    Create a new enumerated type based on an (integer) parent type.
    """
    return typewrap(H5Tenum_create(base.id))


@with_phil
def vlen_create(TypeID base not None):
    """(TypeID base) => TypeID

    Create a new variable-length datatype, using any HDF5 type as a base.

    Although the Python interface can manipulate these types, there is no
    provision for reading/writing vlen data.
    """
    return typewrap(H5Tvlen_create(base.id))


@with_phil
def decode(char* buf):
    """(STRING buf) => TypeID

    Deserialize an HDF5 type.  You can also do this with the native
    Python pickling machinery.
    """
    return typewrap(H5Tdecode(buf))


# === Base type class =========================================================

cdef class TypeID(ObjectID):

    """
        Base class for type identifiers (implements common operations)

        * Hashable: If committed or locked
        * Equality: Logical H5T comparison
    """

    def __hash__(self):
        with phil:
            if self._hash is None:
                try:
                    # Try to use object header first
                    return ObjectID.__hash__(self)
                except TypeError:
                    # It's a transient type object
                    if self.locked:
                        self._hash = hash(self.encode())
                    else:
                        raise TypeError("Only locked or committed types can be hashed")

            return self._hash


    def __richcmp__(self, object other, int how):
        cdef bint truthval = 0
        with phil:
            if how != 2 and how != 3:
                return NotImplemented
            if isinstance(other, TypeID):
                truthval = self.equal(other)

            if how == 2:
                return truthval
            return not truthval


    def __copy__(self):
        cdef TypeID cpy
        with phil:
            cpy = ObjectID.__copy__(self)
            return cpy


    @property
    def dtype(self):
        """ A Numpy-style dtype object representing this object.
        """
        with phil:
            return self.py_dtype()
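
    # Illustrative sketch: the locked built-in types translate directly to
    # NumPy dtypes through this property.  Big-endian types are shown so the
    # byte order is visible in the repr on a little-endian host.
    #
    #     >>> from h5py import h5t
    #     >>> h5t.STD_I32BE.dtype
    #     dtype('>i4')
    #     >>> h5t.STD_U16BE.dtype
    #     dtype('>u2')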


    cdef object py_dtype(self):
        raise TypeError("No NumPy equivalent for %s exists" % self.__class__.__name__)


    @with_phil
    def commit(self, ObjectID group not None, char* name, ObjectID lcpl=None):
        """(ObjectID group, STRING name, PropID lcpl=None)

        Commit this (transient) datatype to a named datatype in a file.
        If present, lcpl may be a link creation property list.
        """
        H5Tcommit(group.id, name, self.id, pdefault(lcpl),
            H5P_DEFAULT, H5P_DEFAULT)


    @with_phil
    def committed(self):
        """() => BOOL is_comitted

        Determine if a given type object is named (T) or transient (F).
        """
        return <bint>(H5Tcommitted(self.id))


    @with_phil
    def copy(self):
        """() => TypeID

        Create a copy of this type object.
        """
        return typewrap(H5Tcopy(self.id))


    @with_phil
    def equal(self, TypeID typeid):
        """(TypeID typeid) => BOOL

        Logical comparison between datatypes.  Also called by
        Python's "==" operator.
        """
        return <bint>(H5Tequal(self.id, typeid.id))


    @with_phil
    def lock(self):
        """()

        Lock this datatype, which makes it immutable and indestructible.
        Once locked, it can't be unlocked.
        """
        H5Tlock(self.id)
        self.locked = 1


    @with_phil
    def get_class(self):
        """() => INT classcode

        Determine the datatype's class code.
        """
        return H5Tget_class(self.id)


    @with_phil
    def set_size(self, size_t size):
        """(UINT size)

        Set the total size of the datatype, in bytes.
        """
        H5Tset_size(self.id, size)


    @with_phil
    def get_size(self):
        """ () => INT size

            Determine the total size of a datatype, in bytes.
        """
        return H5Tget_size(self.id)


    @with_phil
    def get_super(self):
        """() => TypeID

        Determine the parent type of an array, enumeration or vlen datatype.
        """
        return typewrap(H5Tget_super(self.id))


    @with_phil
    def detect_class(self, int classtype):
        """(INT classtype) => BOOL class_is_present

        Determine if a member of the given class exists in a compound
        datatype.  The search is recursive.
        """
        return <bint>(H5Tdetect_class(self.id, classtype))


    @with_phil
    def encode(self):
        """() => STRING

        Serialize an HDF5 type.  Bear in mind you can also use the
        native Python pickle/unpickle machinery to do this.  The
        returned string may contain binary values, including NULLs.
        """
        cdef size_t nalloc = 0
        cdef char* buf = NULL

        H5Tencode(self.id, NULL, &nalloc)
        buf = <char*>emalloc(sizeof(char)*nalloc)
        try:
            H5Tencode(self.id, buf, &nalloc)
            pystr = PyBytes_FromStringAndSize(buf, nalloc)
        finally:
            efree(buf)

        return pystr

    @with_phil
    def get_create_plist(self):
        """ () => PropTCID

            Create and return a new copy of the datatype creation property list
            used when this datatype was created.
        """
        return propwrap(H5Tget_create_plist(self.id))


    def __reduce__(self):
        with phil:
            return (type(self), (-1,), self.encode())


    def __setstate__(self, char* state):
        with phil:
            self.id = H5Tdecode(state)


# === Top-level classes (inherit directly from TypeID) ========================

cdef class TypeArrayID(TypeID):

    """
        Represents an array datatype
    """


    @with_phil
    def get_array_ndims(self):
        """() => INT rank

        Get the rank of the given array datatype.
        """
        return H5Tget_array_ndims(self.id)


    @with_phil
    def get_array_dims(self):
        """() => TUPLE dimensions

        Get the dimensions of the given array datatype as
        a tuple of integers.
        """
        cdef hsize_t rank
        cdef hsize_t* dims = NULL

        rank = H5Tget_array_dims(self.id, NULL)
        dims = <hsize_t*>emalloc(sizeof(hsize_t)*rank)
        try:
            H5Tget_array_dims(self.id, dims)
            return convert_dims(dims, rank)
        finally:
            efree(dims)

    cdef object py_dtype(self):
        # Numpy translation function for array types
        cdef TypeID tmp_type
        tmp_type = self.get_super()

        base_dtype = tmp_type.py_dtype()

        shape = self.get_array_dims()
        return np.dtype( (base_dtype, shape) )


cdef class TypeOpaqueID(TypeID):

    """
        Represents an opaque type
    """


    @with_phil
    def set_tag(self, char* tag):
        """(STRING tag)

        Set a string describing the contents of an opaque datatype.
        Limited to 256 characters.
        """
        H5Tset_tag(self.id, tag)


    @with_phil
    def get_tag(self):
        """() => STRING tag

        Get the tag associated with an opaque datatype.
        """
        cdef char* buf = NULL

        try:
            buf = H5Tget_tag(self.id)
            assert buf != NULL
            tag = buf
            return tag
        finally:
            H5free_memory(buf)

    cdef object py_dtype(self):
        cdef bytes tag = self.get_tag()
        if tag.startswith(b"NUMPY:"):
            # 6 = len("NUMPY:")
            return np.dtype(tag[6:], metadata={'h5py_opaque': True})

        # Numpy translation function for opaque types
        return np.dtype("|V" + str(self.get_size()))


cdef class TypeStringID(TypeID):

    """
        String datatypes, both fixed and vlen.
    """


    @with_phil
    def is_variable_str(self):
        """() => BOOL is_variable

        Determine if the given string datatype is a variable-length string.
        """
        return <bint>(H5Tis_variable_str(self.id))


    @with_phil
    def get_cset(self):
        """() => INT character_set

        Retrieve the character set used for a string.
        """
        return H5Tget_cset(self.id)


    @with_phil
    def set_cset(self, int cset):
        """(INT character_set)

        Set the character set used for a string.
        """
        H5Tset_cset(self.id, cset)


    @with_phil
    def get_strpad(self):
        """() => INT padding_type

        Get the padding type.  Legal values are:

        STR_NULLTERM
            NULL termination only (C style)

        STR_NULLPAD
            Pad buffer with NULLs

        STR_SPACEPAD
            Pad buffer with spaces (FORTRAN style)
        """
        return H5Tget_strpad(self.id)


    @with_phil
    def set_strpad(self, int pad):
        """(INT pad)

        Set the padding type.  Legal values are:

        STR_NULLTERM
            NULL termination only (C style)

        STR_NULLPAD
            Pad buffer with NULLs

        STR_SPACEPAD
            Pad buffer with spaces (FORTRAN style)
        """
        H5Tset_strpad(self.id, pad)


    cdef object py_dtype(self):
        # Numpy translation function for string types
        if self.get_cset() == H5T_CSET_ASCII:
            encoding = 'ascii'
        elif self.get_cset() == H5T_CSET_UTF8:
            encoding = 'utf-8'
        else:
            raise TypeError("Unknown string encoding (value %d)" % self.get_cset())

        if self.is_variable_str():
            length = None
        else:
            length = self.get_size()

        return string_dtype(encoding=encoding, length=length)

cdef class TypeVlenID(TypeID):

    """
        Non-string vlen datatypes.
    """

    cdef object py_dtype(self):

        # get base type id
        cdef TypeID base_type
        base_type = self.get_super()

        return vlen_dtype(base_type.dtype)

cdef class TypeTimeID(TypeID):

    """
        Unix-style time_t (deprecated)
    """
    pass

cdef class TypeBitfieldID(TypeID):

    """
        HDF5 bitfield type
    """

    @with_phil
    def get_order(self):
        """() => INT order

        Obtain the byte order of the datatype; one of:

        - ORDER_LE
        - ORDER_BE
        """
        return H5Tget_order(self.id)

    cdef object py_dtype(self):

        # Translation function for bitfield types
        return np.dtype( _order_map[self.get_order()] +
                         'u' + str(self.get_size()) )


cdef class TypeReferenceID(TypeID):

    """
        HDF5 object or region reference
    """

    cdef object py_dtype(self):
        if H5Tequal(self.id, H5T_STD_REF_OBJ):
            return ref_dtype
        elif H5Tequal(self.id, H5T_STD_REF_DSETREG):
            return regionref_dtype
        else:
            raise TypeError("Unknown reference type")


# === Numeric classes (integers and floats) ===================================

cdef class TypeAtomicID(TypeID):

    """
        Base class for atomic datatypes (float or integer)
    """


    @with_phil
    def get_order(self):
        """() => INT order

        Obtain the byte order of the datatype; one of:

        - ORDER_LE
        - ORDER_BE
        """
        return H5Tget_order(self.id)


    @with_phil
    def set_order(self, int order):
        """(INT order)

        Set the byte order of the datatype; one of:

        - ORDER_LE
        - ORDER_BE
        """
        H5Tset_order(self.id, order)


    @with_phil
    def get_precision(self):
        """() => UINT precision

        Get the number of significant bits (excludes padding).
        """
        return H5Tget_precision(self.id)


    @with_phil
    def set_precision(self, size_t precision):
        """(UINT precision)

        Set the number of significant bits (excludes padding).
        """
        H5Tset_precision(self.id, precision)


    @with_phil
    def get_offset(self):
        """() => INT offset

        Get the offset of the first significant bit.
        """
        return H5Tget_offset(self.id)


    @with_phil
    def set_offset(self, size_t offset):
        """(UINT offset)

        Set the offset of the first significant bit.
        """
        H5Tset_offset(self.id, offset)


    @with_phil
    def get_pad(self):
        """() => (INT lsb_pad_code, INT msb_pad_code)

        Determine the padding type.  Possible values are:

        - PAD_ZERO
        - PAD_ONE
        - PAD_BACKGROUND
        """
        cdef H5T_pad_t lsb
        cdef H5T_pad_t msb
        H5Tget_pad(self.id, &lsb, &msb)
        return (lsb, msb)


    @with_phil
    def set_pad(self, int lsb, int msb):
        """(INT lsb_pad_code, INT msb_pad_code)

        Set the padding type.  Possible values are:

        - PAD_ZERO
        - PAD_ONE
        - PAD_BACKGROUND
        """
        H5Tset_pad(self.id, lsb, msb)


cdef class TypeIntegerID(TypeAtomicID):

    """
        Integer atomic datatypes
    """


    @with_phil
    def get_sign(self):
        """() => INT sign

        Get the "signedness" of the datatype; one of:

        SGN_NONE
            Unsigned

        SGN_2
            Signed 2's complement
        """
        return H5Tget_sign(self.id)


    @with_phil
    def set_sign(self, int sign):
        """(INT sign)

        Set the "signedness" of the datatype; one of:

        SGN_NONE
            Unsigned

        SGN_2
            Signed 2's complement
        """
        H5Tset_sign(self.id, sign)

    cdef object py_dtype(self):
        # Translation function for integer types
        return np.dtype( _order_map[self.get_order()] +
                         _sign_map[self.get_sign()] + str(self.get_size()) )


cdef class TypeFloatID(TypeAtomicID):

    """
        Floating-point atomic datatypes
    """


    @with_phil
    def get_fields(self):
        """() => TUPLE field_info

        Get information about floating-point bit fields.  See the HDF5
        docs for a full description.  Tuple has the following members:

        0. UINT spos
        1. UINT epos
        2. UINT esize
        3. UINT mpos
        4. UINT msize
        """
        cdef size_t spos, epos, esize, mpos, msize
        H5Tget_fields(self.id, &spos, &epos, &esize, &mpos, &msize)
        return (spos, epos, esize, mpos, msize)


    @with_phil
    def set_fields(self, size_t spos, size_t epos, size_t esize,
                          size_t mpos, size_t msize):
        """(UINT spos, UINT epos, UINT esize, UINT mpos, UINT msize)

        Set floating-point bit fields.  Refer to the HDF5 docs for
        argument definitions.
        """
        H5Tset_fields(self.id, spos, epos, esize, mpos, msize)


    @with_phil
    def get_ebias(self):
        """() => UINT ebias

        Get the exponent bias.
        """
        return H5Tget_ebias(self.id)


    @with_phil
    def set_ebias(self, size_t ebias):
        """(UINT ebias)

        Set the exponent bias.
        """
        H5Tset_ebias(self.id, ebias)


    @with_phil
    def get_norm(self):
        """() => INT normalization_code

        Get the normalization strategy.  Legal values are:

        - NORM_IMPLIED
        - NORM_MSBSET
        - NORM_NONE
        """
        return H5Tget_norm(self.id)


    @with_phil
    def set_norm(self, int norm):
        """(INT normalization_code)

        Set the normalization strategy.  Legal values are:

        - NORM_IMPLIED
        - NORM_MSBSET
        - NORM_NONE
        """
        H5Tset_norm(self.id, norm)


    @with_phil
    def get_inpad(self):
        """() => INT pad_code

        Determine the internal padding strategy.  Legal values are:

        - PAD_ZERO
        - PAD_ONE
        - PAD_BACKGROUND
        """
        return H5Tget_inpad(self.id)


    @with_phil
    def set_inpad(self, int pad_code):
        """(INT pad_code)

        Set the internal padding strategy.  Legal values are:

        - PAD_ZERO
        - PAD_ONE
        - PAD_BACKGROUND
        """
        H5Tset_inpad(self.id, pad_code)

    cdef object py_dtype(self):
        # Translation function for floating-point types

        order = _order_map[self.get_order()]    # string with '<' or '>'

        s_offset, e_offset, e_size, m_offset, m_size = self.get_fields()
        e_bias = self.get_ebias()

        # Handle non-standard exponent and mantissa sizes.
        for ftype_, finfo, size in _available_ftypes:
            nmant, maxexp, minexp = _correct_float_info(ftype_, finfo)
            if (size >= self.get_size() and m_size <= nmant and
                (2**e_size - e_bias - 1) <= maxexp and (1 - e_bias) >= minexp):
                new_dtype = np.dtype(ftype_).newbyteorder(order)
                break
        else:
            raise ValueError('Insufficient precision in available types to ' +
                             'represent ' + str(self.get_fields()))

        return new_dtype


# === Composite types (enums and compound) ====================================

cdef class TypeCompositeID(TypeID):

    """
        Base class for enumerated and compound types.
    """


    @with_phil
    def get_nmembers(self):
        """() => INT number_of_members

        Determine the number of members in a compound or enumerated type.
        """
        return H5Tget_nmembers(self.id)


    @with_phil
    def get_member_name(self, int member):
        """(INT member) => STRING name

        Determine the name of a member of a compound or enumerated type,
        identified by its index (0 <= member < nmembers).
        """
        cdef char* name
        name = NULL

        if member < 0:
            raise ValueError("Member index must be non-negative.")

        try:
            name = H5Tget_member_name(self.id, member)
            assert name != NULL
            pyname = name
        finally:
            H5free_memory(name)

        return pyname


    @with_phil
    def get_member_index(self, char* name):
        """(STRING name) => INT index

        Determine the index of a member of a compound or enumerated datatype
        identified by a string name.
        """
        return H5Tget_member_index(self.id, name)

cdef class TypeCompoundID(TypeCompositeID):

    """
        Represents a compound datatype
    """


    @with_phil
    def get_member_class(self, int member):
        """(INT member) => INT class

        Determine the datatype class of the member of a compound type,
        identified by its index (0 <= member < nmembers).
        """
        if member < 0:
            raise ValueError("Member index must be non-negative.")
        return H5Tget_member_class(self.id, member)


    @with_phil
    def get_member_offset(self, int member):
        """(INT member) => INT offset

        Determine the offset, in bytes, of the beginning of the specified
        member of a compound datatype.
        """
        if member < 0:
            raise ValueError("Member index must be non-negative.")
        return H5Tget_member_offset(self.id, member)


    @with_phil
    def get_member_type(self, int member):
        """(INT member) => TypeID

        Create a copy of a member of a compound datatype, identified by its
        index.
        """
        if member < 0:
            raise ValueError("Member index must be non-negative.")
        return typewrap(H5Tget_member_type(self.id, member))


    @with_phil
    def insert(self, char* name, size_t offset, TypeID field not None):
        """(STRING name, UINT offset, TypeID field)

        Add a named member datatype to a compound datatype.  The parameter
        offset indicates the offset from the start of the compound datatype,
        in bytes.
        """
        H5Tinsert(self.id, name, offset, field.id)


    @with_phil
    def pack(self):
        """()

        Recursively removes padding (introduced on account of e.g. compiler
        alignment rules) from a compound datatype.
        """
        H5Tpack(self.id)

    cdef object py_dtype(self):
        cdef:
            TypeID tmp_type
            list field_names
            list field_types
            int i, nfields
        field_names = []
        field_types = []
        field_offsets = []
        nfields = self.get_nmembers()

        # First step: read field names and their Numpy dtypes into
        # two separate arrays.
        for i in range(nfields):
            tmp_type = self.get_member_type(i)
            name = self.get_member_name(i)
            field_names.append(name)
            field_types.append(tmp_type.py_dtype())
            field_offsets.append(self.get_member_offset(i))


        # 1. Check if it should be converted to a complex number
        if len(field_names) == 2                                and \
            tuple(field_names) == (cfg._r_name, cfg._i_name)    and \
            field_types[0] == field_types[1]                    and \
            field_types[0].kind == 'f'                          and \
            field_types[0].itemsize in (4, 8, 16):

            bstring = field_types[0].str
            blen = int(bstring[2:])
            nstring = bstring[0] + "c" + str(2*blen)
            typeobj = np.dtype(nstring)

        # 2. Otherwise, read all fields of the compound type, in HDF5 order.
        else:
            field_names = [x.decode('utf8') for x in field_names]
            typeobj = np.dtype({'names': field_names,
                                'formats': field_types,
                                'offsets': field_offsets,
                                'itemsize': self.get_size()})

        return typeobj


cdef class TypeEnumID(TypeCompositeID):

    """
        Represents an enumerated type
    """

    cdef int enum_convert(self, long long *buf, int reverse) except -1:
        # Convert the long long value in "buf" to the native representation
        # of this (enumerated) type.  Conversion performed in-place.
        # Reverse: false => llong->type; true => type->llong

        cdef hid_t basetype
        cdef H5T_class_t class_code

        class_code = H5Tget_class(self.id)
        if class_code != H5T_ENUM:
            raise ValueError("This type (class %d) is not of class ENUM" % class_code)

        basetype = H5Tget_super(self.id)
        assert basetype > 0

        try:
            if not reverse:
                H5Tconvert(H5T_NATIVE_LLONG, basetype, 1, buf, NULL, H5P_DEFAULT)
            else:
                H5Tconvert(basetype, H5T_NATIVE_LLONG, 1, buf, NULL, H5P_DEFAULT)
        finally:
            H5Tclose(basetype)


    @with_phil
    def enum_insert(self, char* name, long long value):
        """(STRING name, INT/LONG value)

        Define a new member of an enumerated type.  The value will be
        automatically converted to the base type defined for this enum.  If
        the conversion results in overflow, the value will be silently
        clipped.
        """
        cdef long long buf

        buf = value
        self.enum_convert(&buf, 0)
        H5Tenum_insert(self.id, name, &buf)


    @with_phil
    def enum_nameof(self, long long value):
        """(LONG value) => STRING name

        Determine the name associated with the given value.  Due to a
        limitation of the HDF5 library, this can only retrieve names up to
        1023 characters in length.
        """
        cdef herr_t retval
        cdef char name[1024]
        cdef long long buf

        buf = value
        self.enum_convert(&buf, 0)
        retval = H5Tenum_nameof(self.id, &buf, name, 1024)
        assert retval >= 0
        retstring = name
        return retstring


    @with_phil
    def enum_valueof(self, char* name):
        """(STRING name) => LONG value

        Get the value associated with an enum name.
        """
        cdef long long buf

        H5Tenum_valueof(self.id, name, &buf)
        self.enum_convert(&buf, 1)
        return buf


    @with_phil
    def get_member_value(self, int idx):
        """(UINT index) => LONG value

        Determine the value for the member at the given zero-based index.
        """
        cdef herr_t retval
        cdef hid_t ptype
        cdef long long val
        ptype = 0

        if idx < 0:
            raise ValueError("Index must be non-negative.")

        H5Tget_member_value(self.id, idx, &val)
        self.enum_convert(&val, 1)
        return val

    cdef object py_dtype(self):
        # Translation function for enum types

        cdef TypeID basetype = self.get_super()

        nmembers = self.get_nmembers()
        members = {}

        for idx in range(nmembers):
            name = self.get_member_name(idx)
            val = self.get_member_value(idx)
            members[name] = val

        ref = {cfg._f_name: 0, cfg._t_name: 1}

        # Boolean types have priority over standard enums
        if members == ref:
            return np.dtype('bool')

        # Convert strings to appropriate representation
        members_conv = {}
        for name, val in members.items():
            try:    # ASCII
                name = name.decode('ascii')
            except UnicodeDecodeError:
                try:    # Non-ascii; all platforms try unicode
                    name = name.decode('utf8')
                except UnicodeDecodeError:
                    pass    # Last resort: return byte string
            members_conv[name] = val
        return enum_dtype(members_conv, basetype=basetype.py_dtype())


# === Translation from NumPy dtypes to HDF5 type objects ======================

# The following series of native-C functions each translate a specific class
# of NumPy dtype into an HDF5 type object.  The result is guaranteed to be
# transient and unlocked.

def _get_float_dtype_to_hdf5():
    float_le = {}
    float_be = {}
    h5_be_list = [IEEE_F16BE, IEEE_F32BE, IEEE_F64BE, IEEE_F128BE, LDOUBLE_BE]
    h5_le_list = [IEEE_F16LE, IEEE_F32LE, IEEE_F64LE, IEEE_F128LE, LDOUBLE_LE]

    for ftype_, finfo, size in _available_ftypes:
        nmant, maxexp, minexp = _correct_float_info(ftype_, finfo)
        for h5type in h5_be_list:
            spos, epos, esize, mpos, msize = h5type.get_fields()
            ebias = h5type.get_ebias()
            if (finfo.iexp == esize and nmant == msize and
                    (maxexp - 1) == ebias):
                float_be[ftype_] = h5type
                break # first found matches, related to #1244
        for h5type in h5_le_list:
            spos, epos, esize, mpos, msize = h5type.get_fields()
            ebias = h5type.get_ebias()
            if (finfo.iexp == esize and nmant == msize and
                    (maxexp - 1) == ebias):
                float_le[ftype_] = h5type
                break # first found matches, related to #1244
    if ORDER_NATIVE == H5T_ORDER_LE:
        float_nt = dict(float_le)
    else:
        float_nt = dict(float_be)
    return float_le, float_be, float_nt

cdef dict _float_le
cdef dict _float_be
cdef dict _float_nt
_float_le, _float_be, _float_nt = _get_float_dtype_to_hdf5()

cdef dict _int_le = {1: H5Tcopy(H5T_STD_I8LE), 2: H5Tcopy(H5T_STD_I16LE), 4: H5Tcopy(H5T_STD_I32LE), 8: H5Tcopy(H5T_STD_I64LE)}
cdef dict _int_be = {1: H5Tcopy(H5T_STD_I8BE), 2: H5Tcopy(H5T_STD_I16BE), 4: H5Tcopy(H5T_STD_I32BE), 8: H5Tcopy(H5T_STD_I64BE)}
cdef dict _int_nt = {1: H5Tcopy(H5T_NATIVE_INT8), 2: H5Tcopy(H5T_NATIVE_INT16), 4: H5Tcopy(H5T_NATIVE_INT32), 8: H5Tcopy(H5T_NATIVE_INT64)}

cdef dict _uint_le = {1: H5Tcopy(H5T_STD_U8LE), 2: H5Tcopy(H5T_STD_U16LE), 4: H5Tcopy(H5T_STD_U32LE), 8: H5Tcopy(H5T_STD_U64LE)}
cdef dict _uint_be = {1: H5Tcopy(H5T_STD_U8BE), 2: H5Tcopy(H5T_STD_U16BE), 4: H5Tcopy(H5T_STD_U32BE), 8: H5Tcopy(H5T_STD_U64BE)}
cdef dict _uint_nt = {1: H5Tcopy(H5T_NATIVE_UINT8), 2: H5Tcopy(H5T_NATIVE_UINT16), 4: H5Tcopy(H5T_NATIVE_UINT32), 8: H5Tcopy(H5T_NATIVE_UINT64)}

cdef TypeFloatID _c_float(cnp.dtype dt):
    # Floats (single and double)
    cdef TypeFloatID tid

    try:
        if dt.byteorder == c'<':
            tid = _float_le[np.dtype(dt).type]
        elif dt.byteorder == c'>':
            tid = _float_be[np.dtype(dt).type]
        else:
            tid = _float_nt[np.dtype(dt).type]
    except KeyError:
        raise TypeError("Unsupported float type (%s)" % dt)

    return tid.copy()

cdef TypeIntegerID _c_int(cnp.dtype dt):
    # Integers (ints and uints)
    cdef hid_t tid

    try:
        if dt.kind == c'i':
            if dt.byteorder == c'<':
                tid = _int_le[dt.itemsize]
            elif dt.byteorder == c'>':
                tid = _int_be[dt.itemsize]
            else:
                tid = _int_nt[dt.itemsize]
        elif dt.kind == c'u':
            if dt.byteorder == c'<':
                tid = _uint_le[dt.itemsize]
            elif dt.byteorder == c'>':
                tid = _uint_be[dt.itemsize]
            else:
                tid = _uint_nt[dt.itemsize]
        else:
            raise TypeError('Illegal int kind "%s"' % dt.kind)
    except KeyError:
        raise TypeError("Unsupported integer size (%s)" % dt.itemsize)

    return TypeIntegerID(H5Tcopy(tid))


cdef TypeEnumID _c_enum(cnp.dtype dt, dict vals):
    # Enums
    cdef:
        TypeIntegerID base
        TypeEnumID out

    base = _c_int(dt)

    out = TypeEnumID(H5Tenum_create(base.id))
    for name in sorted(vals):
        if isinstance(name, bytes):
            bname = name
        else:
            bname = unicode(name).encode('utf8')
        out.enum_insert(bname, vals[name])
    return out


cdef TypeEnumID _c_bool(cnp.dtype dt):
    # Booleans
    global cfg

    cdef TypeEnumID out
    out = TypeEnumID(H5Tenum_create(H5T_NATIVE_INT8))

    out.enum_insert(cfg._f_name, 0)
    out.enum_insert(cfg._t_name, 1)

    return out


cdef TypeArrayID _c_array(cnp.dtype dt, int logical):
    # Arrays
    cdef:
        cnp.dtype base
        TypeID type_base
        object shape

    base, shape = dt.subdtype
    try:
        shape = tuple(shape)
    except TypeError:
        try:
            shape = (int(shape),)
        except TypeError:
            raise TypeError("Array shape for dtype must be a sequence or integer")
    type_base = py_create(base, logical=logical)
    return array_create(type_base, shape)


cdef TypeOpaqueID _c_opaque(cnp.dtype dt):
    # Opaque
    return TypeOpaqueID(H5Tcreate(H5T_OPAQUE, dt.itemsize))


cdef TypeOpaqueID _c_opaque_tagged(cnp.dtype dt):
    """Create an HDF5 opaque data type with a tag recording the numpy dtype.

    Tagged opaque types can be read back easily in h5py, but not in other tools
    (they are *opaque*).

    The default tag is generated via the code:
    ``b"NUMPY:" + dt_in.descr[0][1].encode()``.
    """
    cdef TypeOpaqueID new_type = _c_opaque(dt)
    new_type.set_tag(b"NUMPY:" + dt.descr[0][1].encode())

    return new_type

cdef TypeStringID _c_string(cnp.dtype dt):
    # Strings (fixed-length)
    cdef hid_t tid

    tid = H5Tcopy(H5T_C_S1)
    H5Tset_size(tid, dt.itemsize)
    H5Tset_strpad(tid, H5T_STR_NULLPAD)
    if dt.metadata and dt.metadata.get('h5py_encoding') == 'utf-8':
        H5Tset_cset(tid, H5T_CSET_UTF8)
    return TypeStringID(tid)

cdef TypeCompoundID _c_complex(cnp.dtype dt):
    # Complex numbers (names depend on cfg)
    global cfg

    cdef hid_t tid, tid_sub
    cdef size_t size, off_r, off_i

    cdef size_t length = dt.itemsize
    cdef char byteorder = dt.byteorder

    if length == 8:
        size = h5py_size_n64
        off_r = h5py_offset_n64_real
        off_i = h5py_offset_n64_imag
        if byteorder == c'<':
            tid_sub = H5T_IEEE_F32LE
        elif byteorder == c'>':
            tid_sub = H5T_IEEE_F32BE
        else:
            tid_sub = H5T_NATIVE_FLOAT
    elif length == 16:
        size = h5py_size_n128
        off_r = h5py_offset_n128_real
        off_i = h5py_offset_n128_imag
        if byteorder == c'<':
            tid_sub = H5T_IEEE_F64LE
        elif byteorder == c'>':
            tid_sub = H5T_IEEE_F64BE
        else:
            tid_sub = H5T_NATIVE_DOUBLE

    elif length == 32:
        IF COMPLEX256_SUPPORT:
            size = h5py_size_n256
            off_r = h5py_offset_n256_real
            off_i = h5py_offset_n256_imag
            tid_sub = H5T_NATIVE_LDOUBLE
        ELSE:
            raise TypeError("Illegal length %d for complex dtype" % length)
    else:
        raise TypeError("Illegal length %d for complex dtype" % length)

    tid = H5Tcreate(H5T_COMPOUND, size)
    H5Tinsert(tid, cfg._r_name, off_r, tid_sub)
    H5Tinsert(tid, cfg._i_name, off_i, tid_sub)

    return TypeCompoundID(tid)

cdef TypeCompoundID _c_compound(cnp.dtype dt, int logical, int aligned):
    # Compound datatypes
    cdef:
        hid_t tid
        TypeID member_type
        object member_dt
        size_t member_offset = 0
        dict fields = {}

    # Correctly converting a numpy/h5py dtype to an HDF5 compound type (one
    # composed of subtypes) has three aspects we must consider:
    # 1. numpy/h5py dtypes do not always have the same size as the equivalent
    #    HDF5 type (which can result in overlapping members if we are not careful)
    # 2. For correct round-tripping of aligned dtypes, we need to work out how
    #    much padding is required by looking at the field offsets
    # 3. There is no requirement that the offsets be monotonically increasing
    #
    # The code below tries to cover all three aspects

    # Build list of names, offsets, and types, sorted by increasing offset
    # (i.e. the position of the member in the struct)
    for name in sorted(dt.names, key=(lambda n: dt.fields[n][1])):
        field = dt.fields[name]
        h5_name = name.encode('utf8') if isinstance(name, unicode) else name

        # Get HDF5 data types and set the offset for each member
        member_dt = field[0]
        member_offset = max(member_offset, field[1])
        member_type = py_create(member_dt, logical=logical, aligned=aligned)
        if aligned and (member_offset > field[1]
                        or member_dt.itemsize != member_type.get_size()):
            raise TypeError("Enforced alignment not compatible with HDF5 type")
        fields[name] = (h5_name, member_offset, member_type)

        # Update member offset based on the HDF5 type size
        member_offset += member_type.get_size()

    member_offset = max(member_offset, dt.itemsize)
    if aligned and member_offset > dt.itemsize:
        raise TypeError("Enforced alignment not compatible with HDF5 type")

    # Create compound with the necessary size, and insert its members
    tid = H5Tcreate(H5T_COMPOUND, member_offset)
    for name in dt.names:
        h5_name, member_offset, member_type = fields[name]
        H5Tinsert(tid, h5_name, member_offset, member_type.id)

    return TypeCompoundID(tid)

cdef TypeStringID _c_vlen_str():
    # Variable-length strings
    cdef hid_t tid
    tid = H5Tcopy(H5T_C_S1)
    H5Tset_size(tid, H5T_VARIABLE)
    return TypeStringID(tid)

cdef TypeStringID _c_vlen_unicode():
    cdef hid_t tid
    tid = H5Tcopy(H5T_C_S1)
    H5Tset_size(tid, H5T_VARIABLE)
    H5Tset_cset(tid, H5T_CSET_UTF8)
    return TypeStringID(tid)

cdef TypeReferenceID _c_ref(object refclass):
    if refclass is Reference:
        return STD_REF_OBJ
    elif refclass is RegionReference:
        return STD_REF_DSETREG
    raise TypeError("Unrecognized reference code")


cpdef TypeID py_create(object dtype_in, bint logical=0, bint aligned=0):
    """(OBJECT dtype_in, BOOL logical=False) => TypeID

    Given a Numpy dtype object, generate a byte-for-byte memory-compatible
    HDF5 datatype object.  The result is guaranteed to be transient and
    unlocked.

    :param dtype_in: may be a dtype object, or anything which can be
        converted to a dtype, including strings like '<f4'.

    :param logical: when set, return the logically appropriate HDF5 type for
        "hinted" dtypes (variable-length strings and vlens, enums, references,
        tagged opaque data) instead of a byte-for-byte memory representation.

    :param aligned: when set, require the generated HDF5 type to match the
        dtype's field offsets and item sizes exactly; a TypeError is raised
        if this is not possible.
    """
    cdef cnp.dtype dt = np.dtype(dtype_in)
    cdef char kind = dt.kind

    with phil:

        # Hinted types (enums, tagged opaque data) are checked first
        enum_vals = check_enum_dtype(dt)
        if enum_vals is not None and logical:
            return _c_enum(dt, enum_vals)

        if check_opaque_dtype(dt):
            return _c_opaque_tagged(dt) if logical else _c_opaque(dt)

        # Float
        if kind == c'f':
            return _c_float(dt)

        # Integer
        elif kind == c'u' or kind == c'i':
            return _c_int(dt)

        # Complex
        elif kind == c'c':
            return _c_complex(dt)

        # Compound
        elif (kind == c'V') and ((<object> dt).names is not None):
            return _c_compound(dt, logical, aligned)

        # Array or opaque
        elif kind == c'V':
            if dt.subdtype is not None:
                return _c_array(dt, logical)
            else:
                return _c_opaque(dt)

        # String
        elif kind == c'S':
            return _c_string(dt)

        # Boolean
        elif kind == c'b':
            return _c_bool(dt)

        # Object types (including those with vlen hints)
        elif kind == c'O':

            if logical:
                vlen = check_vlen_dtype(dt)
                if vlen is bytes:
                    return _c_vlen_str()
                elif vlen is unicode:
                    return _c_vlen_unicode()
                elif vlen is not None:
                    return vlen_create(py_create(vlen, logical))

                refclass = check_ref_dtype(dt)
                if refclass is not None:
                    return _c_ref(refclass)

                raise TypeError("Object dtype %r has no native HDF5 equivalent" % (dt,))

            return PYTHON_OBJECT

        # Unrecognized
        else:
            raise TypeError("No conversion path for dtype: %s" % repr(dt))

def vlen_dtype(basetype):
    """Make a numpy dtype for an HDF5 variable-length datatype

    For variable-length string dtypes, use :func:`string_dtype` instead.
    """
    return np.dtype('O', metadata={'vlen': basetype})
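
# A minimal usage sketch: an object dtype hinted to hold variable-length
# sequences of int32, and recovering the hint.
# >>> dt = vlen_dtype(np.int32)
# >>> check_vlen_dtype(dt)
# <class 'numpy.int32'>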

def string_dtype(encoding='utf-8', length=None):
    """Make a numpy dtype for HDF5 strings

    encoding may be 'utf-8' or 'ascii'.

    length may be an integer for a fixed length string dtype, or None for
    variable length strings. String lengths for HDF5 are counted in bytes,
    not unicode code points.

    For variable length strings, the data should be passed as Python str
    objects if the encoding is 'utf-8', and bytes if it is 'ascii'.
    For fixed length strings, the data should be numpy fixed length *bytes*
    arrays, regardless of the encoding. Fixed length unicode data is not
    supported.
    """
    # Normalise encoding name:
    try:
        encoding = codecs.lookup(encoding).name
    except LookupError:
        pass  # Use our error below

    if encoding not in {'ascii', 'utf-8'}:
        raise ValueError("Invalid encoding (%r); 'utf-8' or 'ascii' allowed"
                         % encoding)

    if isinstance(length, int):
        # Fixed length string
        return np.dtype("|S" + str(length), metadata={'h5py_encoding': encoding})
    elif length is None:
        vlen = unicode if (encoding == 'utf-8') else bytes
        return np.dtype('O', metadata={'vlen': vlen})
    else:
        raise TypeError("length must be integer or None (got %r)" % length)

def enum_dtype(values_dict, basetype=np.uint8):
    """Create a NumPy representation of an HDF5 enumerated type

    *values_dict* maps string names to integer values. *basetype* is an
    appropriate integer base dtype large enough to hold the possible options.
    """
    dt = np.dtype(basetype)
    if not np.issubdtype(dt, np.integer):
        raise TypeError("Only integer types can be used as enums")

    return np.dtype(dt, metadata={'enum': values_dict})
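
# A minimal usage sketch: an int8-based enum and recovering its mapping.
# >>> dt = enum_dtype({'RED': 0, 'GREEN': 1}, basetype='i1')
# >>> check_enum_dtype(dt)
# {'RED': 0, 'GREEN': 1}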


def opaque_dtype(np_dtype):
    """Return an equivalent dtype tagged to be stored in an HDF5 opaque type.

    This makes it easy to store numpy data like datetimes for which there is
    no equivalent HDF5 type, but it's not interoperable: other tools won't treat
    the opaque data as datetimes.
    """
    dt = np.dtype(np_dtype)
    if np.issubdtype(dt, np.object_):
        raise TypeError("Cannot store numpy object arrays as opaque data")
    if dt.names is not None:
        raise TypeError("Cannot store numpy structured arrays as opaque data")
    if dt.subdtype is not None:
        raise TypeError("Cannot store numpy sub-array dtype as opaque data")
    if dt.itemsize == 0:
        raise TypeError("dtype for opaque data must have explicit size")

    return np.dtype(dt, metadata={'h5py_opaque': True})
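
# A minimal usage sketch: datetime64 has no native HDF5 equivalent, but it
# can be stored as tagged opaque data.
# >>> dt = opaque_dtype('datetime64[s]')
# >>> check_opaque_dtype(dt)
# True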


ref_dtype = np.dtype('O', metadata={'ref': Reference})
regionref_dtype = np.dtype('O', metadata={'ref': RegionReference})


@with_phil
def special_dtype(**kwds):
    """ Create a new h5py "special" type.  Only one keyword may be given.

    Legal keywords are:

    vlen = basetype
        Base type for HDF5 variable-length datatype. This can be Python
        str type or instance of np.dtype.
        Example: special_dtype( vlen=str )

    enum = (basetype, values_dict)
        Create a NumPy representation of an HDF5 enumerated type.  Provide
        a 2-tuple containing an (integer) base dtype and a dict mapping
        string names to integer values.

    ref = Reference | RegionReference
        Create a NumPy representation of an HDF5 object or region reference
        type.
    """

    if len(kwds) != 1:
        raise TypeError("Exactly one keyword may be provided")

    name, val = kwds.popitem()

    if name == 'vlen':
        return np.dtype('O', metadata={'vlen': val})

    if name == 'enum':
        try:
            dt, enum_vals = val
        except TypeError:
            raise TypeError("Enums must be created from a 2-tuple (basetype, values_dict)")
        return enum_dtype(enum_vals, dt)

    if name == 'ref':
        if val not in (Reference, RegionReference):
            raise ValueError("Ref class must be Reference or RegionReference")
        return ref_dtype if (val is Reference) else regionref_dtype

    raise TypeError('Unknown special type "%s"' % name)
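
# A minimal usage sketch: special_dtype() is the older spelling of the
# helpers above, producing object dtypes carrying the same metadata hints.
# >>> special_dtype(vlen=str)               # same hint as vlen_dtype(str)
# dtype('O')
# >>> special_dtype(ref=Reference) is ref_dtype
# True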


def check_vlen_dtype(dt):
    """If the dtype represents an HDF5 vlen, returns the Python base class.

    Returns None if the dtype does not represent an HDF5 vlen.
    """
    try:
        return dt.metadata.get('vlen', None)
    except AttributeError:
        return None

string_info = namedtuple('string_info', ['encoding', 'length'])

def check_string_dtype(dt):
    """If the dtype represents an HDF5 string, returns a string_info object.

    The returned string_info object holds the encoding and the length.
    The encoding can only be 'utf-8' or 'ascii'. The length may be None
    for a variable-length string, or a fixed length in bytes.

    Returns None if the dtype does not represent an HDF5 string.
    """
    vlen_kind = check_vlen_dtype(dt)
    if vlen_kind is unicode:
        return string_info('utf-8', None)
    elif vlen_kind is bytes:
        return string_info('ascii', None)
    elif dt.kind == 'S':
        enc = (dt.metadata or {}).get('h5py_encoding', 'ascii')
        return string_info(enc, dt.itemsize)
    else:
        return None

def check_enum_dtype(dt):
    """If the dtype represents an HDF5 enumerated type, returns the dictionary
    mapping string names to integer values.

    Returns None if the dtype does not represent an HDF5 enumerated type.
    """
    try:
        return dt.metadata.get('enum', None)
    except AttributeError:
        return None

def check_opaque_dtype(dt):
    """Return True if the dtype given is tagged to be stored as HDF5 opaque data
    """
    try:
        return dt.metadata.get('h5py_opaque', False)
    except AttributeError:
        return False

def check_ref_dtype(dt):
    """If the dtype represents an HDF5 reference type, returns the reference
    class (either Reference or RegionReference).

    Returns None if the dtype does not represent an HDF5 reference type.
    """
    try:
        return dt.metadata.get('ref', None)
    except AttributeError:
        return None

@with_phil
def check_dtype(**kwds):
    """ Check a dtype for h5py special type "hint" information.  Only one
    keyword may be given.

    vlen = dtype
        If the dtype represents an HDF5 vlen, returns the Python base class.
        Currently only built-in string vlens (str) are supported.  Returns
        None if the dtype does not represent an HDF5 vlen.

    enum = dtype
        If the dtype represents an HDF5 enumerated type, returns the dictionary
        mapping string names to integer values.  Returns None if the dtype does
        not represent an HDF5 enumerated type.

    ref = dtype
        If the dtype represents an HDF5 reference type, returns the reference
        class (either Reference or RegionReference).  Returns None if the dtype
        does not represent an HDF5 reference type.
    """

    if len(kwds) != 1:
        raise TypeError("Exactly one keyword may be provided")

    name, dt = kwds.popitem()

    if name not in ('vlen', 'enum', 'ref'):
        raise TypeError('Unknown special type "%s"' % name)

    try:
        return dt.metadata[name]
    except TypeError:
        return None
    except KeyError:
        return None


@with_phil
def convert(TypeID src not None, TypeID dst not None, size_t n,
            cnp.ndarray buf not None, cnp.ndarray bkg=None, ObjectID dxpl=None):
    """ (TypeID src, TypeID dst, UINT n, NDARRAY buf, NDARRAY bkg=None,
    PropID dxpl=None)

    Convert n contiguous elements of a buffer in-place.  The array dtype
    is ignored.  The backing buffer is optional; for conversion of compound
    types, a temporary copy of the conversion buffer will be used for backing
    if one is not supplied.
    """
    cdef:
        void* bkg_ = NULL
        void* buf_ = buf.data

    if bkg is None and (src.detect_class(H5T_COMPOUND) or
                        dst.detect_class(H5T_COMPOUND)):
        bkg = buf.copy()
    if bkg is not None:
        bkg_ = bkg.data

    H5Tconvert(src.id, dst.id, n, buf_, bkg_, pdefault(dxpl))
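
# A minimal usage sketch (assumes a little-endian machine): byte-swap three
# int32 values in place by converting between explicit HDF5 types.
# >>> a = np.array([1, 2, 3], dtype='<i4')
# >>> convert(py_create('<i4'), py_create('>i4'), a.shape[0], a)
# >>> a.view('>i4')    # the big-endian view shows 1, 2, 3 again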


@with_phil
def find(TypeID src not None, TypeID dst not None):
    """ (TypeID src, TypeID dst) => TUPLE or None

    Determine if a conversion path exists from src to dst.  Result is None
    or a tuple describing the conversion path.  Currently tuple entries are:

    1. INT need_bkg:    Whether this routine requires a backing buffer.
                        Values are BKG_NO, BKG_TEMP and BKG_YES.
    """
    cdef:
        H5T_cdata_t *data
        H5T_conv_t result = NULL

    try:
        result = H5Tfind(src.id, dst.id, &data)
        if result == NULL:
            return None
        return (data[0].need_bkg,)
    except:
        return None
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/h5z.pxd0000644000175000017500000000047414045746670015513 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/h5z.pyx0000644000175000017500000000716214507227212015526 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Filter API and constants.
"""

from libc.stdint cimport uintptr_t
from ._objects import phil, with_phil


# === Public constants and data structures ====================================

CLASS_T_VERS = H5Z_CLASS_T_VERS

FILTER_LZF = H5PY_FILTER_LZF

FILTER_ERROR    = H5Z_FILTER_ERROR
FILTER_NONE     = H5Z_FILTER_NONE
FILTER_ALL      = H5Z_FILTER_ALL
FILTER_DEFLATE  = H5Z_FILTER_DEFLATE
FILTER_SHUFFLE  = H5Z_FILTER_SHUFFLE
FILTER_FLETCHER32 = H5Z_FILTER_FLETCHER32
FILTER_SZIP     = H5Z_FILTER_SZIP
FILTER_NBIT     = H5Z_FILTER_NBIT
FILTER_SCALEOFFSET = H5Z_FILTER_SCALEOFFSET
FILTER_RESERVED = H5Z_FILTER_RESERVED
FILTER_MAX      = H5Z_FILTER_MAX

FLAG_DEFMASK    = H5Z_FLAG_DEFMASK
FLAG_MANDATORY  = H5Z_FLAG_MANDATORY
FLAG_OPTIONAL   = H5Z_FLAG_OPTIONAL
FLAG_INVMASK    = H5Z_FLAG_INVMASK
FLAG_REVERSE    = H5Z_FLAG_REVERSE
FLAG_SKIP_EDC   = H5Z_FLAG_SKIP_EDC

SZIP_ALLOW_K13_OPTION_MASK  = H5_SZIP_ALLOW_K13_OPTION_MASK   #1
SZIP_CHIP_OPTION_MASK       = H5_SZIP_CHIP_OPTION_MASK        #2
SZIP_EC_OPTION_MASK         = H5_SZIP_EC_OPTION_MASK          #4
SZIP_NN_OPTION_MASK         = H5_SZIP_NN_OPTION_MASK          #32
SZIP_MAX_PIXELS_PER_BLOCK   = H5_SZIP_MAX_PIXELS_PER_BLOCK    #32

SO_FLOAT_DSCALE = H5Z_SO_FLOAT_DSCALE
SO_FLOAT_ESCALE = H5Z_SO_FLOAT_ESCALE
SO_INT          = H5Z_SO_INT
SO_INT_MINBITS_DEFAULT = H5Z_SO_INT_MINBITS_DEFAULT

FILTER_CONFIG_ENCODE_ENABLED = H5Z_FILTER_CONFIG_ENCODE_ENABLED
FILTER_CONFIG_DECODE_ENABLED = H5Z_FILTER_CONFIG_DECODE_ENABLED

ERROR_EDC   = H5Z_ERROR_EDC
DISABLE_EDC = H5Z_DISABLE_EDC
ENABLE_EDC  = H5Z_ENABLE_EDC
NO_EDC      = H5Z_NO_EDC


# === Filter API  =============================================================

@with_phil
def filter_avail(int filter_code):
    """(INT filter_code) => BOOL

    Determine if the given filter is available to the library. The
    filter code should be one of:

    - FILTER_DEFLATE
    - FILTER_SHUFFLE
    - FILTER_FLETCHER32
    - FILTER_SZIP
    """
    return H5Zfilter_avail(filter_code)
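
# A minimal usage sketch: DEFLATE (gzip) is built into HDF5 itself, so it
# should normally be reported as available.
# >>> filter_avail(FILTER_DEFLATE)
# True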


@with_phil
def get_filter_info(int filter_code):
    """(INT filter_code) => INT filter_flags

    Retrieve a bitfield with information about the given filter. The
    filter code should be one of:

    - FILTER_DEFLATE
    - FILTER_SHUFFLE
    - FILTER_FLETCHER32
    - FILTER_SZIP

    Valid bitmasks for use with the returned bitfield are:

    - FILTER_CONFIG_ENCODE_ENABLED
    - FILTER_CONFIG_DECODE_ENABLED
    """
    cdef unsigned int flags
    H5Zget_filter_info(filter_code, &flags)
    return flags
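
# A minimal usage sketch: check whether a filter can both encode (compress)
# and decode (decompress) data.
# >>> flags = get_filter_info(FILTER_DEFLATE)
# >>> bool(flags & FILTER_CONFIG_ENCODE_ENABLED)
# True
# >>> bool(flags & FILTER_CONFIG_DECODE_ENABLED)
# True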


@with_phil
def register_filter(uintptr_t cls_pointer_address):
    '''(INT cls_pointer_address) => BOOL

    Register a new filter from the memory address of a buffer containing a
    ``H5Z_class1_t`` or ``H5Z_class2_t`` data structure describing the filter.

    `cls_pointer_address` can be retrieved from a HDF5 filter plugin dynamic
    library::

        import ctypes

        filter_clib = ctypes.CDLL("/path/to/my_hdf5_filter_plugin.so")
        filter_clib.H5PLget_plugin_info.restype = ctypes.c_void_p

        h5py.h5z.register_filter(filter_clib.H5PLget_plugin_info())

    '''
    return H5Zregister(cls_pointer_address) >= 0


@with_phil
def unregister_filter(int filter_code):
    '''(INT filter_code) => BOOL

    Unregister a filter

    '''
    return H5Zunregister(filter_code) >= 0


def _register_lzf():
    register_lzf()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1684254249.0
h5py-3.13.0/h5py/ipy_completer.py0000644000175000017500000000722514430727051017504 0ustar00takluyvertakluyver#+
#
# This file is part of h5py, a low-level Python interface to the HDF5 library.
#
# Contributed by Darren Dale
#
# Copyright (C) 2009 Darren Dale
#
# http://h5py.org
# License: BSD  (See LICENSE.txt for full license)
#
#-

# pylint: disable=eval-used,protected-access

"""
    This is the h5py completer extension for ipython.  It is loaded by
    calling the function h5py.enable_ipython_completer() from within an
    interactive IPython session.

    It will let you do things like::

      f=File('foo.h5')
      f['
      # or:
      f['ite

    which will do tab completion based on the subgroups of `f`. Also::

      f['item1'].at

    will perform tab completion for the attributes in the usual way. This should
    also work::

      a = b = f['item1'].attrs.

    as should::

      f['item1/item2/it
"""

import posixpath
import re
from ._hl.attrs import AttributeManager
from ._hl.base import HLObject


from IPython import get_ipython

from IPython.core.error import TryNext
from IPython.utils import generics

re_attr_match = re.compile(r"(?:.*\=)?(.+\[.*\].*)\.(\w*)$")
re_item_match = re.compile(r"""(?:.*\=)?(.*)\[(?P<s>['|"])(?!.*(?P=s))(.*)$""")
re_object_match = re.compile(r"(?:.*\=)?(.+?)(?:\[)")
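
# A small sketch of what the patterns above are meant to capture, mirroring
# how h5py_item_completer() and h5py_attr_completer() use them below:
# >>> re_item_match.split("f['ite")[1:4:2]        # base object, partial key
# ['f', 'ite']
# >>> re_attr_match.split("f['item1'].at")[1:3]   # base object, partial attribute
# ["f['item1']", 'at']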


def _retrieve_obj(name, context):
    """ Filter function for completion. """

    # we don't want to call any functions, but I couldn't find a robust regex
    # that filtered them without unintended side effects. So keys containing
    # "(" will not complete.

    if '(' in name:
        raise ValueError()

    return eval(name, context.user_ns)


def h5py_item_completer(context, command):
    """Compute possible item matches for dict-like objects"""

    base, item = re_item_match.split(command)[1:4:2]

    try:
        obj = _retrieve_obj(base, context)
    except Exception:
        return []

    path, _ = posixpath.split(item)

    try:
        if path:
            items = (posixpath.join(path, name) for name in obj[path].keys())
        else:
            items = obj.keys()
    except AttributeError:
        return []

    items = list(items)

    return [i for i in items if i[:len(item)] == item]


def h5py_attr_completer(context, command):
    """Compute possible attr matches for nested dict-like objects"""

    base, attr = re_attr_match.split(command)[1:3]
    base = base.strip()

    try:
        obj = _retrieve_obj(base, context)
    except Exception:
        return []

    attrs = dir(obj)
    try:
        attrs = generics.complete_object(obj, attrs)
    except TryNext:
        pass

    try:
        # support >=ipython-0.12
        omit__names = get_ipython().Completer.omit__names
    except AttributeError:
        omit__names = 0

    if omit__names == 1:
        attrs = [a for a in attrs if not a.startswith('__')]
    elif omit__names == 2:
        attrs = [a for a in attrs if not a.startswith('_')]

    return ["%s.%s" % (base, a) for a in attrs if a[:len(attr)] == attr]


def h5py_completer(self, event):
    """ Completer function to be loaded into IPython """
    base = re_object_match.split(event.line)[1]

    try:
        obj = self._ofind(base).obj
    except AttributeError:
        obj = self._ofind(base).get('obj')

    if not isinstance(obj, (AttributeManager, HLObject)):
        raise TryNext

    try:
        return h5py_attr_completer(self, event.line)
    except ValueError:
        pass

    try:
        return h5py_item_completer(self, event.line)
    except ValueError:
        pass

    return []


def load_ipython_extension(ip=None):
    """ Load completer function into IPython """
    if ip is None:
        ip = get_ipython()
    ip.set_hook('complete_command', h5py_completer, re_key=r"(?:.*\=)?(.+?)\[")
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5798821
h5py-3.13.0/h5py/tests/0000755000175000017500000000000014755127217015423 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/__init__.py0000644000175000017500000000123414045746670017536 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.


def run_tests(args=''):
    try:
        from pytest import main
    except ImportError:
        print("Tests require pytest, pytest not installed")
        return 1
    else:
        from shlex import split
        from subprocess import call
        from sys import executable
        cli = [executable, "-m", "pytest", "--pyargs", "h5py"]
        cli.extend(split(args))
        return call(cli)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1712309901.0
h5py-3.13.0/h5py/tests/common.py0000644000175000017500000001713014603743215017261 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import sys
import os
import shutil
import inspect
import tempfile
import subprocess
from contextlib import contextmanager
from functools import wraps

import numpy as np
from numpy.lib.recfunctions import repack_fields
import h5py

import unittest as ut


# Check if non-ascii filenames are supported
# Evidently this is the most reliable way to check
# See also h5py issue #263 and ipython #466
# To test for this, run the testsuite with LC_ALL=C
try:
    testfile, fname = tempfile.mkstemp(chr(0x03b7))
except UnicodeError:
    UNICODE_FILENAMES = False
else:
    UNICODE_FILENAMES = True
    os.close(testfile)
    os.unlink(fname)
    del fname
    del testfile


class TestCase(ut.TestCase):

    """
        Base class for unit tests.
    """

    @classmethod
    def setUpClass(cls):
        cls.tempdir = tempfile.mkdtemp(prefix='h5py-test_')

    @classmethod
    def tearDownClass(cls):
        shutil.rmtree(cls.tempdir)

    def mktemp(self, suffix='.hdf5', prefix='', dir=None):
        if dir is None:
            dir = self.tempdir
        return tempfile.mktemp(suffix, prefix, dir=dir)

    def mktemp_mpi(self, comm=None, suffix='.hdf5', prefix='', dir=None):
        if comm is None:
            from mpi4py import MPI
            comm = MPI.COMM_WORLD
        fname = None
        if comm.Get_rank() == 0:
            fname = self.mktemp(suffix, prefix, dir)
        fname = comm.bcast(fname, 0)
        return fname

    def setUp(self):
        self.f = h5py.File(self.mktemp(), 'w')

    def tearDown(self):
        try:
            if self.f:
                self.f.close()
        except:
            pass

    def assertSameElements(self, a, b):
        for x in a:
            match = False
            for y in b:
                if x == y:
                    match = True
            if not match:
                raise AssertionError("Item '%s' appears in a but not b" % x)

        for x in b:
            match = False
            for y in a:
                if x == y:
                    match = True
            if not match:
                raise AssertionError("Item '%s' appears in b but not a" % x)

    def assertArrayEqual(self, dset, arr, message=None, precision=None, check_alignment=True):
        """ Make sure dset and arr have the same shape, dtype and contents, to
            within the given precision, optionally ignoring differences in dtype alignment.

            Note that dset may be a NumPy array or an HDF5 dataset.
        """
        if precision is None:
            precision = 1e-5
        if message is None:
            message = ''
        else:
            message = ' (%s)' % message

        if np.isscalar(dset) or np.isscalar(arr):
            assert np.isscalar(dset) and np.isscalar(arr), \
                'Scalar/array mismatch ("%r" vs "%r")%s' % (dset, arr, message)
            dset = np.asarray(dset)
            arr = np.asarray(arr)

        assert dset.shape == arr.shape, \
            "Shape mismatch (%s vs %s)%s" % (dset.shape, arr.shape, message)
        if dset.dtype != arr.dtype:
            if check_alignment:
                normalized_dset_dtype = dset.dtype
                normalized_arr_dtype = arr.dtype
            else:
                normalized_dset_dtype = repack_fields(dset.dtype)
                normalized_arr_dtype = repack_fields(arr.dtype)

            assert normalized_dset_dtype == normalized_arr_dtype, \
                "Dtype mismatch (%s vs %s)%s" % (normalized_dset_dtype, normalized_arr_dtype, message)

            if not check_alignment:
                if normalized_dset_dtype != dset.dtype:
                    dset = repack_fields(np.asarray(dset))
                if normalized_arr_dtype != arr.dtype:
                    arr = repack_fields(np.asarray(arr))

        if arr.dtype.names is not None:
            for n in arr.dtype.names:
                message = '[FIELD %s] %s' % (n, message)
                self.assertArrayEqual(dset[n], arr[n], message=message, precision=precision, check_alignment=check_alignment)
        elif arr.dtype.kind in ('i', 'f'):
            assert np.all(np.abs(dset[...] - arr[...]) < precision), \
                "Arrays differ by more than %.3f%s" % (precision, message)
        elif arr.dtype.kind == 'O':
            for v1, v2 in zip(dset.flat, arr.flat):
                self.assertArrayEqual(v1, v2, message=message, precision=precision, check_alignment=check_alignment)
        else:
            assert np.all(dset[...] == arr[...]), \
                "Arrays are not equal (dtype %s) %s" % (arr.dtype.str, message)

    def assertNumpyBehavior(self, dset, arr, s, skip_fast_reader=False):
        """ Apply slicing arguments "s" to both dset and arr.

        Succeeds if the results of the slicing are identical, or the
        exception raised is of the same type for both.

        "arr" must be a Numpy array; "dset" may be a NumPy array or dataset.
        """
        exc = None
        try:
            arr_result = arr[s]
        except Exception as e:
            exc = type(e)

        s_fast = s if isinstance(s, tuple) else (s,)

        if exc is None:
            self.assertArrayEqual(dset[s], arr_result)

            if not skip_fast_reader:
                self.assertArrayEqual(
                    dset._fast_reader.read(s_fast),
                    arr_result,
                )
        else:
            with self.assertRaises(exc):
                dset[s]

            if not skip_fast_reader:
                with self.assertRaises(exc):
                    dset._fast_reader.read(s_fast)

NUMPY_RELEASE_VERSION = tuple([int(i) for i in np.__version__.split(".")[0:2]])

@contextmanager
def closed_tempfile(suffix='', text=None):
    """
    Context manager which yields the path to a closed temporary file with the
    suffix `suffix`. The file will be deleted on exiting the context. An
    additional argument `text` can be provided to have the file contain `text`.
    """
    with tempfile.NamedTemporaryFile(
        'w+t', suffix=suffix, delete=False
    ) as test_file:
        file_name = test_file.name
        if text is not None:
            test_file.write(text)
            test_file.flush()
    yield file_name
    shutil.rmtree(file_name, ignore_errors=True)
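
# A minimal usage sketch: create an HDF5 file at a temporary path that is
# cleaned up when the block exits.
#
#     with closed_tempfile('.h5') as path:
#         with h5py.File(path, 'w') as f:
#             f['x'] = 1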


def insubprocess(f):
    """Runs a test in its own subprocess"""
    @wraps(f)
    def wrapper(request, *args, **kwargs):
        curr_test = inspect.getsourcefile(f) + "::" + request.node.name
        # get block around test name
        insub = "IN_SUBPROCESS_" + curr_test
        for c in "/\\,:.":
            insub = insub.replace(c, "_")
        defined = os.environ.get(insub, None)
        # re-run this test in a subprocess unless we are already inside one
        if defined:
            return f(request, *args, **kwargs)
        else:
            os.environ[insub] = '1'
            env = os.environ.copy()
            env[insub] = '1'
            env.update(getattr(f, 'subproc_env', {}))

            with closed_tempfile() as stdout:
                with open(stdout, 'w+t') as fh:
                    rtn = subprocess.call([sys.executable, '-m', 'pytest', curr_test],
                                          stdout=fh, stderr=fh, env=env)
                with open(stdout, 'rt') as fh:
                    out = fh.read()

            assert rtn == 0, "\n" + out
    return wrapper


def subproc_env(d):
    """Set environment variables for the @insubprocess decorator"""
    def decorator(f):
        f.subproc_env = d
        return f

    return decorator
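
# A minimal usage sketch (test body and environment value are illustrative):
# run a single test in its own interpreter with an extra environment
# variable.  @subproc_env must sit closest to the function so that
# @insubprocess can read the attribute it sets.
#
#     @insubprocess
#     @subproc_env({'HDF5_USE_FILE_LOCKING': 'FALSE'})
#     def test_in_own_process(request):
#         with h5py.File('demo.h5', 'w') as f:
#             f['x'] = 1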
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1672853193.0
h5py-3.13.0/h5py/tests/conftest.py0000644000175000017500000000104314355333311017606 0ustar00takluyvertakluyverimport h5py
import pytest


@pytest.fixture()
def writable_file(tmp_path):
    with h5py.File(tmp_path / 'test.h5', 'w') as f:
        yield f


def pytest_addoption(parser):
    parser.addoption(
        '--no-network', action='store_true', default=False, help='No network access'
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption('--no-network'):
        nonet = pytest.mark.skip(reason='No Internet')
        for item in items:
            if 'nonetwork' in item.keywords:
                item.add_marker(nonet)
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5808823
h5py-3.13.0/h5py/tests/data_files/0000755000175000017500000000000014755127217017516 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/data_files/__init__.py0000644000175000017500000000030114350630273021611 0ustar00takluyvertakluyverfrom os.path import dirname, join

def get_data_file_path(basename):
    """
    Returns the path to the test data file given by `basename`
    """
    return join(dirname(__file__), basename)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/data_files/vlen_string_dset.h5 [binary HDF5 test data omitted]
h5py-3.13.0/h5py/tests/data_files/vlen_string_dset_utc.h5 [binary HDF5 test data omitted]
2009-12-21T05:16:23.828039Z2009-12-21T05:16:50.882595Z2009-12-21T05:17:18.432640Z2009-12-21T05:17:45.482559Z2009-12-21T05:18:13.025634Z2009-12-21T05:18:40.081655Z2009-12-21T05:19:07.628318Z2009-12-21T05:19:34.681634Z2009-12-21T05:20:02.223955Z2009-12-21T05:20:29.282389Z2009-12-21T05:20:56.828819Z2009-12-21T05:21:23.881531Z2009-12-21T05:21:51.430163Z2009-12-21T05:22:18.480269Z2009-12-21T05:22:46.028587Z2009-12-21T05:23:13.080590Z2009-12-21T05:23:40.622196Z2009-12-21T05:24:07.681764Z2009-12-21T05:24:35.227760Z 2009-12-21T05:25:02.280704Z!2009-12-21T05:25:29.822755Z"2009-12-21T05:25:56.879644Z#2009-12-21T05:26:24.422704Z$2009-12-21T05:26:51.479174Z%2009-12-21T05:27:19.022838Z&2009-12-21T05:27:46.084784Z'2009-12-21T05:28:13.623304Z(2009-12-21T05:28:40.677644Z)2009-12-21T05:29:08.238868Z*2009-12-21T05:29:35.279966Z+2009-12-21T05:30:02.822048Z,2009-12-21T05:30:29.878518Z-2009-12-21T05:30:57.420958Z.2009-12-21T05:31:24.477476Z/2009-12-21T05:31:52.021936Z02009-12-21T05:32:19.077780Z12009-12-21T05:32:46.622049Z22009-12-21T05:33:13.677248Z32009-12-21T05:33:41.221849Z42009-12-21T05:34:08.277786Z52009-12-21T05:34:35.826880Z62009-12-21T05:35:02.881737Z72009-12-21T05:35:30.419368Z82009-12-21T05:35:57.476412Z92009-12-21T05:36:25.021244Z:2009-12-21T05:36:52.075538Z;2009-12-21T05:37:19.618987Z<2009-12-21T05:37:46.676480Z=2009-12-21T05:38:14.219466Z>2009-12-21T05:38:41.275591Z?2009-12-21T05:39:08.819198Z@2009-12-21T05:39:35.875771ZA2009-12-21T06:02:54.213325ZB2009-12-21T06:03:21.268643ZC2009-12-21T06:03:48.812308ZD2009-12-21T06:04:15.868872ZE2009-12-21T06:04:43.412571ZF2009-12-21T06:05:10.473070ZG2009-12-21T06:05:38.015914ZH2009-12-21T06:06:05.065713ZI2009-12-21T06:06:32.610501ZJ2009-12-21T06:06:59.664437ZK2009-12-21T06:07:27.216886ZL2009-12-21T06:07:54.263966ZM2009-12-21T06:08:21.814891ZN2009-12-21T06:08:48.868119ZO2009-12-21T06:09:16.415698ZP2009-12-21T06:09:43.466454ZQ2009-12-21T06:10:11.017058ZR2009-12-21T06:10:38.067164ZS2009-12-21T06:11:05.617783ZT2009-12-21T06:11:32.667485ZU2009-12-21T06:12:00.207431ZV2009-12-21T06:12:27.266394ZW2009-12-21T06:12:54.809149ZX2009-12-21T06:13:21.866313ZY2009-12-21T06:13:49.408074ZZ2009-12-21T06:14:16.466029Z[2009-12-21T06:14:44.007805Z\2009-12-21T06:15:11.065977Z]2009-12-21T06:15:38.612175Z^2009-12-21T06:16:05.665910Z_2009-12-21T06:16:33.206896Z`2009-12-21T06:17:00.265021Za2009-12-21T06:17:27.806969Zb2009-12-21T06:17:54.868831Zc2009-12-21T06:18:22.406298Zd2009-12-21T06:18:49.465290Ze2009-12-21T06:19:17.006866Zf2009-12-21T06:19:44.064866Zg2009-12-21T06:20:11.606080Zh2009-12-21T06:20:38.664396Zi2009-12-21T06:21:06.206778Zj2009-12-21T06:21:33.264686Zk2009-12-21T06:22:00.806521Zl2009-12-21T06:22:27.870251Zm2009-12-21T06:22:55.407622Zn2009-12-21T06:23:22.463993Zo2009-12-21T06:23:50.007353Zp2009-12-21T06:24:17.063105Zq2009-12-21T06:24:44.606165Zr2009-12-21T06:25:11.665055Zs2009-12-21T06:25:39.206987Zt2009-12-21T06:26:06.262381Zu2009-12-21T06:26:33.807473Zv2009-12-21T06:27:00.862733Zw2009-12-21T06:27:28.403187Zx2009-12-21T06:27:55.463286Zy2009-12-21T06:28:23.003957Zz2009-12-21T06:28:50.061591Z{2009-12-21T06:29:17.605182Z|2009-12-21T06:29:44.661368Z}2009-12-21T06:30:12.205116Z~2009-12-21T06:30:39.261705Z2009-12-21T06:31:06.804562Z€2009-12-21T06:31:33.861467Z2009-12-21T06:32:01.402810Z‚2009-12-21T06:32:28.462609Zƒ2009-12-21T06:32:56.003379Z„2009-12-21T06:33:23.061705Z…2009-12-21T06:33:50.604133Z†2009-12-21T06:34:17.661281Z‡2009-12-21T06:34:45.203073Zˆ2009-12-21T06:35:12.261230Z‰2009-12-21T06:46:35.000846ZŠ2009-12-21T06:47:02.058541Z‹2009-12-21T06:47:29.600205ZŒ2009-12-21T06:47:56
.656070Z2009-12-21T06:48:24.200670ZŽ2009-12-21T06:48:51.257011Z2009-12-21T06:49:18.800604Z2009-12-21T06:49:45.856743Z‘2009-12-21T06:50:13.403592Z’2009-12-21T06:50:40.456288Z“2009-12-21T06:51:07.999693Z”2009-12-21T06:51:35.056484Z•2009-12-21T06:52:02.603158Z–2009-12-21T06:52:29.662264Z—2009-12-21T06:52:57.199450Z˜2009-12-21T06:53:24.261266Z™2009-12-21T06:53:51.800763Zš2009-12-21T06:54:18.851040Z›2009-12-21T06:54:46.397356Zœ2009-12-21T06:55:13.455037Z2009-12-21T06:55:40.996312Zž2009-12-21T06:56:08.054643ZŸ2009-12-21T06:56:35.595424Z 2009-12-21T06:57:02.655150Z¡2009-12-21T06:57:30.196380Z¢2009-12-21T06:57:57.253827Z£2009-12-21T06:58:24.795769Z¤2009-12-21T06:58:51.853666Z¥2009-12-21T06:59:19.391669Z¦2009-12-21T06:59:46.454189Z§2009-12-21T07:00:13.996339Z¨2009-12-21T07:00:41.054959Z©2009-12-21T07:01:08.595511Zª2009-12-21T07:01:35.659298Z«2009-12-21T07:02:03.195734Z¬2009-12-21T07:02:30.253599Z­2009-12-21T07:02:57.794643Z®2009-12-21T07:03:24.855348Z¯2009-12-21T07:03:52.395729Z°2009-12-21T07:04:19.453683Z±2009-12-21T07:04:46.995114Z²2009-12-21T07:05:14.055819Z³2009-12-21T07:05:41.594225Z´2009-12-21T07:06:08.651517Zµ2009-12-21T07:06:36.194189Z¶2009-12-21T07:07:03.251435Z·2009-12-21T07:07:30.793457Z¸2009-12-21T07:07:57.852377Z¹2009-12-21T07:08:25.393450Zº2009-12-21T07:08:52.450309Z»2009-12-21T07:09:19.991942Z¼2009-12-21T07:09:47.050211Z½2009-12-21T07:10:14.590619Z¾2009-12-21T07:10:41.655387Z¿2009-12-21T07:11:09.191621ZÀ2009-12-21T07:11:36.248045ZÁ2009-12-21T07:12:03.792532ZÂ2009-12-21T07:12:30.850181ZÃ2009-12-21T07:12:58.398115ZÄ2009-12-21T07:13:25.449712ZÅ2009-12-21T07:13:52.999628ZÆ2009-12-21T07:14:20.052996ZÇ2009-12-21T07:14:47.591433ZÈ2009-12-21T07:15:14.647888ZÉ2009-12-21T07:15:42.192877ZÊ2009-12-21T07:16:09.248194ZË2009-12-21T07:16:36.791368ZÌ2009-12-21T07:17:03.852347ZÍ2009-12-21T07:17:31.395319ZÎ2009-12-21T07:17:58.449038ZÏ2009-12-21T07:18:25.991001ZÐ2009-12-21T07:18:53.050352ZÑ2009-12-21T07:41:39.188077ZÒ2009-12-21T07:42:06.246232ZÓ2009-12-21T07:42:33.789592ZÔ2009-12-21T07:43:00.840070ZÕ2009-12-21T07:43:28.388331ZÖ2009-12-21T07:43:55.439847Z×2009-12-21T07:44:22.980803ZØ2009-12-21T07:44:50.039982ZÙ2009-12-21T07:45:17.580842ZÚ2009-12-21T07:45:44.639699ZÛ2009-12-21T07:46:12.182857ZÜ2009-12-21T07:46:39.240765ZÝ2009-12-21T07:47:06.781907ZÞ2009-12-21T07:47:33.839660Zß2009-12-21T07:48:01.381296Zà2009-12-21T07:48:28.439980Zá2009-12-21T07:48:55.980434Zâ2009-12-21T07:49:23.038083Zã2009-12-21T07:49:50.577312Zä2009-12-21T07:50:17.638032Zå2009-12-21T07:50:45.179882Zæ2009-12-21T07:51:12.242805Zç2009-12-21T07:51:39.789391Zè2009-12-21T07:52:06.844693Zé2009-12-21T07:52:34.380250Zê2009-12-21T07:53:01.436994Zë2009-12-21T07:53:28.979046Zì2009-12-21T07:53:56.036136Zí2009-12-21T07:54:23.583126Zî2009-12-21T07:54:50.638877Zï2009-12-21T07:55:18.183230Zð2009-12-21T07:55:45.235165Zñ2009-12-21T07:56:12.782661Zò2009-12-21T07:56:39.839131Zó2009-12-21T07:57:07.380877Zô2009-12-21T07:57:34.438258Zõ2009-12-21T07:58:01.978421Zö2009-12-21T07:58:29.038780Z÷2009-12-21T07:58:56.582155Zø2009-12-21T07:59:23.634742Zù2009-12-21T07:59:51.176949Zú2009-12-21T08:00:18.240507Zû2009-12-21T08:00:45.776479Zü2009-12-21T08:01:12.833584Zý2009-12-21T08:01:40.377237Zþ2009-12-21T08:02:07.436713Zÿ2009-12-21T08:02:34.977296Z2009-12-21T08:03:02.033435Z2009-12-21T08:03:29.577678Z2009-12-21T08:03:56.634220Z2009-12-21T08:04:24.177410Z2009-12-21T08:04:51.233967Z2009-12-21T08:05:18.780196Z2009-12-21T08:05:45.833713Z2009-12-21T08:06:13.378314Z2009-12-21T08:06:40.435447Z	2009-12-21T08:07:07.979845Z
2009-12-21T08:07:35.038389Z2009-12-21T08:08:02.575791Z2009-12-21T08:08:29.629510Z
2009-12-21T08:08:57.177209Z2009-12-21T08:09:24.238720Z2009-12-21T08:09:51.774334Z2009-12-21T08:10:18.832231Z2009-12-21T08:10:46.374365Z2009-12-21T08:11:13.432970Z2009-12-21T08:11:40.974405Z2009-12-21T08:12:08.028497Z2009-12-21T08:12:35.575735Z2009-12-21T08:13:02.630059Z2009-12-21T08:13:30.174690Z2009-12-21T08:13:57.234428Z2009-12-21T08:25:19.973936Z2009-12-21T08:25:47.026806Z2009-12-21T08:26:14.569964Z2009-12-21T08:26:41.627980Z2009-12-21T08:27:09.169613Z2009-12-21T08:27:36.224066Z2009-12-21T08:28:03.769157Z 2009-12-21T08:28:30.826264Z!2009-12-21T08:28:58.374582Z"2009-12-21T08:29:25.427593Z#2009-12-21T08:29:52.975367Z$2009-12-21T08:30:20.022732Z%2009-12-21T08:30:47.569018Z&2009-12-21T08:31:14.625504Z'2009-12-21T08:31:42.172209Z(2009-12-21T08:32:09.225653Z)2009-12-21T08:32:36.768237Z*2009-12-21T08:33:03.827261Z+2009-12-21T08:33:31.368893Z,2009-12-21T08:33:58.423967Z-2009-12-21T08:34:25.967012Z.2009-12-21T08:34:53.027530Z/2009-12-21T08:35:20.563331Z02009-12-21T08:35:47.625291Z12009-12-21T08:36:15.167358Z22009-12-21T08:36:42.232514Z32009-12-21T08:37:09.766872Z42009-12-21T08:37:36.823731Z52009-12-21T08:38:04.368867Z62009-12-21T08:38:31.424344Z72009-12-21T08:38:58.965125Z82009-12-21T08:39:26.030529Z92009-12-21T08:39:53.565523Z:2009-12-21T08:40:20.628445Z;2009-12-21T08:40:48.167689Z<2009-12-21T08:41:15.223368Z=2009-12-21T08:41:42.763791Z>2009-12-21T08:42:09.824929Z?2009-12-21T08:42:37.364142Z@2009-12-21T08:43:04.422024ZA2009-12-21T08:43:31.964464ZB2009-12-21T08:43:59.024781ZC2009-12-21T08:44:26.562582ZD2009-12-21T08:44:53.622697ZE2009-12-21T08:45:21.166982ZF2009-12-21T08:45:48.221622ZG2009-12-21T08:46:15.762433ZH2009-12-21T08:46:42.821151ZI2009-12-21T08:47:10.364584ZJ2009-12-21T08:47:37.421689ZK2009-12-21T08:48:04.962531ZL2009-12-21T08:48:32.019188ZM2009-12-21T08:48:59.564078ZN2009-12-21T08:49:26.620144ZO2009-12-21T08:49:54.161591ZP2009-12-21T08:50:21.219643ZQ2009-12-21T08:50:48.761741ZR2009-12-21T08:51:15.818785ZS2009-12-21T08:51:43.361861ZT2009-12-21T08:52:10.419726ZU2009-12-21T08:52:37.960150ZV2009-12-21T08:53:05.020668ZW2009-12-21T08:53:32.561076ZX2009-12-21T08:53:59.618802ZY2009-12-21T08:54:27.164049ZZ2009-12-21T08:54:54.218099Z[2009-12-21T08:55:21.765706Z\2009-12-21T08:55:48.818530Z]2009-12-21T08:56:16.359851Z^2009-12-21T08:56:43.421751Z_2009-12-21T08:57:10.959366Z`2009-12-21T08:57:38.017822Za2009-12-21T09:20:24.154140Zb2009-12-21T09:20:51.209819Zc2009-12-21T09:21:18.752087Zd2009-12-21T09:21:45.810760Ze2009-12-21T09:22:13.353231Zf2009-12-21T09:22:40.407406Zg2009-12-21T09:23:07.952559Zh2009-12-21T09:23:35.010627Zi2009-12-21T09:24:02.555688Zj2009-12-21T09:24:29.608946Zk2009-12-21T09:24:57.151588Zl2009-12-21T09:25:24.213315Zm2009-12-21T09:25:51.753537Zn2009-12-21T09:26:18.808828Zo2009-12-21T09:26:46.351050Zp2009-12-21T09:27:13.407939Zq2009-12-21T09:27:40.951186Zr2009-12-21T09:28:08.008880Zs2009-12-21T09:28:35.555369Zt2009-12-21T09:29:02.612926Zu2009-12-21T09:29:30.151082Zv2009-12-21T09:29:57.208529Zw2009-12-21T09:30:24.749558Zx2009-12-21T09:30:51.810433Zy2009-12-21T09:31:19.350469Zz2009-12-21T09:31:46.408956Z{2009-12-21T09:32:13.949813Z|2009-12-21T09:32:41.006560Z}2009-12-21T09:33:08.549486Z~2009-12-21T09:33:35.606199Z2009-12-21T09:34:03.149879Z€2009-12-21T09:34:30.207776Z2009-12-21T09:34:57.748820Z‚2009-12-21T09:35:24.804670Zƒ2009-12-21T09:35:52.349792Z„2009-12-21T09:36:19.405052Z…2009-12-21T09:36:46.946499Z†2009-12-21T09:37:14.005791Z‡2009-12-21T09:37:41.548262Zˆ2009-12-21T09:38:08.611790Z‰2009-12-21T09:38:36.149607ZŠ2009-12-21T09:39:03.204277Z‹2009-12-21T09:39:30.747740ZŒ2009-12-21T09:39:57
.804613Z2009-12-21T09:40:25.344633ZŽ2009-12-21T09:40:52.404950Z2009-12-21T09:41:19.947078Z2009-12-21T09:41:46.999902Z‘2009-12-21T09:42:14.546561Z’2009-12-21T09:42:41.603264Z“2009-12-21T09:43:09.149043Z”2009-12-21T09:43:36.203166Z•2009-12-21T09:44:03.744802Z–2009-12-21T09:46:20.002684Z—2009-12-21T09:47:42.142078Z˜2009-12-21T09:48:09.206630Z™2009-12-21T09:48:36.745816Zš2009-12-21T09:49:03.804531Z›2009-12-21T09:49:31.346167Zœ2009-12-21T09:49:58.409379Z2009-12-21T09:50:25.945180Zž2009-12-21T09:50:53.005698ZŸ2009-12-21T09:51:20.549321Z 2009-12-21T09:51:47.603444Z¡2009-12-21T09:52:15.147005Z¢2009-12-21T09:52:42.201592Z£2009-12-21T10:04:04.946263Z¤2009-12-21T10:04:31.998312Z¥2009-12-21T10:04:59.545792Z¦2009-12-21T10:05:26.598895Z§2009-12-21T10:05:54.141878Z¨2009-12-21T10:06:21.198239Z©2009-12-21T10:06:48.739685Zª2009-12-21T10:07:15.801383Z«2009-12-21T10:07:43.343156Z¬2009-12-21T10:08:10.398524Z­2009-12-21T10:08:37.940172Z®2009-12-21T10:09:05.004335Z¯2009-12-21T10:09:32.540337Z°2009-12-21T10:09:59.602282Z±2009-12-21T10:10:27.141929Z²2009-12-21T10:10:54.197593Z³2009-12-21T10:11:21.740781Z´2009-12-21T10:11:48.798145Zµ2009-12-21T10:12:16.340832Z¶2009-12-21T10:12:43.397706Z·2009-12-21T10:13:10.939989Z¸2009-12-21T10:13:37.996863Z¹2009-12-21T10:14:05.536495Zº2009-12-21T10:14:32.599060Z»2009-12-21T10:15:00.138806Z¼2009-12-21T10:15:27.204841Z½2009-12-21T10:15:54.744892Z¾2009-12-21T10:16:21.797127Z¿2009-12-21T10:16:49.345042ZÀ2009-12-21T10:17:16.398099ZÁ2009-12-21T10:17:43.938210ZÂ2009-12-21T10:18:10.995813ZÃ2009-12-21T10:18:38.537261ZÄ2009-12-21T10:19:05.604293ZÅ2009-12-21T10:19:33.134416ZÆ2009-12-21T10:20:00.196113ZÇ2009-12-21T10:20:27.736970ZÈ2009-12-21T10:20:54.796062ZÉ2009-12-21T10:21:22.339510ZÊ2009-12-21T10:21:49.395668ZË2009-12-21T10:22:16.938884ZÌ2009-12-21T10:22:43.995802ZÍ2009-12-21T10:23:11.535590ZÎ2009-12-21T10:23:38.592510ZÏ2009-12-21T10:24:06.137601ZÐ2009-12-21T10:24:33.195219ZÑ2009-12-21T10:25:00.739349ZÒ2009-12-21T10:25:27.795648ZÓ2009-12-21T10:25:55.335234ZÔ2009-12-21T10:26:22.391517ZÕ2009-12-21T10:26:49.934189ZÖ2009-12-21T10:27:16.994491Z×2009-12-21T10:27:44.540963ZØ2009-12-21T10:28:11.598472ZÙ2009-12-21T10:28:39.133667ZÚ2009-12-21T10:29:06.193147ZÛ2009-12-21T10:29:33.734191ZÜ2009-12-21T10:30:00.792863ZÝ2009-12-21T10:30:28.329516ZÞ2009-12-21T10:30:55.393492Zß2009-12-21T10:31:22.938105Zà2009-12-21T10:31:49.992326Zá2009-12-21T10:32:17.533168Zâ2009-12-21T10:32:44.592057Zã2009-12-21T10:33:12.138995Zä2009-12-21T10:33:39.192409Zå2009-12-21T10:34:06.732398Zæ2009-12-21T10:34:33.798594Zç2009-12-21T10:35:01.332967Zè2009-12-21T10:35:28.390244Zé2009-12-21T10:35:55.936933Zê2009-12-21T10:36:22.991991Zë2009-12-21T10:59:09.126171Zì2009-12-21T10:59:36.183029Zí2009-12-21T11:00:03.725857Zî2009-12-21T11:00:30.783351Zï2009-12-21T11:00:58.324424Zð2009-12-21T11:01:25.384105Zñ2009-12-21T11:01:52.923939Zò2009-12-21T11:02:19.982425Zó2009-12-21T11:02:47.528883Zô2009-12-21T11:03:14.583351Zõ2009-12-21T11:03:42.123588Zö2009-12-21T11:04:09.184510Z÷2009-12-21T11:04:36.724731Zø2009-12-21T11:05:03.782194Zù2009-12-21T11:05:31.324463Zú2009-12-21T11:05:58.386982Zû2009-12-21T11:06:25.923299Zü2009-12-21T11:06:52.983269Zý2009-12-21T11:07:20.523103Zþ2009-12-21T11:07:47.582210Zÿ2009-12-21T11:08:15.123222Z2009-12-21T11:08:42.182158Z2009-12-21T11:09:09.719153Z2009-12-21T11:09:36.782883Z2009-12-21T11:10:04.322328Z2009-12-21T11:10:31.385235Z2009-12-21T11:10:58.921207Z2009-12-21T11:11:25.986595Z2009-12-21T11:11:53.522768Z2009-12-21T11:12:20.581069Z	2009-12-21T11:12:48.121321Z
2009-12-21T11:13:15.188680Z2009-12-21T11:13:42.721786Z2009-12-21T11:14:09.781323Z
2009-12-21T11:14:37.320961Z2009-12-21T11:15:04.380480Z2009-12-21T11:15:31.920660Z2009-12-21T11:15:58.986634Z2009-12-21T11:16:26.521554Z2009-12-21T11:16:53.578889Z2009-12-21T11:17:21.121546Z2009-12-21T11:17:48.178651Z2009-12-21T11:18:15.719896Z2009-12-21T11:18:42.779592Z2009-12-21T11:19:10.318835Z2009-12-21T11:19:37.377741Z2009-12-21T11:20:04.919807Z2009-12-21T11:20:31.977286Z2009-12-21T11:20:59.518516Z2009-12-21T11:21:26.583672Z2009-12-21T11:21:54.123505Z2009-12-21T11:22:21.181805Z2009-12-21T11:22:48.721783Z 2009-12-21T11:23:15.777937Z!2009-12-21T11:23:43.324239Z"2009-12-21T11:24:10.377205Z#2009-12-21T11:24:37.917548Z$2009-12-21T11:25:04.977245Z%2009-12-21T11:25:32.518820Z&2009-12-21T11:25:59.578232Z'2009-12-21T11:26:27.122920Z(2009-12-21T11:26:54.186045Z)2009-12-21T11:27:21.718231Z*2009-12-21T11:27:48.776516Z+2009-12-21T11:28:16.317015Z,2009-12-21T11:28:43.379923Z-2009-12-21T11:29:10.916171Z.2009-12-21T11:29:37.975914Z/2009-12-21T11:30:05.522821Z02009-12-21T11:30:32.579477Z12009-12-21T11:31:00.112717Z22009-12-21T11:31:27.175392Z32009-12-21T11:42:49.917856Z42009-12-21T11:43:16.970480Z52009-12-21T11:43:44.514407Z62009-12-21T11:44:11.577314Z72009-12-21T11:44:39.119009Z82009-12-21T11:45:06.172066Z92009-12-21T11:45:33.713527Z:2009-12-21T11:46:00.771641Z;2009-12-21T11:46:28.316763Z<2009-12-21T11:46:55.371605Z=2009-12-21T11:47:22.909513Z>2009-12-21T11:47:49.977013Z?2009-12-21T11:48:17.512038Z@2009-12-21T11:48:44.570338ZA2009-12-21T11:49:12.112434ZB2009-12-21T11:49:39.170750ZC2009-12-21T11:50:06.707947ZD2009-12-21T11:50:33.771475ZE2009-12-21T11:51:01.316612ZF2009-12-21T11:51:28.369654ZG2009-12-21T11:51:55.914373ZH2009-12-21T11:52:22.970238ZI2009-12-21T11:52:50.512201ZJ2009-12-21T11:53:17.570095ZK2009-12-21T11:53:45.111202ZL2009-12-21T11:54:12.174012ZM2009-12-21T11:54:39.711706ZN2009-12-21T11:55:06.775528ZO2009-12-21T11:55:34.313158ZP2009-12-21T11:56:01.369704ZQ2009-12-21T11:56:28.914145ZR2009-12-21T11:56:55.972430ZS2009-12-21T11:57:23.510764ZT2009-12-21T11:57:50.568935ZU2009-12-21T11:58:18.110742ZV2009-12-21T11:58:45.172760ZW2009-12-21T11:59:12.710235ZX2009-12-21T11:59:39.764844ZY2009-12-21T12:00:07.309301ZZ2009-12-21T12:00:34.371836Z[2009-12-21T12:01:01.913917Z\2009-12-21T12:01:28.968371Z]2009-12-21T12:01:56.511446Z^2009-12-21T12:02:23.564750Z_2009-12-21T12:02:51.108959Z`2009-12-21T12:03:18.167072Za2009-12-21T12:03:45.713979Zb2009-12-21T12:04:12.770636Zc2009-12-21T12:04:40.308468Zd2009-12-21T12:05:07.363340Ze2009-12-21T12:05:34.908230Zf2009-12-21T12:06:01.966157Zg2009-12-21T12:06:29.512458Zh2009-12-21T12:06:56.566074Zi2009-12-21T12:07:24.109956Zj2009-12-21T12:07:51.162223Zk2009-12-21T12:08:18.707036Zl2009-12-21T12:08:45.765397Zm2009-12-21T12:09:13.310287Zn2009-12-21T12:09:40.366293Zo2009-12-21T12:10:07.905148Zp2009-12-21T12:10:34.970693Zq2009-12-21T12:11:02.513783Zr2009-12-21T12:11:29.564219Zs2009-12-21T12:11:57.107248Zt2009-12-21T12:12:24.164989Zu2009-12-21T12:12:51.704419Zv2009-12-21T12:13:18.771189Zw2009-12-21T12:13:46.305360Zx2009-12-21T12:14:13.371928Zy2009-12-21T12:14:40.908334Zz2009-12-21T12:15:07.965005Z{2009-12-20T16:28:05.648802Z|2009-12-20T16:28:32.703240Z}2009-12-20T16:51:50.142807Z~2009-12-20T16:52:17.199074Z2009-12-20T16:52:44.742119Z€2009-12-20T16:53:11.797380Z2009-12-20T16:53:39.344472Z‚2009-12-20T16:54:06.396722Zƒ2009-12-20T16:54:33.941612Z„2009-12-20T16:55:00.996503Z…2009-12-20T16:55:28.545502Z†2009-12-20T16:55:55.595565Z‡2009-12-20T16:56:23.141821Zˆ2009-12-20T16:56:50.195468Z‰2009-12-20T16:57:17.742763ZŠ2009-12-20T16:57:44.795820Z‹2009-12-20T16:58:12.346078ZŒ2009-12-20T16:58:39
.400748Z2009-12-20T16:59:06.939387ZŽ2009-12-20T16:59:33.996260Z2009-12-20T17:00:01.539321Z2009-12-20T17:00:28.600847Z‘2009-12-20T17:00:56.138230Z’2009-12-20T17:01:23.194327Z“2009-12-20T17:01:50.738814Z”2009-12-20T17:02:17.799891Z•2009-12-20T17:02:45.337708Z–2009-12-20T17:03:12.395993Z—2009-12-20T17:03:39.941054Z˜2009-12-20T17:04:06.994514Z™2009-12-20T17:04:34.538381Zš2009-12-20T17:05:01.590026Z›2009-12-20T17:05:29.136515Zœ2009-12-20T17:05:56.200617Z2009-12-20T17:06:23.736462Zž2009-12-20T17:06:50.799169ZŸ2009-12-20T17:07:18.335977Z 2009-12-20T17:07:45.399551Z¡2009-12-20T17:08:12.936576Z¢2009-12-20T17:08:39.997855Z£2009-12-20T17:09:07.537084Z¤2009-12-20T17:09:34.593352Z¥2009-12-20T17:10:02.135388Z¦2009-12-20T17:10:29.191888Z§2009-12-20T17:10:56.735196Z¨2009-12-20T17:11:23.790814Z©2009-12-20T17:11:51.336354Zª2009-12-20T17:12:18.394034Z«2009-12-20T17:12:45.936645Z¬2009-12-20T17:13:12.994603Z­2009-12-20T17:13:40.538005Z®2009-12-20T17:14:07.591511Z¯2009-12-20T17:14:35.134788Z°2009-12-20T17:15:02.195306Z±2009-12-20T17:15:29.737575Z²2009-12-20T17:15:56.793455Z³2009-12-20T17:16:24.331473Z´2009-12-20T17:16:51.390363Zµ2009-12-20T17:17:18.933407Z¶2009-12-20T17:17:45.994748Z·2009-12-20T17:18:13.536784Z¸2009-12-20T17:18:40.586646Z¹2009-12-20T17:19:08.134328Zº2009-12-20T17:19:35.190208Z»2009-12-20T17:20:02.740253Z¼2009-12-20T17:20:29.789567Z½2009-12-20T17:20:57.337078Z¾2009-12-20T17:21:24.392353Z¿2009-12-20T17:21:51.939431ZÀ2009-12-20T17:22:18.988874ZÁ2009-12-20T17:22:46.530538ZÂ2009-12-20T17:23:13.588636ZÃ2009-12-20T17:23:41.133697ZÄ2009-12-20T17:24:08.194618ZÅ2009-12-20T17:35:30.932036ZÆ2009-12-20T17:35:57.983568ZÇ2009-12-20T17:36:25.538638ZÈ2009-12-20T17:36:52.583485ZÉ2009-12-20T17:37:20.131918ZÊ2009-12-20T17:37:47.183666ZË2009-12-20T17:38:14.730790ZÌ2009-12-20T17:38:41.784033ZÍ2009-12-20T17:39:09.336901ZÎ2009-12-20T17:39:36.392854ZÏ2009-12-20T17:40:03.930195ZÐ2009-12-20T17:40:30.984550ZÑ2009-12-20T17:40:58.530620ZÒ2009-12-20T17:41:25.581892ZÓ2009-12-20T17:41:53.131189ZÔ2009-12-20T17:42:20.181034ZÕ2009-12-20T17:42:47.728949ZÖ2009-12-20T17:43:14.781401Z×2009-12-20T17:43:42.328292ZØ2009-12-20T17:44:09.383210ZÙ2009-12-20T17:44:36.930573ZÚ2009-12-20T17:45:03.982881ZÛ2009-12-20T17:45:31.530041ZÜ2009-12-20T17:45:58.580843ZÝ2009-12-20T17:46:26.127951ZÞ2009-12-20T17:46:53.186266Zß2009-12-20T17:47:20.728257Zà2009-12-20T17:47:47.786774Zá2009-12-20T17:48:15.325632Zâ2009-12-20T17:48:42.381883Zã2009-12-20T17:49:09.927439Zä2009-12-20T17:49:36.984699Zå2009-12-20T17:50:04.527889Zæ2009-12-20T17:50:31.585424Zç2009-12-20T17:50:59.126611Zè2009-12-20T17:51:26.187561Zé2009-12-20T17:51:53.733862Zê2009-12-20T17:52:20.785523Zë2009-12-20T17:52:48.092245Zì2009-12-20T17:53:15.381221Zí2009-12-20T17:53:42.922781Zî2009-12-20T17:54:09.980564Zï2009-12-20T17:54:37.526592Zð2009-12-20T17:55:04.585538Zñ2009-12-20T17:55:32.127518Zò2009-12-20T17:55:59.185890Zó2009-12-20T17:56:26.725217Zô2009-12-20T17:56:53.780783Zõ2009-12-20T17:57:21.325352Zö2009-12-20T17:57:48.388968Z÷2009-12-20T17:58:15.928713Zø2009-12-20T17:58:42.980245Zù2009-12-20T17:59:10.525188Zú2009-12-20T17:59:37.583591Zû2009-12-20T18:00:05.124619Zü2009-12-20T18:00:32.176668Zý2009-12-20T18:00:59.732948Zþ2009-12-20T18:01:26.779207Zÿ2009-12-20T18:01:54.328228Z2009-12-20T18:02:21.377885Z2009-12-20T18:02:48.928219Z2009-12-20T18:03:15.978050Z2009-12-20T18:03:43.522620Z2009-12-20T18:04:10.582591Z2009-12-20T18:04:38.126152Z2009-12-20T18:05:05.181298Z2009-12-20T18:05:32.722140Z2009-12-20T18:05:59.775649Z	2009-12-20T18:06:27.326308Z
2009-12-20T18:06:54.379148Z2009-12-20T18:07:21.921526Z2009-12-20T18:07:48.976988Z
2009-12-20T18:30:35.020912Z2009-12-20T18:31:02.068769Z2009-12-20T18:31:29.609911Z2009-12-20T18:31:56.678118Z2009-12-20T18:32:24.215706Z2009-12-20T18:32:51.269411Z2009-12-20T18:33:18.813407Z2009-12-20T18:33:45.874184Z2009-12-20T18:34:13.414318Z2009-12-20T18:34:40.470286Z2009-12-20T18:35:08.015677Z2009-12-20T18:35:35.071413Z2009-12-20T18:36:02.613361Z2009-12-20T18:36:29.673751Z2009-12-20T18:36:57.214117Z2009-12-20T18:37:24.265370Z2009-12-20T18:37:51.815027Z2009-12-20T18:38:18.866670Z2009-12-20T18:38:46.411201Z 2009-12-20T18:39:13.467455Z!2009-12-20T18:39:41.012413Z"2009-12-20T18:40:08.068895Z#2009-12-20T18:40:35.612316Z$2009-12-20T18:41:02.672766Z%2009-12-20T18:41:30.211210Z&2009-12-20T18:41:57.268806Z'2009-12-20T18:42:24.811964Z(2009-12-20T18:42:51.868276Z)2009-12-20T18:43:19.412285Z*2009-12-20T18:43:46.473094Z+2009-12-20T18:44:14.011801Z,2009-12-20T18:44:41.064325Z-2009-12-20T18:45:08.611128Z.2009-12-20T18:45:35.663250Z/2009-12-20T18:46:03.211077Z02009-12-20T18:46:30.264176Z12009-12-20T18:46:57.813616Z22009-12-20T18:47:24.864124Z32009-12-20T18:47:52.416186Z42009-12-20T18:48:19.468711Z52009-12-20T18:48:47.008471Z62009-12-20T18:49:14.064657Z72009-12-20T18:49:41.608203Z82009-12-20T18:50:08.666964Z92009-12-20T18:50:36.208121Z:2009-12-20T18:51:03.267703Z;2009-12-20T18:51:30.808412Z<2009-12-20T18:51:57.869280Z=2009-12-20T18:52:25.407614Z>2009-12-20T18:52:52.460558Z?2009-12-20T18:53:20.010339Z@2009-12-20T18:53:47.061469ZA2009-12-20T18:54:14.609698ZB2009-12-20T18:54:41.660796ZC2009-12-20T18:55:09.208422ZD2009-12-20T18:55:36.262016ZE2009-12-20T18:56:03.809021ZF2009-12-20T18:56:30.862973ZG2009-12-20T18:56:58.409978ZH2009-12-20T18:57:25.461277ZI2009-12-20T18:57:53.005071ZJ2009-12-20T18:58:20.067291ZK2009-12-20T18:58:47.610464ZL2009-12-20T18:59:14.662011ZM2009-12-20T18:59:42.205184ZN2009-12-20T19:00:09.262130ZO2009-12-20T19:00:36.810748ZP2009-12-20T19:01:03.861676ZQ2009-12-20T19:01:31.406462ZR2009-12-20T19:01:58.463020ZS2009-12-20T19:02:26.006038ZT2009-12-20T19:02:53.065187ZU2009-12-20T19:14:15.808748ZGCOL2009-12-20T19:14:42.852990Z2009-12-20T19:15:10.404634Z2009-12-20T19:15:37.456569Z2009-12-20T19:16:05.003029Z2009-12-20T19:16:32.060767Z2009-12-20T19:16:59.606143Z2009-12-20T19:17:26.659304Z2009-12-20T19:17:54.202942Z	2009-12-20T19:18:21.255607Z
2009-12-20T19:18:48.806862Z2009-12-20T19:19:15.856161Z2009-12-20T19:19:43.406593Z
2009-12-20T19:20:10.458327Z2009-12-20T19:20:38.007535Z2009-12-20T19:21:05.055034Z2009-12-20T19:21:32.600038Z2009-12-20T19:21:59.654766Z2009-12-20T19:22:27.202794Z2009-12-20T19:22:54.256528Z2009-12-20T19:23:21.799298Z2009-12-20T19:23:48.853654Z2009-12-20T19:24:16.401511Z2009-12-20T19:24:43.454393Z2009-12-20T19:25:10.999180Z2009-12-20T19:25:38.055163Z2009-12-20T19:26:05.607457Z2009-12-20T19:26:32.653469Z2009-12-20T19:27:00.198860Z2009-12-20T19:27:27.257869Z2009-12-20T19:27:54.798389Z2009-12-20T19:28:21.853567Z 2009-12-20T19:28:49.397950Z!2009-12-20T19:29:16.454104Z"2009-12-20T19:29:43.998085Z#2009-12-20T19:30:11.054471Z$2009-12-20T19:30:38.597196Z%2009-12-20T19:31:05.655786Z&2009-12-20T19:31:33.197361Z'2009-12-20T19:32:00.257736Z(2009-12-20T19:32:27.797682Z)2009-12-20T19:32:54.855263Z*2009-12-20T19:33:22.399042Z+2009-12-20T19:33:49.457168Z,2009-12-20T19:34:16.998542Z-2009-12-20T19:34:44.058729Z.2009-12-20T19:35:11.597250Z/2009-12-20T19:35:38.657670Z02009-12-20T19:36:06.200828Z12009-12-20T19:36:33.255943Z22009-12-20T19:37:00.795503Z32009-12-20T19:37:27.856079Z42009-12-20T19:37:55.395420Z52009-12-20T19:38:22.452785Z62009-12-20T19:38:50.001589Z72009-12-20T19:39:17.051493Z82009-12-20T19:39:44.595039Z92009-12-20T19:40:11.646385Z:2009-12-20T19:40:39.193173Z;2009-12-20T19:41:06.256153Z<2009-12-20T19:41:33.798117Z=2009-12-20T19:42:00.854845Z>2009-12-20T19:42:28.393396Z?2009-12-20T19:42:55.456609Z@2009-12-20T19:43:22.993531ZA2009-12-20T19:43:50.056093ZB2009-12-20T19:44:17.596861ZC2009-12-20T19:44:44.652411ZD2009-12-20T19:45:12.193569ZE2009-12-20T19:45:39.247893ZF2009-12-20T19:46:06.792881ZG2009-12-20T19:46:33.848480ZH2009-12-20T20:09:19.897442ZI2009-12-20T20:09:46.946554ZJ2009-12-20T20:10:14.488301ZK2009-12-20T20:10:41.539741ZL2009-12-20T20:11:09.088839ZM2009-12-20T20:11:36.141349ZN2009-12-20T20:12:03.691347ZO2009-12-20T20:12:30.744493ZP2009-12-20T20:12:58.292319ZQ2009-12-20T20:13:25.340998ZR2009-12-20T20:13:52.884978ZS2009-12-20T20:14:19.940915ZT2009-12-20T20:14:47.484477ZU2009-12-20T20:15:14.544588ZV2009-12-20T20:15:42.087248ZW2009-12-20T20:16:09.139340ZX2009-12-20T20:16:36.687151ZY2009-12-20T20:17:03.740281ZZ2009-12-20T20:17:31.285471Z[2009-12-20T20:17:58.343410Z\2009-12-20T20:18:25.886614Z]2009-12-20T20:18:52.941574Z^2009-12-20T20:19:20.484796Z_2009-12-20T20:19:47.539460Z`2009-12-20T20:20:15.082851Za2009-12-20T20:20:42.141395Zb2009-12-20T20:21:09.691162Zc2009-12-20T20:21:36.741887Zd2009-12-20T20:22:04.282842Ze2009-12-20T20:22:31.344224Zf2009-12-20T20:22:58.882775Zg2009-12-20T20:23:25.944763Zh2009-12-20T20:23:53.484911Zi2009-12-20T20:24:20.533403Zj2009-12-20T20:24:48.081602Zk2009-12-20T20:25:15.133553Zl2009-12-20T20:25:42.682962Zm2009-12-20T20:26:09.734061Zn2009-12-20T20:26:37.285533Zo2009-12-20T20:27:04.334614Zp2009-12-20T20:27:31.883247Zq2009-12-20T20:27:58.935324Zr2009-12-20T20:28:26.481769Zs2009-12-20T20:28:53.534899Zt2009-12-20T20:29:21.088341Zu2009-12-20T20:29:48.135048Zv2009-12-20T20:30:15.680657Zw2009-12-20T20:30:42.734795Zx2009-12-20T20:31:10.279768Zy2009-12-20T20:31:37.336201Zz2009-12-20T20:32:04.880988Z{2009-12-20T20:32:31.935096Z|2009-12-20T20:32:59.479650Z}2009-12-20T20:33:26.535402Z~2009-12-20T20:33:54.080823Z2009-12-20T20:34:21.134604Z€2009-12-20T20:34:48.685440Z2009-12-20T20:35:15.733731Z‚2009-12-20T20:35:43.279107Zƒ2009-12-20T20:36:10.333432Z„2009-12-20T20:36:37.877084Z…2009-12-20T20:37:04.933102Z†2009-12-20T20:37:32.485225Z‡2009-12-20T20:37:59.533188Zˆ2009-12-20T20:38:27.076966Z‰2009-12-20T20:38:54.132702ZŠ2009-12-20T20:39:21.677333Z‹2009-12-20T20:39:48.733317ZŒ2009-12-20T20:40:16
.280538Z2009-12-20T20:40:43.332443ZŽ2009-12-20T20:41:10.875848Z2009-12-20T20:41:37.932406Z2009-12-20T20:53:00.680759Z‘2009-12-20T20:53:27.728863Z’2009-12-20T20:53:55.275092Z“2009-12-20T20:54:22.328222Z”2009-12-20T20:54:49.873815Z•2009-12-20T20:55:16.930483Z–2009-12-20T20:55:44.471591Z—2009-12-20T20:56:11.528522Z˜2009-12-20T20:56:39.078753Z™2009-12-20T20:57:06.127447Zš2009-12-20T20:57:33.671815Z›2009-12-20T20:58:00.727799Zœ2009-12-20T20:58:28.270165Z2009-12-20T20:58:55.326940Zž2009-12-20T20:59:22.878210ZŸ2009-12-20T20:59:49.925306Z 2009-12-20T21:00:17.470279Z¡2009-12-20T21:00:44.525612Z¢2009-12-20T21:01:12.069912Z£2009-12-20T21:01:39.126801Z¤2009-12-20T21:02:06.670780Z¥2009-12-20T21:03:01.272141Z¦2009-12-20T21:03:28.326480Z§2009-12-20T21:03:55.874524Z¨2009-12-20T21:04:22.927049Z©2009-12-20T21:04:50.473665Zª2009-12-20T21:05:17.526376Z«2009-12-20T21:05:45.074870Z¬2009-12-20T21:06:12.126325Z­2009-12-20T21:06:39.667513Z®2009-12-20T21:07:06.728073Z¯2009-12-20T21:07:34.267161Z°2009-12-20T21:08:01.325803Z±2009-12-20T21:08:28.867596Z²2009-12-20T21:08:55.925379Z³2009-12-20T21:09:23.467384Z´2009-12-20T21:09:50.533767Zµ2009-12-20T21:10:18.066500Z¶2009-12-20T21:10:45.126486Z·2009-12-20T21:11:12.380142Z¸2009-12-20T21:11:39.731445Z¹2009-12-20T21:12:07.266469Zº2009-12-20T21:12:34.326119Z»2009-12-20T21:13:01.867380Z¼2009-12-20T21:13:28.925106Z½2009-12-20T21:13:56.466046Z¾2009-12-20T21:14:23.525784Z¿2009-12-20T21:14:51.066926ZÀ2009-12-20T21:15:18.124367ZÁ2009-12-20T21:15:45.664588ZÂ2009-12-20T21:16:12.724083ZÃ2009-12-20T21:16:40.265731ZÄ2009-12-20T21:17:07.324419ZÅ2009-12-20T21:17:34.867981ZÆ2009-12-20T21:18:01.923344ZÇ2009-12-20T21:18:29.468534ZÈ2009-12-20T21:18:56.522456ZÉ2009-12-20T21:19:24.066047ZÊ2009-12-20T21:19:51.122575ZË2009-12-20T21:20:18.663043ZÌ2009-12-20T21:20:45.723330ZÍ2009-12-20T21:21:13.262946ZÎ2009-12-20T21:21:40.321433ZÏ2009-12-20T21:22:07.864404ZÐ2009-12-20T21:22:34.920962ZÑ2009-12-20T21:23:02.464022ZÒ2009-12-20T21:23:29.528326ZÓ2009-12-20T21:23:57.061319ZÔ2009-12-20T21:24:24.126646ZÕ2009-12-20T21:24:51.662028ZÖ2009-12-20T21:25:18.719879Z×2009-12-20T21:48:04.755901ZØ2009-12-20T21:48:31.812991ZÙ2009-12-20T21:48:59.359593ZÚ2009-12-20T21:49:26.413701ZÛ2009-12-20T21:49:53.955260ZÜ2009-12-20T21:50:21.017450ZÝ2009-12-20T21:50:48.553855ZÞ2009-12-20T21:51:15.614141Zß2009-12-20T21:51:43.153804Zà2009-12-20T21:52:10.211997Zá2009-12-20T21:52:37.751069Zâ2009-12-20T21:53:04.812566Zã2009-12-20T21:53:32.352213Zä2009-12-20T21:53:59.411103Zå2009-12-20T21:54:26.958180Zæ2009-12-20T21:54:54.010803Zç2009-12-20T21:55:21.559106Zè2009-12-20T21:55:48.611356Zé2009-12-20T21:56:16.152167Zê2009-12-20T21:56:43.208234Zë2009-12-20T21:57:10.752086Zì2009-12-20T21:57:37.815831Zí2009-12-20T21:58:05.352034Zî2009-12-20T21:58:32.410909Zï2009-12-20T21:58:59.950338Zð2009-12-20T21:59:27.018317Zñ2009-12-20T21:59:54.550846Zò2009-12-20T22:00:21.612574Zó2009-12-20T22:00:49.151817Zô2009-12-20T22:01:16.209669Zõ2009-12-20T22:01:43.751146Zö2009-12-20T22:02:10.808738Z÷2009-12-20T22:02:38.351295Zø2009-12-20T22:03:05.405517Zù2009-12-20T22:03:32.950646Zú2009-12-20T22:04:00.006382Zû2009-12-20T22:04:27.553491Zü2009-12-20T22:04:54.608146Zý2009-12-20T22:05:22.154432Zþ2009-12-20T22:05:49.207085Zÿ2009-12-20T22:06:16.749510Z2009-12-20T22:06:43.805173Z2009-12-20T22:07:11.349272Z2009-12-20T22:07:38.404718Z2009-12-20T22:08:05.949764Z2009-12-20T22:08:33.006249Z2009-12-20T22:09:00.545881Z2009-12-20T22:09:27.607780Z2009-12-20T22:09:55.146232Z2009-12-20T22:10:22.207402Z	2009-12-20T22:10:49.748183Z
2009-12-20T22:11:16.805878Z2009-12-20T22:11:44.347526Z2009-12-20T22:12:11.406617Z
2009-12-20T22:12:38.945427Z2009-12-20T22:13:06.006535Z2009-12-20T22:13:33.552619Z2009-12-20T22:14:00.604668Z2009-12-20T22:14:28.146922Z2009-12-20T22:14:55.206634Z2009-12-20T22:15:22.742418Z2009-12-20T22:15:49.806582Z2009-12-20T22:16:17.345251Z2009-12-20T22:16:44.407382Z2009-12-20T22:17:11.945930Z2009-12-20T22:17:39.004649Z2009-12-20T22:18:06.544684Z2009-12-20T22:18:33.604411Z2009-12-20T22:19:01.145236Z2009-12-20T22:19:28.208376Z2009-12-20T22:19:55.745448Z2009-12-20T22:20:22.805703Z2009-12-20T22:31:45.542140Z 2009-12-20T22:32:12.601045Z!2009-12-20T22:32:40.144062Z"2009-12-20T22:33:07.200373Z#2009-12-20T22:33:34.742398Z$2009-12-20T22:34:01.801143Z%2009-12-20T22:34:29.341612Z&2009-12-20T22:34:56.399463Z'2009-12-20T22:35:23.942233Z(2009-12-20T22:35:51.000435Z)2009-12-20T22:36:18.541695Z*2009-12-20T22:36:45.600011Z+2009-12-20T22:37:13.142442Z,2009-12-20T22:37:40.199867Z-2009-12-20T22:38:07.741721Z.2009-12-20T22:38:34.799039Z/2009-12-20T22:39:02.340801Z02009-12-20T22:39:29.399175Z12009-12-20T22:41:18.598652Z22009-12-20T22:41:46.145761Z32009-12-20T22:42:13.199190Z42009-12-20T22:42:40.741242Z52009-12-20T22:43:07.804382Z62009-12-20T22:43:35.338351Z72009-12-20T22:44:02.395861Z82009-12-20T22:44:29.942477Z92009-12-20T22:44:57.001410Z:2009-12-20T22:45:24.538420Z;2009-12-20T22:45:51.595929Z<2009-12-20T22:46:19.142112Z=2009-12-20T22:46:46.196265Z>2009-12-20T22:47:13.737639Z?2009-12-20T22:47:40.796803Z@2009-12-20T22:48:08.340985ZA2009-12-20T22:48:35.396333ZB2009-12-20T22:49:02.944749ZC2009-12-20T22:49:29.995444ZD2009-12-20T22:49:57.536197ZE2009-12-20T22:50:24.595407ZF2009-12-20T22:50:52.136953ZG2009-12-20T22:51:19.195497ZH2009-12-20T22:51:46.736513ZI2009-12-20T22:52:13.797043ZJ2009-12-20T22:52:41.337005ZK2009-12-20T22:53:08.396588ZL2009-12-20T22:53:35.936220ZM2009-12-20T22:54:02.997747ZN2009-12-20T22:54:30.539926ZO2009-12-20T22:54:57.597074ZP2009-12-20T22:55:25.135564ZQ2009-12-20T22:55:52.191300ZR2009-12-20T22:56:19.736148ZS2009-12-20T22:56:46.798957ZT2009-12-20T22:57:14.335879ZU2009-12-20T22:57:41.400070ZV2009-12-20T22:58:08.935782ZW2009-12-20T22:58:35.990944ZX2009-12-20T22:59:03.533570ZY2009-12-20T22:59:30.592273ZZ2009-12-20T22:59:58.133921Z[2009-12-20T23:00:25.192004Z\2009-12-20T23:00:52.734041Z]2009-12-20T23:01:19.790667Z^2009-12-20T23:01:47.333152Z_2009-12-20T23:02:14.390429Z`2009-12-20T23:02:41.933303Za2009-12-20T23:03:08.992983Zb2009-12-20T23:03:36.532496Zc2009-12-20T13:11:02.559095Zd2009-12-20T13:33:48.796504Ze2009-12-20T13:34:15.850987Zf2009-12-20T13:34:43.395304Zg2009-12-20T13:35:10.451412Zh2009-12-20T13:35:37.995609Zi2009-12-20T13:36:05.056432Zj2009-12-20T13:36:32.592704Zk2009-12-20T13:36:59.651049Zl2009-12-20T13:37:27.200159Zm2009-12-20T13:37:54.250420Zn2009-12-20T13:38:21.793655Zo2009-12-20T13:38:48.849938Zp2009-12-20T13:39:16.392982Zq2009-12-20T13:39:43.450274Zr2009-12-20T13:40:11.000004Zs2009-12-20T13:40:38.053910Zt2009-12-20T13:41:05.594074Zu2009-12-20T13:41:32.657415Zv2009-12-20T13:42:00.192641Zw2009-12-20T13:42:27.252725Zx2009-12-20T13:42:54.793582Zy2009-12-20T13:43:21.850456Zz2009-12-20T13:43:49.398945Z{2009-12-20T13:44:16.456656Z|2009-12-20T13:44:43.991417Z}2009-12-20T13:45:11.048522Z~2009-12-20T13:45:38.597586Z2009-12-20T13:46:05.647881Z€2009-12-20T13:46:33.195332Z2009-12-20T13:47:00.248620Z‚2009-12-20T13:47:27.791292Zƒ2009-12-20T13:47:54.846366Z„2009-12-20T13:48:22.391598Z…2009-12-20T13:48:49.451139Z†2009-12-20T13:49:16.996216Z‡2009-12-20T13:49:44.047690Zˆ2009-12-20T13:50:11.590517Z‰2009-12-20T13:50:38.656868ZŠ2009-12-20T13:51:06.189892Z‹2009-12-20T13:51:33.251201ZŒ2009-12-20T13:52:00
.791034Z2009-12-20T13:52:27.846496ZŽ2009-12-20T13:52:55.389602Z2009-12-20T13:53:22.446878Z2009-12-20T13:53:49.990155Z‘2009-12-20T13:54:17.046020Z’2009-12-20T13:54:44.588289Z“2009-12-20T13:55:11.646668Z”2009-12-20T13:55:39.191696Z•2009-12-20T13:56:06.245399Z–2009-12-20T13:56:33.791971Z—2009-12-20T13:57:00.843891Z˜2009-12-20T13:57:28.388088Z™2009-12-20T13:57:55.442168Zš2009-12-20T13:58:22.989291Z›2009-12-20T13:58:50.045358Zœ2009-12-20T13:59:17.591242Z2009-12-20T13:59:44.644185Zž2009-12-20T14:00:12.187359ZŸ2009-12-20T14:00:39.246770Z 2009-12-20T14:01:06.788608Z¡2009-12-20T14:01:33.843910Z¢2009-12-20T14:02:01.387115Z£2009-12-20T14:02:28.442851Z¤2009-12-20T14:02:55.990521Z¥2009-12-20T14:03:23.042659Z¦2009-12-20T14:03:50.586281Z§2009-12-20T14:04:17.643150Z¨2009-12-20T14:04:45.187237Z©2009-12-20T14:05:12.246402Zª2009-12-20T14:05:39.786612Z«2009-12-20T14:06:06.842476Z¬2009-12-20T14:17:29.583700Z­2009-12-20T14:17:56.640992Z®2009-12-20T14:18:24.188317Z¯2009-12-20T14:18:51.241866Z°2009-12-20T14:19:18.784651Z±2009-12-20T14:19:45.838933Z²2009-12-20T14:20:13.385250Z³2009-12-20T14:20:40.439186Z´2009-12-20T14:21:07.987634Zµ2009-12-20T14:21:35.044289Z¶2009-12-20T14:22:02.584976Z·2009-12-20T14:22:29.639909Z¸2009-12-20T14:22:57.183341Z¹2009-12-20T14:23:24.238659Zº2009-12-20T14:23:51.781288Z»2009-12-20T14:24:18.838466Z¼2009-12-20T14:24:46.384277Z½2009-12-20T14:25:13.438529Z¾2009-12-20T14:25:40.983046Z¿2009-12-20T14:26:08.041129ZÀ2009-12-20T14:26:35.583366ZÁ2009-12-20T14:27:02.637664ZÂ2009-12-20T14:27:30.183159ZÃ2009-12-20T14:27:57.244469ZÄ2009-12-20T14:28:24.781913ZÅ2009-12-20T14:28:51.836367ZÆ2009-12-20T14:29:19.383505ZÇ2009-12-20T14:29:46.436201ZÈ2009-12-20T14:30:13.982677ZÉ2009-12-20T14:30:41.037721ZÊ2009-12-20T14:31:08.581773ZË2009-12-20T14:31:35.638909ZÌ2009-12-20T14:32:03.180356ZÍ2009-12-20T14:32:30.237461ZÎ2009-12-20T14:32:57.785765ZÏ2009-12-20T14:33:24.839007ZÐ2009-12-20T14:33:52.385092ZÑ2009-12-20T14:34:19.446280ZÒ2009-12-20T14:34:46.983054ZÓ2009-12-20T14:35:14.037337ZÔ2009-12-20T14:35:41.580149ZÕ2009-12-20T14:36:08.636216ZÖ2009-12-20T14:36:36.178640Z×2009-12-20T14:37:03.239360ZØ2009-12-20T14:37:30.783242ZÙ2009-12-20T14:37:57.837912ZÚ2009-12-20T14:38:25.380398ZÛ2009-12-20T14:38:52.439675ZÜ2009-12-20T14:39:19.978732ZÝ2009-12-20T14:39:47.040662ZÞ2009-12-20T14:40:14.581908Zß2009-12-20T14:40:41.641604Zà2009-12-20T14:41:09.178226Zá2009-12-20T14:41:36.231889Zâ2009-12-20T14:42:03.777151Zã2009-12-20T14:42:30.832643Zä2009-12-20T14:42:58.379333Zå2009-12-20T14:43:25.433150Zæ2009-12-20T14:43:52.980073Zç2009-12-20T14:44:20.034758Zè2009-12-20T14:44:47.575988Zé2009-12-20T14:45:14.633249Zê2009-12-20T14:45:42.179783Zë2009-12-20T14:46:09.233680Zì2009-12-20T14:46:36.775512Zí2009-12-20T14:47:03.834588Zî2009-12-20T14:47:31.375414Zï2009-12-20T14:47:58.433714Zð2009-12-20T14:48:25.978993Zñ2009-12-20T14:48:53.032266Zò2009-12-20T14:49:20.575110Zó2009-12-20T14:49:47.633657Zô2009-12-20T15:12:33.865141Zõ2009-12-20T15:13:00.926699Zö2009-12-20T15:13:28.470968Z÷2009-12-20T15:13:55.526012Zø2009-12-20T15:14:23.069490Zù2009-12-20T15:14:50.125557Zú2009-12-20T15:15:17.669624Zû2009-12-20T15:15:44.726296Zü2009-12-20T15:16:12.270037Zý2009-12-20T15:16:39.323996Zþ2009-12-20T15:17:06.870297Zÿ2009-12-20T15:17:33.926379Z2009-12-20T15:18:01.468043Z2009-12-20T15:18:28.524700Z2009-12-20T15:18:56.068147Z2009-12-20T15:19:23.121034Z2009-12-20T15:19:50.667289Z2009-12-20T15:20:17.722735Z2009-12-20T15:20:45.266012Z2009-12-20T15:21:12.322079Z2009-12-20T15:21:39.866147Z	2009-12-20T15:22:06.924230Z
2009-12-20T15:22:34.466498Z2009-12-20T15:23:01.523558Z2009-12-20T15:23:29.065625Z
2009-12-20T15:23:56.125322Z2009-12-20T15:24:23.666163Z2009-12-20T15:24:50.720818Z2009-12-20T15:25:18.267523Z2009-12-20T15:25:45.327825Z2009-12-20T15:26:12.865268Z2009-12-20T15:26:39.923397Z2009-12-20T15:27:07.464814Z2009-12-20T15:27:34.521314Z2009-12-20T15:28:02.063879Z2009-12-20T15:28:29.122069Z2009-12-20T15:28:56.666928Z2009-12-20T15:29:23.722824Z2009-12-20T15:29:51.264876Z2009-12-20T15:30:18.325870Z2009-12-20T15:30:45.865211Z2009-12-20T15:31:12.921511Z2009-12-20T15:31:40.462120Z2009-12-20T15:32:07.525678Z 2009-12-20T15:32:35.066939Z!2009-12-20T15:33:02.118181Z"2009-12-20T15:33:29.663646Z#2009-12-20T15:33:56.718921Z$2009-12-20T15:34:24.269271Z%2009-12-20T15:34:51.325710Z&2009-12-20T15:35:18.864550Z'2009-12-20T15:35:45.919856Z(2009-12-20T15:36:13.463506Z)2009-12-20T15:36:40.518378Z*2009-12-20T15:37:08.062244Z+2009-12-20T15:37:35.118714Z,2009-12-20T15:38:02.662999Z-2009-12-20T15:38:29.718895Z.2009-12-20T15:38:57.261985Z/2009-12-20T15:39:24.319246Z02009-12-20T15:39:51.861918Z12009-12-20T15:40:18.916060Z22009-12-20T15:40:46.462285Z32009-12-20T15:41:13.515544Z42009-12-20T15:41:41.061644Z52009-12-20T15:42:08.116329Z62009-12-20T15:42:35.660956Z72009-12-20T15:43:02.718496Z82009-12-20T15:43:30.265821Z92009-12-20T15:43:57.316413Z:2009-12-20T15:44:24.862729Z;2009-12-20T15:44:51.917106Z<2009-12-20T15:56:14.657169Z=2009-12-20T15:56:41.713468Z>2009-12-20T15:57:09.260545Z?2009-12-20T15:57:36.312869Z@2009-12-20T15:58:03.860406ZA2009-12-20T15:58:30.913612ZB2009-12-20T15:58:58.461915ZC2009-12-20T15:59:25.513141ZD2009-12-20T15:59:53.058636ZE2009-12-20T16:00:20.113029ZF2009-12-20T16:00:47.662401ZG2009-12-20T16:01:14.713613ZH2009-12-20T16:01:42.256099ZI2009-12-20T16:02:09.312816ZJ2009-12-20T16:02:36.855861ZK2009-12-20T16:03:03.913153ZL2009-12-20T16:03:31.455390ZM2009-12-20T16:03:58.511331ZN2009-12-20T16:04:26.056625ZO2009-12-20T16:04:53.112071ZP2009-12-20T16:05:20.655565ZQ2009-12-20T16:05:47.711849ZR2009-12-20T16:06:15.255126ZS2009-12-20T16:06:42.312945ZT2009-12-20T16:07:09.858098ZU2009-12-20T16:07:36.911543ZV2009-12-20T16:08:04.454200ZW2009-12-20T16:08:31.511104ZX2009-12-20T16:08:59.055187ZY2009-12-20T16:09:26.111626ZZ2009-12-20T16:09:53.653491Z[2009-12-20T16:10:20.710193Z\2009-12-20T16:10:48.255503Z]2009-12-20T16:11:15.310576Z^2009-12-20T16:11:42.852799Z_2009-12-20T16:12:09.912309Z`2009-12-20T16:12:37.452731Za2009-12-20T16:13:04.509217Zb2009-12-20T16:13:32.053905Zc2009-12-20T16:13:59.109972Zd2009-12-20T16:14:26.652411Ze2009-12-20T16:14:53.708539Zf2009-12-20T16:15:21.250778Zg2009-12-20T16:15:48.309047Zh2009-12-20T16:16:15.851532Zi2009-12-20T16:16:42.905971Zj2009-12-20T16:17:10.452489Zk2009-12-20T16:17:37.507144Zl2009-12-20T16:18:05.051973Zm2009-12-20T16:18:32.108055Zn2009-12-20T16:18:59.650370Zo2009-12-20T16:19:26.710842Zp2009-12-20T16:19:54.252086Zq2009-12-20T16:20:21.305950Zr2009-12-20T16:20:48.850406Zs2009-12-20T16:21:15.912135Zt2009-12-20T16:21:43.447425Zu2009-12-20T16:22:10.506018Zv2009-12-20T16:22:38.048458Zw2009-12-20T16:23:05.106233Zx2009-12-20T16:23:32.649554Zy2009-12-20T16:23:59.704785Zz2009-12-20T16:24:27.250280Z{2009-12-20T16:24:54.304516Z|2009-12-20T16:25:21.850212Z}2009-12-20T16:25:48.905659Z~2009-12-20T16:26:16.449153Z2009-12-20T16:26:43.506817Z€2009-12-20T16:27:11.048434Z2009-12-20T16:27:38.104810Z‚2009-12-20T11:55:03.729976Zƒ2009-12-20T11:55:30.778483Z„2009-12-20T11:55:58.325644Z…2009-12-20T11:56:25.379409Z†2009-12-20T11:56:52.924770Z‡2009-12-20T11:57:19.976850Zˆ2009-12-20T11:57:47.528565Z‰2009-12-20T11:58:14.583779ZŠ2009-12-20T11:58:42.126063Z‹2009-12-20T11:59:09.179783ZŒ2009-12-20T11:59:36
.723546Z2009-12-20T12:00:03.776893ZŽ2009-12-20T12:00:31.321060Z2009-12-20T12:00:58.378104Z2009-12-20T12:01:25.922777Z‘2009-12-20T12:01:52.981337Z’2009-12-20T12:02:20.521903Z“2009-12-20T12:02:47.577365Z”2009-12-20T12:03:15.121387Z•2009-12-20T12:03:42.177732Z–2009-12-20T12:04:09.721553Z—2009-12-20T12:04:36.774842Z˜2009-12-20T12:05:04.326900Z™2009-12-20T12:05:31.381212Zš2009-12-20T12:05:58.926274Z›2009-12-20T12:06:25.976104Zœ2009-12-20T12:06:53.520934Z2009-12-20T12:07:20.575451Zž2009-12-20T12:07:48.120030ZŸ2009-12-20T12:08:15.181369Z 2009-12-20T12:08:42.718767Z¡2009-12-20T12:09:09.773796Z¢2009-12-20T12:09:37.320283Z£2009-12-20T12:10:04.374333Z¤2009-12-20T12:10:31.920403Z¥2009-12-20T12:10:58.975895Z¦2009-12-20T12:11:26.521979Z§2009-12-20T12:11:53.572259Z¨2009-12-20T12:12:21.118903Z©2009-12-20T12:12:48.173543Zª2009-12-20T12:13:15.716789Z«2009-12-20T12:13:42.772960Z¬2009-12-20T12:14:10.319964Z­2009-12-20T12:14:37.379056Z®2009-12-20T12:15:04.921092Z¯2009-12-20T12:15:31.973357Z°2009-12-20T12:15:59.516216Z±2009-12-20T12:16:26.572686Z²2009-12-20T12:16:54.116955Z³2009-12-20T12:17:21.176450Z´2009-12-20T12:17:48.720378Zµ2009-12-20T12:18:15.778027Z¶2009-12-20T12:18:43.317427Z·2009-12-20T12:19:10.372919Z¸2009-12-20T12:19:37.916165Z¹2009-12-20T12:20:04.976497Zº2009-12-20T12:20:32.521418Z»2009-12-20T12:20:59.574817Z¼2009-12-20T12:21:27.116496Z½2009-12-20T12:21:54.174393Z¾2009-12-20T12:22:21.714443Z¿2009-12-20T12:22:48.774341ZÀ2009-12-20T12:23:16.318021ZÁ2009-12-20T12:23:43.374537ZÂ2009-12-20T12:24:10.915162ZÃ2009-12-20T12:24:37.971305ZÄ2009-12-20T12:25:05.515357ZÅ2009-12-20T12:25:32.573409ZÆ2009-12-20T12:26:00.117136ZÇ2009-12-20T12:26:27.170208ZÈ2009-12-20T12:26:54.714664ZÉ2009-12-20T12:27:21.768947ZÊ2009-12-20T12:38:44.516254ZË2009-12-20T12:39:11.566472ZÌ2009-12-20T12:39:39.114775ZÍ2009-12-20T12:40:06.171694ZÎ2009-12-20T12:40:33.708316ZÏ2009-12-20T12:41:00.766818ZÐ2009-12-20T12:41:28.311118ZÑ2009-12-20T12:41:55.371792ZÒ2009-12-20T12:42:22.910865ZÓ2009-12-20T12:42:49.971585ZÔ2009-12-20T12:43:17.509432ZÕ2009-12-20T12:43:44.571579ZÖ2009-12-20T12:44:12.109476Z×2009-12-20T12:44:39.170892ZØ2009-12-20T12:45:06.708663ZÙ2009-12-20T12:45:33.766807ZÚ2009-12-20T12:46:01.313264ZÛ2009-12-20T12:46:28.365934ZÜ2009-12-20T12:46:55.907194ZÝ2009-12-20T12:47:22.966099ZÞ2009-12-20T12:47:50.507158Zß2009-12-20T12:48:17.566652Zà2009-12-20T12:48:45.106299Zá2009-12-20T12:49:12.165794Zâ2009-12-20T12:49:39.706263Zã2009-12-20T12:50:06.767789Zä2009-12-20T12:50:34.310461Zå2009-12-20T12:51:01.363720Zæ2009-12-20T12:51:28.910021Zç2009-12-20T12:51:55.965282Zè2009-12-20T12:52:23.511568Zé2009-12-20T12:52:50.566037Zê2009-12-20T12:53:18.107297Zë2009-12-20T12:53:45.168622Zì2009-12-20T12:54:12.708455Zí2009-12-20T12:54:39.764118Zî2009-12-20T12:55:07.305767Zï2009-12-20T12:55:34.365090Zð2009-12-20T12:56:01.907716Zñ2009-12-20T12:56:28.964076Zò2009-12-20T12:56:56.508300Zó2009-12-20T12:57:23.566786Zô2009-12-20T12:57:51.106620Zõ2009-12-20T12:58:18.163509Zö2009-12-20T12:58:45.706165Z÷2009-12-20T12:59:12.763240Zø2009-12-20T12:59:40.305554Zù2009-12-20T13:00:07.362785Zú2009-12-20T13:00:34.907675Zû2009-12-20T13:01:01.961524Zü2009-12-20T13:01:29.505234Zý2009-12-20T13:01:56.561643Zþ2009-12-20T13:02:24.110627Zÿ2009-12-20T13:02:51.160922Z2009-12-20T13:03:18.704682Z2009-12-20T13:03:45.763975Z2009-12-20T13:04:13.303980Z2009-12-20T13:04:40.358634Z2009-12-20T13:05:07.904206Z2009-12-20T13:05:34.960675Z2009-12-20T13:06:02.504557Z2009-12-20T13:06:29.560050Z2009-12-20T13:06:57.102147Z	2009-12-20T13:07:24.158153Z
2009-12-20T13:07:51.702795Z2009-12-20T13:08:18.762999Z2009-12-20T13:08:46.302355Z
2009-12-20T13:09:13.361029Z2009-12-20T13:09:40.903498Z2009-12-20T13:10:07.957218Z2009-12-20T13:10:35.503215Z2009-12-20T10:59:59.439028Z2009-12-20T11:00:26.496675Z2009-12-20T11:00:54.040385Z2009-12-20T11:01:21.094838Z2009-12-20T11:01:48.639884Z2009-12-20T11:02:15.695951Z2009-12-20T11:02:43.245076Z2009-12-20T11:03:10.295899Z2009-12-20T11:03:37.842775Z2009-12-20T11:04:04.894539Z2009-12-20T11:04:32.438349Z2009-12-20T11:04:59.496633Z2009-12-20T11:05:27.043323Z2009-12-20T11:05:54.098411Z2009-12-20T11:06:21.638850Z 2009-12-20T11:06:48.698634Z!2009-12-20T11:07:16.242025Z"2009-12-20T11:07:43.285475Z#2009-12-20T11:08:10.837955Z$2009-12-20T11:08:37.892482Z%2009-12-20T11:09:05.440913Z&2009-12-20T11:09:32.495238Z'2009-12-20T11:10:00.039781Z(2009-12-20T11:10:27.092119Z)2009-12-20T11:10:54.637397Z*2009-12-20T11:11:21.696980Z+2009-12-20T11:11:49.237532Z,2009-12-20T11:12:16.296293Z-2009-12-20T11:12:43.838287Z.2009-12-20T11:13:10.890827Z/2009-12-20T11:13:38.437429Z02009-12-20T11:14:05.491494Z12009-12-20T11:14:33.041798Z22009-12-20T11:15:00.091023Z32009-12-20T11:15:27.635727Z42009-12-20T11:15:54.692182Z52009-12-20T11:16:22.236839Z62009-12-20T11:16:49.290703Z72009-12-20T11:17:16.841628Z82009-12-20T11:17:43.891614Z92009-12-20T11:18:11.436648Z:2009-12-20T11:18:38.490429Z;2009-12-20T11:19:06.036669Z<2009-12-20T11:19:33.092535Z=2009-12-20T11:20:00.635438Z>2009-12-20T11:20:27.687457Z?2009-12-20T11:20:55.234348Z@2009-12-20T11:21:22.289406ZA2009-12-20T11:21:49.833056ZB2009-12-20T11:22:16.889525ZC2009-12-20T11:22:44.434819ZD2009-12-20T11:23:11.494656ZE2009-12-20T11:23:39.033527ZF2009-12-20T11:24:06.087919ZG2009-12-20T11:24:33.635073ZH2009-12-20T11:25:00.688084ZI2009-12-20T11:25:28.237024ZJ2009-12-20T11:25:55.287010ZK2009-12-20T11:26:22.833296ZL2009-12-20T11:26:49.887547ZM2009-12-20T11:27:17.433028ZN2009-12-20T11:27:44.488691ZO2009-12-20T11:28:12.036777ZP2009-12-20T11:28:39.089647ZQ2009-12-20T11:29:06.632940ZR2009-12-20T11:29:33.684292ZS2009-12-20T11:30:01.232393ZT2009-12-20T11:30:28.286427ZU2009-12-20T11:30:55.832109ZV2009-12-20T11:31:22.887742ZW2009-12-20T11:31:50.432243ZX2009-12-20T11:32:17.487489ZY2009-12-20T10:32:41.449933ZZ2009-12-20T10:33:08.499821Z[2009-12-20T10:33:36.050068Z\2009-12-20T10:34:03.108023Z]2009-12-20T10:34:30.648993Z^2009-12-20T10:34:57.703131Z_2009-12-20T10:35:25.249330Z`2009-12-20T10:35:52.307687Za2009-12-20T10:36:19.848239Zb2009-12-20T10:36:46.901152Zc2009-12-20T10:37:14.448762Zd2009-12-20T10:37:41.498680Ze2009-12-20T10:38:09.047805Zf2009-12-20T10:38:36.104192Zg2009-12-20T10:39:03.650608Zh2009-12-20T10:39:30.701819Zi2009-12-20T10:39:58.243266Zj2009-12-20T10:40:25.302296Zk2009-12-20T10:40:52.849031Zl2009-12-20T10:41:19.903918Zm2009-12-20T10:41:47.443893Zn2009-12-20T10:42:14.498293Zo2009-12-20T10:42:42.049750Zp2009-12-20T10:43:09.099520Zq2009-12-20T10:43:36.644626Zr2009-12-20T10:44:03.699012Zs2009-12-20T10:44:31.244994Zt2009-12-20T10:44:58.299462Zu2009-12-20T10:45:25.846167Zv2009-12-20T10:45:52.900419Zw2009-12-20T10:46:20.449140Zx2009-12-20T10:46:47.503392Zy2009-12-20T10:47:15.045055Zz2009-12-20T10:47:42.104348Z{2009-12-20T10:48:09.643405Z|2009-12-20T10:48:36.699674Z}2009-12-20T10:24:30.050822Z~2009-12-20T10:24:57.107695Z2009-12-20T10:25:24.651501Z€2009-12-20T10:25:51.712623Z2009-12-20T10:26:19.253621Z‚2009-12-20T10:26:46.305640Zƒ2009-12-20T10:27:13.852314Z„2009-12-20T10:27:40.911980Z…2009-12-20T10:28:08.451829Z†2009-12-20T10:28:35.501271Z‡2009-12-20T10:29:03.051689Zˆ2009-12-20T10:29:30.110248Z‰2009-12-20T10:29:57.650020ZŠ2009-12-20T10:30:24.706320Z‹2009-12-20T10:30:52.249177ZŒ2009-12-20T10:31:19
.304422Z2009-12-20T10:31:46.850336ZŽ2009-12-20T10:32:13.903564Z2009-12-20T10:20:24.106798Z2009-12-20T10:20:51.662587Z‘2009-12-20T10:21:18.706612Z’2009-12-20T10:21:46.252853Z“2009-12-20T10:22:13.306918Z”2009-12-20T10:22:40.858835Z•2009-12-20T10:23:07.907426Z–2009-12-20T10:23:35.452704Z—2009-12-20T10:24:02.512198Z˜2009-12-20T10:18:34.908689Z™2009-12-20T10:19:02.462767Zš2009-12-20T10:19:29.508808Z›2009-12-20T10:19:57.055611Zœ2009-12-20T10:17:40.310054Z2009-12-20T10:18:07.862647Zž2009-12-20T10:17:13.262140ZŸ2009-12-20T10:16:45.711273Z 2009-12-20T10:16:18.662409ZðQ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/data_files/vlen_string_s390x.h50000644000175000017500000002146014350630273023247 0ustar00takluyvertakluyver[binary HDF5 test-data file; its readable embedded strings record the configuration used to create it: h5py 2.10.0, HDF5 1.10.4, Python 3.6.10, numpy 1.19.1, sys.platform linux, created on a Linux 4.12.14 s390x host]
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1686137956.0
h5py-3.13.0/h5py/tests/test_attribute_create.py0000644000175000017500000000565614440066144022366 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Tests the h5py.AttributeManager.create() method.
"""

import numpy as np
from .. import h5t, h5a

from .common import ut, TestCase
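
# Illustrative sketch, added for exposition (not part of the original file):
# the method exercised by this module is AttributeManager.create(name, data,
# shape=None, dtype=None), normally reached through the .attrs property, e.g.
#
#     with h5py.File("example.h5", "w") as f:   # "example.h5" is a hypothetical name
#         f.attrs.create("answer", data=42, dtype="i8")
#
# The tests below drive the same call against a temporary file (self.f)
# provided by the shared TestCase base class.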

class TestArray(TestCase):

    """
        Check that top-level array types can be created and read.
    """

    def test_int(self):
        # See issue 498

        dt = np.dtype('(3,)i')
        data = np.arange(3, dtype='i')

        self.f.attrs.create('x', data=data, dtype=dt)

        aid = h5a.open(self.f.id, b'x')

        htype = aid.get_type()
        self.assertEqual(htype.get_class(), h5t.ARRAY)

        out = self.f.attrs['x']

        self.assertArrayEqual(out, data)

    def test_string_dtype(self):
        # See issue 498 discussion

        self.f.attrs.create('x', data=42, dtype='i8')
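        # Note (added for exposition, not in the original test): the call above
        # checks that a dtype given as a NumPy type string ('i8') is accepted;
        # it is expected to behave the same as
        #     self.f.attrs.create('x', data=42, dtype=np.dtype('i8'))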

    def test_str(self):
        # See issue 1057
        self.f.attrs.create('x', chr(0x03A9))
        out = self.f.attrs['x']
        self.assertEqual(out, chr(0x03A9))
        self.assertIsInstance(out, str)

    def test_tuple_of_unicode(self):
        # Test that a tuple of unicode strings can be set as an attribute. It will
        # be converted to a numpy array of vlen unicode type:
        data = ('a', 'b')
        self.f.attrs.create('x', data=data)
        result = self.f.attrs['x']
        self.assertTrue(all(result == data))
        self.assertEqual(result.dtype, np.dtype('O'))

        # However, a numpy array with a 'U' (fixed-width unicode) dtype passed
        # in will not be automatically converted, and should raise an error
        # because it does not map to an h5py dtype.
        data_as_U_array = np.array(data)
        self.assertEqual(data_as_U_array.dtype, np.dtype('U1'))
        with self.assertRaises(TypeError):
            self.f.attrs.create('y', data=data_as_U_array)
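        # Added note (not part of the original test): such a 'U' array can be
        # stored by first converting it back to Python str objects, mirroring
        # the passing case above, e.g.
        #     self.f.attrs.create('y', data=data_as_U_array.tolist())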

    def test_shape(self):
        self.f.attrs.create('x', data=42, shape=1)
        result = self.f.attrs['x']
        self.assertEqual(result.shape, (1,))

        self.f.attrs.create('y', data=np.arange(3), shape=3)
        result = self.f.attrs['y']
        self.assertEqual(result.shape, (3,))

    def test_dtype(self):
        dt = np.dtype('(3,)i')
        array = np.arange(3, dtype='i')
        self.f.attrs.create('x', data=array, dtype=dt)
        # Array dtype shape is incompatible with data shape
        array = np.arange(4, dtype='i')
        with self.assertRaises(ValueError):
            self.f.attrs.create('x', data=array, dtype=dt)
        # Shape of new attribute conflicts with shape of data
        dt = np.dtype('()i')
        with self.assertRaises(ValueError):
            self.f.attrs.create('x', data=array, shape=(5,), dtype=dt)

    def test_key_type(self):
        with self.assertRaises(TypeError):
            self.f.attrs.create(1, data=('a', 'b'))
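
# --- Editorial sketch (not part of the upstream test suite) ------------------
# A minimal, runnable illustration of the AttributeManager.create() behaviour
# exercised above: an explicit dtype fixes the stored type, and an explicit
# shape reshapes the data.  The temporary file path and attribute names are
# assumptions made purely for demonstration; the __main__ guard keeps this out
# of normal test collection.
if __name__ == "__main__":
    import os
    import tempfile

    import h5py

    demo_path = os.path.join(tempfile.mkdtemp(), "attr_create_demo.h5")
    with h5py.File(demo_path, "w") as f:
        # Scalar data stored with an explicit 8-byte integer type
        f.attrs.create("answer", data=42, dtype="i8")
        # Data reshaped to the requested attribute shape
        f.attrs.create("vec", data=np.arange(3), shape=(3,))
        print(dict(f.attrs))  # e.g. {'answer': 42, 'vec': array([0, 1, 2])}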
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/tests/test_attrs.py0000644000175000017500000002226514675110407020172 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Attributes testing module

    Covers all operations which access the .attrs property, with the
    exception of data read/write and type conversion.  Those operations
    are tested by module test_attrs_data.
"""

import numpy as np

from collections.abc import MutableMapping

from .common import TestCase, ut

import h5py
from h5py import File
from h5py import h5a,  h5t
from h5py import AttributeManager


class BaseAttrs(TestCase):

    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()

class TestRepr(TestCase):

    """ Feature: AttributeManager provide a helpful
        __repr__ string
    """

    def test_repr(self):
        grp = self.f.create_group('grp')
        grp.attrs.create('att', 1)
        self.assertIsInstance(repr(grp.attrs), str)
        grp.id.close()
        self.assertIsInstance(repr(grp.attrs), str)


class TestAccess(BaseAttrs):

    """
        Feature: Attribute creation/retrieval via special methods
    """

    def test_create(self):
        """ Attribute creation by direct assignment """
        self.f.attrs['a'] = 4.0
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 4.0)

    def test_create_2(self):
        """ Attribute creation by create() method """
        self.f.attrs.create('a', 4.0)
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 4.0)

    def test_modify(self):
        """ Attributes are modified by direct assignment"""
        self.f.attrs['a'] = 3
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 3)
        self.f.attrs['a'] = 4
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 4)

    def test_modify_2(self):
        """ Attributes are modified by modify() method """
        self.f.attrs.modify('a',3)
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 3)

        self.f.attrs.modify('a', 4)
        self.assertEqual(list(self.f.attrs.keys()), ['a'])
        self.assertEqual(self.f.attrs['a'], 4)

        # If the attribute doesn't exist, create new
        self.f.attrs.modify('b', 5)
        self.assertEqual(list(self.f.attrs.keys()), ['a', 'b'])
        self.assertEqual(self.f.attrs['a'], 4)
        self.assertEqual(self.f.attrs['b'], 5)

        # Shape of new value is incompatible with the previous
        new_value = np.arange(5)
        with self.assertRaises(TypeError):
            self.f.attrs.modify('b', new_value)

    def test_overwrite(self):
        """ Attributes are silently overwritten """
        self.f.attrs['a'] = 4.0
        self.f.attrs['a'] = 5.0
        self.assertEqual(self.f.attrs['a'], 5.0)

    def test_rank(self):
        """ Attribute rank is preserved """
        self.f.attrs['a'] = (4.0, 5.0)
        self.assertEqual(self.f.attrs['a'].shape, (2,))
        self.assertArrayEqual(self.f.attrs['a'], np.array((4.0,5.0)))

    def test_single(self):
        """ Attributes of shape (1,) don't become scalars """
        self.f.attrs['a'] = np.ones((1,))
        out = self.f.attrs['a']
        self.assertEqual(out.shape, (1,))
        self.assertEqual(out[()], 1)

    def test_access_exc(self):
        """ Attempt to access missing item raises KeyError """
        with self.assertRaises(KeyError):
            self.f.attrs['a']

    def test_get_id(self):
        self.f.attrs['a'] = 4.0
        aid = self.f.attrs.get_id('a')
        assert isinstance(aid, h5a.AttrID)

        with self.assertRaises(KeyError):
            self.f.attrs.get_id('b')

class TestDelete(BaseAttrs):

    """
        Feature: Deletion of attributes using __delitem__
    """

    def test_delete(self):
        """ Deletion via "del" """
        self.f.attrs['a'] = 4.0
        self.assertIn('a', self.f.attrs)
        del self.f.attrs['a']
        self.assertNotIn('a', self.f.attrs)

    def test_delete_exc(self):
        """ Attempt to delete missing item raises KeyError """
        with self.assertRaises(KeyError):
            del self.f.attrs['a']


class TestUnicode(BaseAttrs):

    """
        Feature: Attributes can be accessed via Unicode or byte strings
    """

    def test_ascii(self):
        """ Access via pure-ASCII byte string """
        self.f.attrs[b"ascii"] = 42
        out = self.f.attrs[b"ascii"]
        self.assertEqual(out, 42)

    def test_raw(self):
        """ Access via non-ASCII byte string """
        name = b"non-ascii\xfe"
        self.f.attrs[name] = 42
        out = self.f.attrs[name]
        self.assertEqual(out, 42)

    def test_unicode(self):
        """ Access via Unicode string with non-ascii characters """
        name = "Omega" + chr(0x03A9)
        self.f.attrs[name] = 42
        out = self.f.attrs[name]
        self.assertEqual(out, 42)


class TestCreate(BaseAttrs):

    """
        Options for explicit attribute creation
    """

    def test_named(self):
        """ Attributes created from named types link to the source type object
        """
        self.f['type'] = np.dtype('u8')
        self.f.attrs.create('x', 42, dtype=self.f['type'])
        self.assertEqual(self.f.attrs['x'], 42)
        aid = h5a.open(self.f.id, b'x')
        htype = aid.get_type()
        htype2 = self.f['type'].id
        self.assertEqual(htype, htype2)
        self.assertTrue(htype.committed())

    def test_empty(self):
        # https://github.com/h5py/h5py/issues/1540
        """ Create attribute with h5py.Empty value
        """
        self.f.attrs.create('empty', h5py.Empty('f'))
        self.assertEqual(self.f.attrs['empty'], h5py.Empty('f'))

        self.f.attrs.create('empty', h5py.Empty(None))
        self.assertEqual(self.f.attrs['empty'], h5py.Empty(None))

class TestMutableMapping(BaseAttrs):
    '''Tests if the registration of AttributeManager as a MutableMapping
    behaves as expected
    '''
    def test_resolution(self):
        assert issubclass(AttributeManager, MutableMapping)
        assert isinstance(self.f.attrs, MutableMapping)

    def test_validity(self):
        '''
        Test that the required functions are implemented.
        '''
        AttributeManager.__getitem__
        AttributeManager.__setitem__
        AttributeManager.__delitem__
        AttributeManager.__iter__
        AttributeManager.__len__

class TestVlen(BaseAttrs):
    def test_vlen(self):
        a = np.array([np.arange(3), np.arange(4)],
            dtype=h5t.vlen_dtype(int))
        self.f.attrs['a'] = a
        self.assertArrayEqual(self.f.attrs['a'][0], a[0])

    def test_vlen_s1(self):
        dt = h5py.vlen_dtype(np.dtype('S1'))
        a = np.empty((1,), dtype=dt)
        a[0] = np.array([b'a', b'b'], dtype='S1')

        self.f.attrs.create('test', a)
        self.assertArrayEqual(self.f.attrs['test'][0], a[0])


class TestTrackOrder(BaseAttrs):
    def fill_attrs(self, track_order):
        attrs = self.f.create_group('test', track_order=track_order).attrs
        for i in range(100):
            attrs[str(i)] = i
        return attrs

    # https://forum.hdfgroup.org/t/bug-h5arename-fails-unexpectedly/4881
    def test_track_order(self):
        attrs = self.fill_attrs(track_order=True)  # creation order
        self.assertEqual(list(attrs),
                         [str(i) for i in range(100)])

    def test_no_track_order(self):
        attrs = self.fill_attrs(track_order=False)  # name alphanumeric
        self.assertEqual(list(attrs),
                         sorted([str(i) for i in range(100)]))

    def fill_attrs2(self, track_order):
        group = self.f.create_group('test', track_order=track_order)
        for i in range(12):
            group.attrs[str(i)] = i
        return group

    def test_track_order_overwrite_delete(self):
        # issue 1385
        group = self.fill_attrs2(track_order=True)  # creation order
        self.assertEqual(group.attrs["11"], 11)
        # overwrite attribute
        group.attrs['11'] = 42.0
        self.assertEqual(group.attrs["11"], 42.0)
        # delete attribute
        self.assertIn('10', group.attrs)
        del group.attrs['10']
        self.assertNotIn('10', group.attrs)


class TestDatatype(BaseAttrs):

    def test_datatype(self):
        self.f['foo'] = np.dtype('f')
        dt = self.f['foo']
        self.assertEqual(list(dt.attrs.keys()), [])
        dt.attrs.create('a', 4.0)
        self.assertEqual(list(dt.attrs.keys()), ['a'])
        self.assertEqual(list(dt.attrs.values()), [4.0])

def test_python_int_uint64(writable_file):
    f = writable_file
    data = [np.iinfo(np.int64).max, np.iinfo(np.int64).max + 1]

    # Check creating a new attribute
    f.attrs.create('a', data, dtype=np.uint64)
    assert f.attrs['a'].dtype == np.dtype(np.uint64)
    np.testing.assert_array_equal(f.attrs['a'], np.array(data, dtype=np.uint64))

    # Check modifying an existing attribute
    f.attrs.modify('a', data)
    np.testing.assert_array_equal(f.attrs['a'], np.array(data, dtype=np.uint64))
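
# --- Editorial sketch (not part of the upstream test suite) ------------------
# A small, hedged illustration of the track_order behaviour checked in
# TestTrackOrder above: with track_order=True, attribute names iterate in
# creation order; otherwise they come back in name order.  The file and group
# names are illustrative assumptions; the __main__ guard keeps this out of
# pytest collection.
if __name__ == "__main__":
    import os
    import tempfile

    demo_path = os.path.join(tempfile.mkdtemp(), "track_order_demo.h5")
    with h5py.File(demo_path, "w") as f:
        grp = f.create_group("ordered", track_order=True)
        for name in ("b", "a", "c"):
            grp.attrs[name] = 0
        print(list(grp.attrs))  # ['b', 'a', 'c'] -- creation order preserved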
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0
h5py-3.13.0/h5py/tests/test_attrs_data.py0000644000175000017500000002305014532312724021152 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Attribute data transfer testing module

    Covers all data read/write and type-conversion operations for attributes.
"""

import numpy as np

from .common import TestCase, ut

import h5py
from h5py import h5a, h5s, h5t
from h5py import File
from h5py._hl.base import is_empty_dataspace


class BaseAttrs(TestCase):

    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()


class TestScalar(BaseAttrs):

    """
        Feature: Scalar types map correctly to array scalars
    """

    def test_int(self):
        """ Integers are read as correct NumPy type """
        self.f.attrs['x'] = np.array(1, dtype=np.int8)
        out = self.f.attrs['x']
        self.assertIsInstance(out, np.int8)

    def test_compound(self):
        """ Compound scalars are read as numpy.void """
        dt = np.dtype([('a', 'i'), ('b', 'f')])
        data = np.array((1, 4.2), dtype=dt)
        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertIsInstance(out, np.void)
        self.assertEqual(out, data)
        self.assertEqual(out['b'], data['b'])

    def test_compound_with_vlen_fields(self):
        """ Compound scalars with vlen fields can be written and read """
        dt = np.dtype([('a', h5py.vlen_dtype(np.int32)),
                       ('b', h5py.vlen_dtype(np.int32))])

        data = np.array((np.array(list(range(1, 5)), dtype=np.int32),
                        np.array(list(range(8, 10)), dtype=np.int32)), dtype=dt)[()]

        self.f.attrs['x'] = data
        out = self.f.attrs['x']

        # Specifying check_alignment=False because vlen fields carry 8 bytes of
        # padding: the vlen datatype in HDF5 occupies 16 bytes
        self.assertArrayEqual(out, data, check_alignment=False)

    def test_nesting_compound_with_vlen_fields(self):
        """ Compound scalars with nested compound vlen fields can be written and read """
        dt_inner = np.dtype([('a', h5py.vlen_dtype(np.int32)),
                             ('b', h5py.vlen_dtype(np.int32))])

        dt = np.dtype([('f1', h5py.vlen_dtype(dt_inner)),
                       ('f2', np.int64)])

        inner1 = (np.array(range(1, 3), dtype=np.int32),
                  np.array(range(6, 9), dtype=np.int32))

        inner2 = (np.array(range(10, 14), dtype=np.int32),
                  np.array(range(16, 20), dtype=np.int32))

        data = np.array((np.array([inner1, inner2], dtype=dt_inner),
                         2),
                        dtype=dt)[()]

        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertArrayEqual(out, data, check_alignment=False)

    def test_vlen_compound_with_vlen_string(self):
        """ Compound scalars with vlen compounds containing vlen strings can be written and read """
        dt_inner = np.dtype([('a', h5py.string_dtype()),
                             ('b', h5py.string_dtype())])

        dt = np.dtype([('f', h5py.vlen_dtype(dt_inner))])

        data = np.array((np.array([(b"apples", b"bananas"), (b"peaches", b"oranges")], dtype=dt_inner),),dtype=dt)[()]
        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertArrayEqual(out, data, check_alignment=False)


class TestArray(BaseAttrs):

    """
        Feature: Non-scalar types are correctly retrieved as ndarrays
    """

    def test_single(self):
        """ Single-element arrays are correctly recovered """
        data = np.ndarray((1,), dtype='f')
        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertIsInstance(out, np.ndarray)
        self.assertEqual(out.shape, (1,))

    def test_multi(self):
        """ Rank-1 arrays are correctly recovered """
        data = np.ndarray((42,), dtype='f')
        data[:] = 42.0
        data[10:35] = -47.0
        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertIsInstance(out, np.ndarray)
        self.assertEqual(out.shape, (42,))
        self.assertArrayEqual(out, data)


class TestTypes(BaseAttrs):

    """
        Feature: All supported types can be stored in attributes
    """

    def test_int(self):
        """ Storage of integer types """
        dtypes = (np.int8, np.int16, np.int32, np.int64,
                  np.uint8, np.uint16, np.uint32, np.uint64)
        for dt in dtypes:
            data = np.ndarray((1,), dtype=dt)
            data[...] = 42
            self.f.attrs['x'] = data
            out = self.f.attrs['x']
            self.assertEqual(out.dtype, dt)
            self.assertArrayEqual(out, data)

    def test_float(self):
        """ Storage of floating point types """
        dtypes = tuple(np.dtype(x) for x in ('f4', '>f8', 'c8', 'c16'))

        for dt in dtypes:
            data = np.ndarray((1,), dtype=dt)
            data[...] = -4.2j + 35.9
            self.f.attrs['x'] = data
            out = self.f.attrs['x']
            self.assertEqual(out.dtype, dt)
            self.assertArrayEqual(out, data)

    def test_string(self):
        """ Storage of fixed-length strings """
        dtypes = tuple(np.dtype(x) for x in ('|S1', '|S10'))

        for dt in dtypes:
            data = np.ndarray((1,), dtype=dt)
            data[...] = 'h'
            self.f.attrs['x'] = data
            out = self.f.attrs['x']
            self.assertEqual(out.dtype, dt)
            self.assertEqual(out[0], data[0])

    def test_bool(self):
        """ Storage of NumPy booleans """

        data = np.ndarray((2,), dtype=np.bool_)
        data[...] = True, False
        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertEqual(out.dtype, data.dtype)
        self.assertEqual(out[0], data[0])
        self.assertEqual(out[1], data[1])

    def test_vlen_string_array(self):
        """ Storage of vlen byte string arrays"""
        dt = h5py.string_dtype(encoding='ascii')

        data = np.ndarray((2,), dtype=dt)
        data[...] = "Hello", "Hi there!  This is HDF5!"

        self.f.attrs['x'] = data
        out = self.f.attrs['x']
        self.assertEqual(out.dtype, dt)
        self.assertEqual(out[0], data[0])
        self.assertEqual(out[1], data[1])

    def test_string_scalar(self):
        """ Storage of variable-length byte string scalars (auto-creation) """

        self.f.attrs['x'] = b'Hello'
        out = self.f.attrs['x']

        self.assertEqual(out, 'Hello')
        self.assertEqual(type(out), str)

        aid = h5py.h5a.open(self.f.id, b"x")
        tid = aid.get_type()
        self.assertEqual(type(tid), h5py.h5t.TypeStringID)
        self.assertEqual(tid.get_cset(), h5py.h5t.CSET_ASCII)
        self.assertTrue(tid.is_variable_str())

    def test_unicode_scalar(self):
        """ Storage of variable-length unicode strings (auto-creation) """

        self.f.attrs['x'] = u"Hello" + chr(0x2340) + u"!!"
        out = self.f.attrs['x']
        self.assertEqual(out, u"Hello" + chr(0x2340) + u"!!")
        self.assertEqual(type(out), str)

        aid = h5py.h5a.open(self.f.id, b"x")
        tid = aid.get_type()
        self.assertEqual(type(tid), h5py.h5t.TypeStringID)
        self.assertEqual(tid.get_cset(), h5py.h5t.CSET_UTF8)
        self.assertTrue(tid.is_variable_str())


class TestEmpty(BaseAttrs):

    def setUp(self):
        BaseAttrs.setUp(self)
        sid = h5s.create(h5s.NULL)
        tid = h5t.C_S1.copy()
        tid.set_size(10)
        aid = h5a.create(self.f.id, b'x', tid, sid)
        self.empty_obj = h5py.Empty(np.dtype("S10"))

    def test_read(self):
        self.assertEqual(
            self.empty_obj, self.f.attrs['x']
        )

    def test_write(self):
        self.f.attrs["y"] = self.empty_obj
        self.assertTrue(is_empty_dataspace(h5a.open(self.f.id, b'y')))

    def test_modify(self):
        with self.assertRaises(OSError):
            self.f.attrs.modify('x', 1)

    def test_values(self):
        # list() is for Py3 where these are iterators
        values = list(self.f.attrs.values())
        self.assertEqual(
            [self.empty_obj], values
        )

    def test_items(self):
        items = list(self.f.attrs.items())
        self.assertEqual(
            [(u"x", self.empty_obj)], items
        )

    def test_itervalues(self):
        values = list(self.f.attrs.values())
        self.assertEqual(
            [self.empty_obj], values
        )

    def test_iteritems(self):
        items = list(self.f.attrs.items())
        self.assertEqual(
            [(u"x", self.empty_obj)], items
        )


class TestWriteException(BaseAttrs):

    """
        Ensure failed attribute writes don't leave garbage behind.
    """

    def test_write(self):
        """ ValueError on string write wipes out attribute """

        s = b"Hello\x00Hello"

        try:
            self.f.attrs['x'] = s
        except ValueError:
            pass

        with self.assertRaises(KeyError):
            self.f.attrs['x']
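
# --- Editorial sketch (not part of the upstream test suite) ------------------
# A brief, hedged demonstration of two attribute features covered above:
# variable-length string attributes via h5py.string_dtype(), and "empty"
# (null-dataspace) attributes via h5py.Empty().  Names and the temporary path
# are illustrative assumptions; the __main__ guard keeps this out of pytest.
if __name__ == "__main__":
    import os
    import tempfile

    demo_path = os.path.join(tempfile.mkdtemp(), "attrs_data_demo.h5")
    with File(demo_path, "w") as f:
        str_dt = h5py.string_dtype(encoding="utf-8")
        labels = np.array(["alpha", "beta"], dtype=object)
        f.attrs.create("labels", data=labels, dtype=str_dt)   # vlen strings
        f.attrs.create("placeholder", h5py.Empty("f4"))       # a type, no data
        print(f.attrs["labels"], f.attrs["placeholder"])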
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/test_base.py0000644000175000017500000000735014350630273017743 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Common high-level operations test

    Tests features common to all high-level objects, like the .name property.
"""

from h5py import File
from h5py._hl.base import is_hdf5, Empty
from .common import ut, TestCase, UNICODE_FILENAMES

import numpy as np
import os
import tempfile

class BaseTest(TestCase):

    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()


class TestName(BaseTest):

    """
        Feature: .name attribute returns the object name
    """

    def test_anonymous(self):
        """ Anonymous objects have name None """
        grp = self.f.create_group(None)
        self.assertIs(grp.name, None)

class TestParent(BaseTest):

    """
        test the parent group of the high-level interface objects
    """

    def test_object_parent(self):
        # Anonymous objects
        grp = self.f.create_group(None)
        # Parent of an anonymous object is undefined
        with self.assertRaises(ValueError):
            grp.parent

        # Named objects
        grp = self.f.create_group("bar")
        sub_grp = grp.create_group("foo")
        parent = sub_grp.parent.name
        self.assertEqual(parent, "/bar")

class TestMapping(BaseTest):

    """
        Test if the registration of Group as a
        Mapping behaves as expected
    """

    def setUp(self):
        super().setUp()
        data = ('a', 'b')
        self.grp = self.f.create_group('bar')
        self.attr = self.f.attrs.create('x', data)

    def test_keys(self):
        key_1 = self.f.keys()
        self.assertIsInstance(repr(key_1), str)
        key_2 = self.grp.keys()
        self.assertIsInstance(repr(key_2), str)

    def test_values(self):
        value_1 = self.f.values()
        self.assertIsInstance(repr(value_1), str)
        value_2 = self.grp.values()
        self.assertIsInstance(repr(value_2), str)

    def test_items(self):
        item_1 = self.f.items()
        self.assertIsInstance(repr(item_1), str)
        item_2 = self.grp.items()
        self.assertIsInstance(repr(item_2), str)


class TestRepr(BaseTest):

    """
        repr() works correctly with Unicode names
    """

    USTRING = chr(0xfc) + chr(0xdf)

    def _check_type(self, obj):
        self.assertIsInstance(repr(obj), str)

    def test_group(self):
        """ Group repr() with unicode """
        grp = self.f.create_group(self.USTRING)
        self._check_type(grp)

    def test_dataset(self):
        """ Dataset repr() with unicode """
        dset = self.f.create_dataset(self.USTRING, (1,))
        self._check_type(dset)

    def test_namedtype(self):
        """ Named type repr() with unicode """
        self.f['type'] = np.dtype('f')
        typ = self.f['type']
        self._check_type(typ)

    def test_empty(self):
        data = Empty(dtype='f')
        self.assertNotEqual(Empty(dtype='i'), data)
        self._check_type(data)

    @ut.skipIf(not UNICODE_FILENAMES, "Filesystem unicode support required")
    def test_file(self):
        """ File object repr() with unicode """
        fname = tempfile.mktemp(self.USTRING+'.hdf5')
        try:
            with File(fname,'w') as f:
                self._check_type(f)
        finally:
            try:
                os.unlink(fname)
            except Exception:
                pass

def test_is_hdf5():
    filename = File(tempfile.mktemp(), "w").filename
    assert is_hdf5(filename)
    # non-existing HDF5 file
    filename = tempfile.mktemp()
    assert not is_hdf5(filename)
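
# --- Editorial sketch (not part of the upstream test suite) ------------------
# A minimal illustration of the .name/.parent properties tested above and of
# the is_hdf5() helper imported at the top of this module.  The temporary path
# is an assumption for the example; the __main__ guard keeps it out of pytest.
if __name__ == "__main__":
    demo_path = tempfile.mktemp(suffix=".h5")
    with File(demo_path, "w") as f:
        grp = f.create_group("outer/inner")   # intermediate groups are created
        print(grp.name)                       # '/outer/inner'
        print(grp.parent.name)                # '/outer'
    print(is_hdf5(demo_path))                 # True once the file exists on disk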
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0
h5py-3.13.0/h5py/tests/test_big_endian_file.py0000644000175000017500000000264714532312724022113 0ustar00takluyvertakluyverimport pytest

import numpy as np
from h5py import File
from .common import TestCase
from .data_files import get_data_file_path


def test_vlen_big_endian():
    with File(get_data_file_path("vlen_string_s390x.h5")) as f:
        assert f.attrs["created_on_s390x"] == 1

        dset = f["DSvariable"]
        assert dset[0] == b"Parting"
        assert dset[1] == b"is such"
        assert dset[2] == b"sweet"
        assert dset[3] == b"sorrow..."

        dset = f["DSLEfloat"]
        assert dset[0] == 3.14
        assert dset[1] == 1.61
        assert dset[2] == 2.71
        assert dset[3] == 2.41
        assert dset[4] == 1.2
        assert dset.dtype == "f8"

        assert f["DSLEint"][0] == 1
        assert f["DSLEint"].dtype == "i8"


class TestEndianess(TestCase):
    def test_simple_int_be(self):
        fname = self.mktemp()

        arr = np.ndarray(shape=(1,), dtype=">i4", buffer=bytearray([0, 1, 3, 2]))
        be_number = 0 * 256 ** 3 + 1 * 256 ** 2 + 3 * 256 ** 1 + 2 * 256 ** 0

        with File(fname, mode="w") as f:
            f.create_dataset("int", data=arr)

        with File(fname, mode="r") as f:
            assert f["int"][()][0] == be_number
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_completions.py0000644000175000017500000000270114045746670021372 0ustar00takluyvertakluyverfrom .common import TestCase


class TestCompletions(TestCase):

    def test_group_completions(self):
        # Test completions on top-level file.
        g = self.f.create_group('g')
        self.f.create_group('h')
        self.f.create_dataset('data', [1, 2, 3])
        self.assertEqual(
            self.f._ipython_key_completions_(),
            ['data', 'g', 'h'],
        )

        self.f.create_dataset('data2', [1, 2, 3])
        self.assertEqual(
            self.f._ipython_key_completions_(),
            ['data', 'data2', 'g', 'h'],
        )

        # Test on subgroup.
        g.create_dataset('g_data1', [1, 2, 3])
        g.create_dataset('g_data2', [4, 5, 6])
        self.assertEqual(
            g._ipython_key_completions_(),
            ['g_data1', 'g_data2'],
        )

        g.create_dataset('g_data3', [7, 8, 9])
        self.assertEqual(
            g._ipython_key_completions_(),
            ['g_data1', 'g_data2', 'g_data3'],
        )

    def test_attrs_completions(self):
        attrs = self.f.attrs

        # Write out of alphabetical order to test that completions come back in
        # alphabetical order, as opposed to, say, insertion order.
        attrs['b'] = 1
        attrs['a'] = 2
        self.assertEqual(
            attrs._ipython_key_completions_(),
            ['a', 'b']
        )

        attrs['c'] = 3
        self.assertEqual(
            attrs._ipython_key_completions_(),
            ['a', 'b', 'c']
        )
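
# --- Editorial sketch (not part of the upstream test suite) ------------------
# Group and AttributeManager implement _ipython_key_completions_, the hook
# IPython/Jupyter uses to tab-complete f["<TAB>"]; that is what the tests
# above exercise.  The temporary path and names are illustrative assumptions.
if __name__ == "__main__":
    import tempfile

    import h5py

    demo_path = tempfile.mktemp(suffix=".h5")
    with h5py.File(demo_path, "w") as f:
        f.create_group("alpha")
        f.create_dataset("beta", data=[1, 2, 3])
        print(f._ipython_key_completions_())   # ['alpha', 'beta']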
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/h5py/tests/test_dataset.py0000644000175000017500000023505414755054371020472 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Dataset testing operations.

    Tests all dataset operations, including creation, with the exception of:

    1. Slicing operations for read and write, handled by module test_slicing
    2. Type conversion for read and write (currently untested)
"""

import pathlib
import os
import sys
import numpy as np
import platform
import pytest
import warnings

from .common import ut, TestCase
from .data_files import get_data_file_path
from h5py import File, Group, Dataset
from h5py._hl.base import is_empty_dataspace, product
from h5py import h5f, h5t
from h5py.h5py_warnings import H5pyDeprecationWarning
from h5py import version
import h5py
import h5py._hl.selections as sel
from h5py.tests.common import NUMPY_RELEASE_VERSION

class BaseDataset(TestCase):
    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()


class TestRepr(BaseDataset):
    """
        Feature: repr(Dataset) behaves sensibly
    """

    def test_repr_open(self):
        """ repr() works on live and dead datasets """
        ds = self.f.create_dataset('foo', (4,))
        self.assertIsInstance(repr(ds), str)
        self.f.close()
        self.assertIsInstance(repr(ds), str)


class TestCreateShape(BaseDataset):

    """
        Feature: Datasets can be created from a shape only
    """

    def test_create_scalar(self):
        """ Create a scalar dataset """
        dset = self.f.create_dataset('foo', ())
        self.assertEqual(dset.shape, ())

    def test_create_simple(self):
        """ Create a size-1 dataset """
        dset = self.f.create_dataset('foo', (1,))
        self.assertEqual(dset.shape, (1,))

    def test_create_integer(self):
        """ Create a size-1 dataset with integer shape"""
        dset = self.f.create_dataset('foo', 1)
        self.assertEqual(dset.shape, (1,))

    def test_create_extended(self):
        """ Create an extended dataset """
        dset = self.f.create_dataset('foo', (63,))
        self.assertEqual(dset.shape, (63,))
        self.assertEqual(dset.size, 63)
        dset = self.f.create_dataset('bar', (6, 10))
        self.assertEqual(dset.shape, (6, 10))
        self.assertEqual(dset.size, (60))

    def test_create_integer_extended(self):
        """ Create an extended dataset """
        dset = self.f.create_dataset('foo', 63)
        self.assertEqual(dset.shape, (63,))
        self.assertEqual(dset.size, 63)
        dset = self.f.create_dataset('bar', (6, 10))
        self.assertEqual(dset.shape, (6, 10))
        self.assertEqual(dset.size, (60))

    def test_default_dtype(self):
        """ Confirm that the default dtype is float """
        dset = self.f.create_dataset('foo', (63,))
        self.assertEqual(dset.dtype, np.dtype('=f4'))

    def test_missing_shape(self):
        """ Missing shape raises TypeError """
        with self.assertRaises(TypeError):
            self.f.create_dataset('foo')

    def test_long_double(self):
        """ Confirm that the default dtype is float """
        dset = self.f.create_dataset('foo', (63,), dtype=np.longdouble)
        if platform.machine() in ['ppc64le']:
            pytest.xfail("Storage of long double deactivated on %s" % platform.machine())
        self.assertEqual(dset.dtype, np.longdouble)

    @ut.skipIf(not hasattr(np, "complex256"), "No support for complex256")
    def test_complex256(self):
        """ Confirm that the default dtype is float """
        dset = self.f.create_dataset('foo', (63,),
                                     dtype=np.dtype('complex256'))
        self.assertEqual(dset.dtype, np.dtype('complex256'))

    def test_name_bytes(self):
        dset = self.f.create_dataset(b'foo', (1,))
        self.assertEqual(dset.shape, (1,))

        dset2 = self.f.create_dataset(b'bar/baz', (2,))
        self.assertEqual(dset2.shape, (2,))

class TestCreateData(BaseDataset):

    """
        Feature: Datasets can be created from existing data
    """

    def test_create_scalar(self):
        """ Create a scalar dataset from existing array """
        data = np.ones((), 'f')
        dset = self.f.create_dataset('foo', data=data)
        self.assertEqual(dset.shape, data.shape)

    def test_create_extended(self):
        """ Create an extended dataset from existing data """
        data = np.ones((63,), 'f')
        dset = self.f.create_dataset('foo', data=data)
        self.assertEqual(dset.shape, data.shape)

    def test_dataset_intermediate_group(self):
        """ Create dataset with missing intermediate groups """
        ds = self.f.create_dataset("/foo/bar/baz", shape=(10, 10), dtype=' 0

        si = ds.get_chunk_info_by_coord((0, 0))
        assert si.chunk_offset == (0, 0)
        assert si.filter_mask == 0
        assert si.byte_offset is not None
        assert si.size > 0


@ut.skipUnless(h5py.version.hdf5_version_tuple >= (1, 12, 3) or
               (h5py.version.hdf5_version_tuple >= (1, 10, 10) and h5py.version.hdf5_version_tuple < (1, 10, 99)),
               "chunk iteration requires  HDF5 1.10.10 and later 1.10, or 1.12.3 and later")
def test_chunk_iter():
    """H5Dchunk_iter() for chunk information"""
    from io import BytesIO
    buf = BytesIO()
    with h5py.File(buf, 'w') as f:
        f.create_dataset('test', shape=(100, 100), chunks=(10, 10), dtype='i4')
        f['test'][:] = 1

    buf.seek(0)
    with h5py.File(buf, 'r') as f:
        dsid = f['test'].id

        num_chunks = dsid.get_num_chunks()
        assert num_chunks == 100
        ci = {}
        for j in range(num_chunks):
            si = dsid.get_chunk_info(j)
            ci[si.chunk_offset] = si

        def callback(chunk_info):
            known = ci[chunk_info.chunk_offset]
            assert chunk_info.chunk_offset == known.chunk_offset
            assert chunk_info.filter_mask == known.filter_mask
            assert chunk_info.byte_offset == known.byte_offset
            assert chunk_info.size == known.size

        dsid.chunk_iter(callback)


def test_empty_shape(writable_file):
    ds = writable_file.create_dataset('empty', dtype='int32')
    assert ds.shape is None
    assert ds.maxshape is None


def test_zero_storage_size():
    # https://github.com/h5py/h5py/issues/1475
    from io import BytesIO
    buf = BytesIO()
    with h5py.File(buf, 'w') as fout:
        fout.create_dataset('empty', dtype='uint8')

    buf.seek(0)
    with h5py.File(buf, 'r') as fin:
        assert fin['empty'].chunks is None
        assert fin['empty'].id.get_offset() is None
        assert fin['empty'].id.get_storage_size() == 0


def test_python_int_uint64(writable_file):
    # https://github.com/h5py/h5py/issues/1547
    data = [np.iinfo(np.int64).max, np.iinfo(np.int64).max + 1]

    # Check creating a new dataset
    ds = writable_file.create_dataset('x', data=data, dtype=np.uint64)
    assert ds.dtype == np.dtype(np.uint64)
    np.testing.assert_array_equal(ds[:], np.array(data, dtype=np.uint64))

    # Check writing to an existing dataset
    ds[:] = data
    np.testing.assert_array_equal(ds[:], np.array(data, dtype=np.uint64))


def test_setitem_fancy_indexing(writable_file):
    # https://github.com/h5py/h5py/issues/1593
    arr = writable_file.create_dataset('data', (5, 1000, 2), dtype=np.uint8)
    block = np.random.randint(255, size=(5, 3, 2))
    arr[:, [0, 2, 4], ...] = block


def test_vlen_spacepad():
    with File(get_data_file_path("vlen_string_dset.h5")) as f:
        assert f["DS1"][0] == b"Parting"


def test_vlen_nullterm():
    with File(get_data_file_path("vlen_string_dset_utc.h5")) as f:
        assert f["ds1"][0] == b"2009-12-20T10:16:18.662409Z"


def test_allow_unknown_filter(writable_file):
    # apparently 256-511 are reserved for testing purposes
    fake_filter_id = 256
    ds = writable_file.create_dataset(
        'data', shape=(10, 10), dtype=np.uint8, compression=fake_filter_id,
        allow_unknown_filter=True
    )
    assert str(fake_filter_id) in ds._filters


def test_dset_chunk_cache():
    """Chunk cache configuration for individual datasets."""
    from io import BytesIO
    buf = BytesIO()
    with h5py.File(buf, 'w') as fout:
        ds = fout.create_dataset(
            'x', shape=(10, 20), chunks=(5, 4), dtype='i4',
            rdcc_nbytes=2 * 1024 * 1024, rdcc_w0=0.2, rdcc_nslots=997)
        ds_chunk_cache = ds.id.get_access_plist().get_chunk_cache()
        assert fout.id.get_access_plist().get_cache()[1:] != ds_chunk_cache
        assert ds_chunk_cache == (997, 2 * 1024 * 1024, 0.2)

    buf.seek(0)
    with h5py.File(buf, 'r') as fin:
        ds = fin.require_dataset(
            'x', shape=(10, 20), dtype='i4',
            rdcc_nbytes=3 * 1024 * 1024, rdcc_w0=0.67, rdcc_nslots=709)
        ds_chunk_cache = ds.id.get_access_plist().get_chunk_cache()
        assert fin.id.get_access_plist().get_cache()[1:] != ds_chunk_cache
        assert ds_chunk_cache == (709, 3 * 1024 * 1024, 0.67)


class TestCommutative(BaseDataset):
    """
    Test the symmetry of operators, at least with the numpy types.
    Issue: https://github.com/h5py/h5py/issues/1947
    """
    def test_numpy_commutative(self,):
        """
        Create a h5py dataset, extract one element convert to numpy
        Check that it returns symmetric response to == and !=
        """
        shape = (100,1)
        dset = self.f.create_dataset("test", shape, dtype=float,
                                     data=np.random.rand(*shape))

        # grab a value from the elements, ie dset[0, 0]
        # check that mask arrays are commutative wrt ==, !=
        val = np.float64(dset[0, 0])

        assert np.all((val == dset) == (dset == val))
        assert np.all((val != dset) == (dset != val))

        # generate sample not in the dset, ie max(dset)+delta
        # check that mask arrays are commutative wrt ==, !=
        delta = 0.001
        nval = np.nanmax(dset)+delta

        assert np.all((nval == dset) == (dset == nval))
        assert np.all((nval != dset) == (dset != nval))

    def test_basetype_commutative(self,):
        """
        Create a h5py dataset and check basetype compatibility.
        Check that operation is symmetric, even if it is potentially
        not meaningful.
        """
        shape = (100,1)
        dset = self.f.create_dataset("test", shape, dtype=float,
                                     data=np.random.rand(*shape))

        # generate float type, sample float(0.)
        # check that operation is symmetric (but potentially meaningless)
        val = float(0.)
        assert (val == dset) == (dset == val)
        assert (val != dset) == (dset != val)

class TestVirtualPrefix(BaseDataset):
    """
    Test setting virtual prefix
    """
    def test_virtual_prefix_create(self):
        shape = (100,1)
        virtual_prefix = "/path/to/virtual"
        dset = self.f.create_dataset("test", shape, dtype=float,
                                     data=np.random.rand(*shape),
                                     virtual_prefix = virtual_prefix)

        virtual_prefix_readback = pathlib.Path(dset.id.get_access_plist().get_virtual_prefix().decode()).as_posix()
        assert virtual_prefix_readback == virtual_prefix

    def test_virtual_prefix_require(self):
        virtual_prefix = "/path/to/virtual"
        dset = self.f.require_dataset('foo', (10, 3), 'f', virtual_prefix = virtual_prefix)
        virtual_prefix_readback = pathlib.Path(dset.id.get_access_plist().get_virtual_prefix().decode()).as_posix()
        self.assertEqual(virtual_prefix, virtual_prefix_readback)
        self.assertIsInstance(dset, Dataset)
        self.assertEqual(dset.shape, (10, 3))


def ds_str(file, shape=(10, )):
    dt = h5py.string_dtype(encoding='ascii')
    fill_value = b'fill'
    return file.create_dataset('x', shape, dtype=dt, fillvalue=fill_value)


def ds_fields(file, shape=(10, )):
    dt = np.dtype([
        ('foo', h5py.string_dtype(encoding='ascii')),
        ('bar', np.float64),
    ])
    fill_value = np.asarray(('fill', 0.0), dtype=dt)
    file['x'] = np.broadcast_to(fill_value, shape)
    return file['x']


view_getters = pytest.mark.parametrize(
    "view_getter,make_ds",
    [
        (lambda ds: ds, ds_str),
        (lambda ds: ds.astype(dtype=object), ds_str),
        (lambda ds: ds.asstr(), ds_str),
        (lambda ds: ds.fields("foo"), ds_fields),
    ],
    ids=["ds", "astype", "asstr", "fields"],
)


COPY_IF_NEEDED = False if NUMPY_RELEASE_VERSION < (2, 0) else None

@pytest.mark.parametrize("copy", [True, COPY_IF_NEEDED])
@view_getters
def test_array_copy(view_getter, make_ds, copy, writable_file):
    ds = make_ds(writable_file)
    view = view_getter(ds)
    np.array(view, copy=copy)


@pytest.mark.skipif(
    NUMPY_RELEASE_VERSION < (2, 0),
    reason="forbidding copies requires numpy 2",
)
@view_getters
def test_array_copy_false(view_getter, make_ds, writable_file):
    ds = make_ds(writable_file)
    view = view_getter(ds)
    with pytest.raises(ValueError, match="memory allocation cannot be avoided"):
        np.array(view, copy=False)


@view_getters
def test_array_dtype(view_getter, make_ds, writable_file):
    ds = make_ds(writable_file)
    view = view_getter(ds)
    assert np.array(view, dtype='|S10').dtype == np.dtype('|S10')


@view_getters
def test_array_scalar(view_getter, make_ds, writable_file):
    ds = make_ds(writable_file, shape=())
    view = view_getter(ds)
    assert isinstance(view[()], (bytes, str))
    assert np.array(view).shape == ()


@view_getters
def test_array_nd(view_getter, make_ds, writable_file):
    ds = make_ds(writable_file, shape=(5, 6))
    view = view_getter(ds)
    assert np.array(view).shape == (5, 6)


@view_getters
def test_view_properties(view_getter, make_ds, writable_file):
    ds = make_ds(writable_file, shape=(5, 6))
    view = view_getter(ds)
    assert view.dtype == np.dtype(object)
    assert view.ndim == 2
    assert view.shape == (5, 6)
    assert view.size == 30
    assert len(view) == 5
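
# --- Editorial sketch (not part of the upstream test suite) ------------------
# The view objects exercised above (astype/asstr/fields) defer any conversion
# until data is actually read.  A minimal sketch using an in-memory file; the
# dataset names are illustrative only and the __main__ guard keeps it out of
# pytest collection.
if __name__ == "__main__":
    from io import BytesIO

    buf = BytesIO()
    with h5py.File(buf, "w") as f:
        ds = f.create_dataset(
            "s", data=np.array([b"abc", b"de"], dtype=h5py.string_dtype()))
        print(ds[0])           # b'abc'  -- raw bytes by default
        print(ds.asstr()[0])   # 'abc'   -- decoded lazily on read
        dn = f.create_dataset("n", data=np.arange(5, dtype="i4"))
        print(dn.astype("f8")[:])  # read as float64 without changing storage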
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_dataset_getitem.py0000644000175000017500000004444114507227212022175 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Tests the h5py.Dataset.__getitem__ method.

    This module does not specifically test type conversion.  The "type" axis
    therefore only tests objects which interact with the slicing system in
    unreliable ways; for example, compound and array types.

    See test_dataset_getitem_types for type-conversion tests.

    Tests are organized into TestCases by dataset shape and type.  Test
    methods vary by slicing arg type.

    1. Dataset shape:
        Empty
        Scalar
        1D
        3D

    2. Type:
        Float
        Compound
        Array

    3. Slicing arg types:
        Ellipsis
        Empty tuple
        Regular slice
        MultiBlockSlice
        Indexing
        Index list
        Boolean mask
        Field names
"""

import sys

import numpy as np
import h5py

from .common import ut, TestCase


class TestEmpty(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        sid = h5py.h5s.create(h5py.h5s.NULL)
        tid = h5py.h5t.C_S1.copy()
        tid.set_size(10)
        dsid = h5py.h5d.create(self.f.id, b'x', tid, sid)
        self.dset = h5py.Dataset(dsid)
        self.empty_obj = h5py.Empty(np.dtype("S10"))

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 0)

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, None)

    def test_size(self):
        """ Verify shape """
        self.assertEqual(self.dset.size, None)

    def test_nbytes(self):
        """ Verify nbytes """
        self.assertEqual(self.dset.nbytes, 0)

    def test_ellipsis(self):
        self.assertEqual(self.dset[...], self.empty_obj)

    def test_tuple(self):
        self.assertEqual(self.dset[()], self.empty_obj)

    def test_slice(self):
        """ slice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0:4]

    def test_multi_block_slice(self):
        """ MultiBlockSlice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[h5py.MultiBlockSlice()]

    def test_index(self):
        """ index -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0]

    def test_indexlist(self):
        """ index list -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[[1,2,5]]

    def test_mask(self):
        """ mask -> ValueError """
        mask = np.array(True, dtype='bool')
        with self.assertRaises(ValueError):
            self.dset[mask]

    def test_fieldnames(self):
        """ field name -> ValueError """
        with self.assertRaises(ValueError):
            self.dset['field']


class TestScalarFloat(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.array(42.5, dtype=np.double)
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 0)

    def test_size(self):
        """ Verify size """
        self.assertEqual(self.dset.size, 1)

    def test_nbytes(self):
        """ Verify nbytes """
        self.assertEqual(self.dset.nbytes, self.data.dtype.itemsize)  # not sure if 'f' is always alias for 'f4'

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, tuple())

    def test_ellipsis(self):
        """ Ellipsis -> scalar ndarray """
        out = self.dset[...]
        self.assertArrayEqual(out, self.data)

    def test_tuple(self):
        """ () -> bare item """
        out = self.dset[()]
        self.assertArrayEqual(out, self.data.item())

    def test_slice(self):
        """ slice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0:4]

    def test_multi_block_slice(self):
        """ MultiBlockSlice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[h5py.MultiBlockSlice()]

    def test_index(self):
        """ index -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0]

    # FIXME: NumPy has IndexError instead
    def test_indexlist(self):
        """ index list -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[[1,2,5]]

    # FIXME: NumPy permits this
    def test_mask(self):
        """ mask -> ValueError """
        mask = np.array(True, dtype='bool')
        with self.assertRaises(ValueError):
            self.dset[mask]

    def test_fieldnames(self):
        """ field name -> ValueError (no fields) """
        with self.assertRaises(ValueError):
            self.dset['field']


class TestScalarCompound(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.array((42.5, -118, "Hello"), dtype=[('a', 'f'), ('b', 'i'), ('c', '|S10')])
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 0)

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, tuple())

    def test_size(self):
        """ Verify size """
        self.assertEqual(self.dset.size, 1)

    def test_nbytes(self):
        """ Verify nbytes """
        self.assertEqual(self.dset.nbytes, self.data.dtype.itemsize)

    def test_ellipsis(self):
        """ Ellipsis -> scalar ndarray """
        out = self.dset[...]
        # assertArrayEqual doesn't work with compounds; do manually
        self.assertIsInstance(out, np.ndarray)
        self.assertEqual(out.shape, self.data.shape)
        self.assertEqual(out.dtype, self.data.dtype)

    def test_tuple(self):
        """ () -> np.void instance """
        out = self.dset[()]
        self.assertIsInstance(out, np.void)
        self.assertEqual(out.dtype, self.data.dtype)

    def test_slice(self):
        """ slice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0:4]

    def test_multi_block_slice(self):
        """ MultiBlockSlice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[h5py.MultiBlockSlice()]

    def test_index(self):
        """ index -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0]

    # FIXME: NumPy has IndexError instead
    def test_indexlist(self):
        """ index list -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[[1,2,5]]

    # FIXME: NumPy permits this
    def test_mask(self):
        """ mask -> ValueError  """
        mask = np.array(True, dtype='bool')
        with self.assertRaises(ValueError):
            self.dset[mask]

    # FIXME: NumPy returns a scalar ndarray
    def test_fieldnames(self):
        """ field name -> bare value """
        out = self.dset['a']
        self.assertIsInstance(out, np.float32)
        self.assertEqual(out, self.dset['a'])


class TestScalarArray(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.dt = np.dtype('(3,2)f')
        self.data = np.array([(3.2, -119), (42, 99.8), (3.14, 0)], dtype='f')
        self.dset = self.f.create_dataset('x', (), dtype=self.dt)
        self.dset[...] = self.data

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.data.ndim, 2)
        self.assertEqual(self.dset.ndim, 0)

    def test_size(self):
        """ Verify size """
        self.assertEqual(self.dset.size, 1)

    def test_nbytes(self):
        """ Verify nbytes """
        self.assertEqual(self.dset.nbytes, self.dset.dtype.itemsize)  # not sure if 'f' is always alias for 'f4'

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.data.shape, (3, 2))
        self.assertEqual(self.dset.shape, tuple())

    def test_ellipsis(self):
        """ Ellipsis -> ndarray promoted to underlying shape """
        out = self.dset[...]
        self.assertArrayEqual(out, self.data)

    def test_tuple(self):
        """ () -> same as ellipsis """
        out = self.dset[...]
        self.assertArrayEqual(out, self.data)

    def test_slice(self):
        """ slice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0:4]

    def test_multi_block_slice(self):
        """ MultiBlockSlice -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[h5py.MultiBlockSlice()]

    def test_index(self):
        """ index -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[0]

    def test_indexlist(self):
        """ index list -> ValueError """
        with self.assertRaises(ValueError):
            self.dset[[]]

    def test_mask(self):
        """ mask -> ValueError """
        mask = np.array(True, dtype='bool')
        with self.assertRaises(ValueError):
            self.dset[mask]

    def test_fieldnames(self):
        """ field name -> ValueError (no fields) """
        with self.assertRaises(ValueError):
            self.dset['field']


class Test1DZeroFloat(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.ones((0,), dtype='f')
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 1)

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, (0,))

    def test_ellipsis(self):
        """ Ellipsis -> ndarray of matching shape """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[...])

    def test_tuple(self):
        """ () -> same as ellipsis """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[()])

    def test_slice(self):
        """ slice -> ndarray of shape (0,) """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[0:4])

    def test_slice_stop_less_than_start(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[7:5])

    def test_index(self):
        """ index -> out of range """
        with self.assertRaises(IndexError):
            self.dset[0]

    def test_indexlist(self):
        """ index list """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[[]])

    def test_mask(self):
        """ mask -> ndarray of matching shape """
        mask = np.ones((0,), dtype='bool')
        self.assertNumpyBehavior(
            self.dset,
            self.data,
            np.s_[mask],
            # Fast reader doesn't work with boolean masks
            skip_fast_reader=True,
        )

    def test_fieldnames(self):
        """ field name -> ValueError (no fields) """
        with self.assertRaises(ValueError):
            self.dset['field']


class Test1DFloat(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.arange(13).astype('f')
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 1)

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, (13,))

    def test_ellipsis(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[...])

    def test_tuple(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[()])

    def test_slice_simple(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[0:4])

    def test_slice_zerosize(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[4:4])

    def test_slice_strides(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[1:7:3])

    def test_slice_negindexes(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[-8:-2:3])

    def test_slice_stop_less_than_start(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[7:5])

    def test_slice_outofrange(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[100:400:3])

    def test_slice_backwards(self):
        """ we disallow negative steps """
        with self.assertRaises(ValueError):
            self.dset[::-1]

    def test_slice_zerostride(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[::0])

    def test_index_simple(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[3])

    def test_index_neg(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[-4])

    # FIXME: NumPy permits this... it adds a new axis in front
    def test_index_none(self):
        with self.assertRaises(TypeError):
            self.dset[None]

    def test_index_illegal(self):
        """ Illegal slicing argument """
        with self.assertRaises(TypeError):
            self.dset[{}]

    def test_index_outofrange(self):
        with self.assertRaises(IndexError):
            self.dset[100]

    def test_indexlist_simple(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[[1,2,5]])

    def test_indexlist_numpyarray(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([1, 2, 5])])

    def test_indexlist_single_index_ellipsis(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[[0], ...])

    def test_indexlist_numpyarray_single_index_ellipsis(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([0]), ...])

    def test_indexlist_numpyarray_ellipsis(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[np.array([1, 2, 5]), ...])

    def test_indexlist_empty(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[[]])

    def test_indexlist_outofrange(self):
        with self.assertRaises(IndexError):
            self.dset[[100]]

    def test_indexlist_nonmonotonic(self):
        """ we require index list values to be strictly increasing """
        with self.assertRaises(TypeError):
            self.dset[[1,3,2]]

    def test_indexlist_monotonic_negative(self):
        # This should work: indices are logically increasing
        self.assertNumpyBehavior(self.dset, self.data,  np.s_[[0, 2, -2]])

        with self.assertRaises(TypeError):
            self.dset[[-2, -3]]

    def test_indexlist_repeated(self):
        """ we forbid repeated index values """
        with self.assertRaises(TypeError):
            self.dset[[1,1,2]]

    def test_mask_true(self):
        self.assertNumpyBehavior(
            self.dset,
            self.data,
            np.s_[self.data > -100],
            # Fast reader doesn't work with boolean masks
            skip_fast_reader=True,
        )

    def test_mask_false(self):
        self.assertNumpyBehavior(
            self.dset,
            self.data,
            np.s_[self.data > 100],
            # Fast reader doesn't work with boolean masks
            skip_fast_reader=True,
        )

    def test_mask_partial(self):
        self.assertNumpyBehavior(
            self.dset,
            self.data,
            np.s_[self.data > 5],
            # Fast reader doesn't work with boolean masks
            skip_fast_reader=True,
        )

    def test_mask_wrongsize(self):
        """ we require the boolean mask shape to match exactly """
        with self.assertRaises(TypeError):
            self.dset[np.ones((2,), dtype='bool')]

    def test_fieldnames(self):
        """ field name -> ValueError (no fields) """
        with self.assertRaises(ValueError):
            self.dset['field']


class Test2DZeroFloat(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.ones((0,3), dtype='f')
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 2)

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, (0, 3))

    def test_indexlist(self):
        """ see issue #473 """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[:,[0,1,2]])


class Test2DFloat(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.ones((5,3), dtype='f')
        self.dset = self.f.create_dataset('x', data=self.data)

    def test_ndim(self):
        """ Verify number of dimensions """
        self.assertEqual(self.dset.ndim, 2)

    def test_size(self):
        """ Verify size """
        self.assertEqual(self.dset.size, 15)

    def test_nbytes(self):
        """ Verify nbytes """
        self.assertEqual(self.dset.nbytes, 15*self.data.dtype.itemsize)  # use itemsize, since 'f' may not always be an alias for 'f4'

    def test_shape(self):
        """ Verify shape """
        self.assertEqual(self.dset.shape, (5, 3))

    def test_indexlist(self):
        """ see issue #473 """
        self.assertNumpyBehavior(self.dset, self.data, np.s_[:,[0,1,2]])

    def test_index_emptylist(self):
        self.assertNumpyBehavior(self.dset, self.data, np.s_[:, []])
        self.assertNumpyBehavior(self.dset, self.data, np.s_[[]])


class TestVeryLargeArray(TestCase):

    def setUp(self):
        TestCase.setUp(self)
        self.dset = self.f.create_dataset('x', shape=(2**15, 2**16))

    @ut.skipIf(sys.maxsize < 2**31, 'Maximum integer size >= 2**31 required')
    def test_size(self):
        self.assertEqual(self.dset.size, 2**31)


def test_read_no_fill_value(writable_file):
    # With FILL_TIME_NEVER, HDF5 doesn't write zeros in the output array for
    # unallocated chunks. If we read into uninitialized memory, it can appear
    # to read random values. https://github.com/h5py/h5py/issues/2069
    dcpl = h5py.h5p.create(h5py.h5p.DATASET_CREATE)
    dcpl.set_chunk((1,))
    dcpl.set_fill_time(h5py.h5d.FILL_TIME_NEVER)
    ds = h5py.Dataset(h5py.h5d.create(
        writable_file.id, b'a', h5py.h5t.IEEE_F64LE, h5py.h5s.create_simple((5,)), dcpl
    ))
    np.testing.assert_array_equal(ds[:3], np.zeros(3, np.float64))


class TestBoolIndex(TestCase):
    """
    Tests for indexing with Boolean arrays
    """
    def setUp(self):
        super().setUp()
        self.arr = np.arange(9).reshape(3,-1)
        self.dset = self.f.create_dataset('x', data=self.arr)

    def test_select_first_axis(self):
        sel = np.s_[[False, True, False],:]
        self.assertNumpyBehavior(self.dset, self.arr, sel)

    def test_wrong_size(self):
        sel = np.s_[[False, True, False, False],:]
        with self.assertRaises(TypeError):
            self.dset[sel]
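

# Illustrative sketch (not a test): the two boolean-selection forms exercised
# above.  A mask with exactly the dataset's shape selects individual elements
# and returns a 1-D result; a 1-D boolean sequence selects whole rows along
# that axis.  ``dset`` is assumed to be a small (3, 3) dataset like the one
# built in TestBoolIndex.setUp.
def _bool_selection_sketch(dset):
    mask = np.zeros(dset.shape, dtype=bool)
    mask[0, :] = True
    first_row_elements = dset[mask]              # element-wise, 1-D result
    middle_row = dset[[False, True, False], :]   # row-wise along axis 0
    return first_row_elements, middle_row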
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_dataset_swmr.py0000644000175000017500000000760714507227212021532 0ustar00takluyvertakluyverimport numpy as np
import h5py

from .common import ut, TestCase


class TestDatasetSwmrRead(TestCase):
    """ Testing SWMR functions when reading a dataset.
    Skip this test if the HDF5 library does not have the SWMR features.
    """

    def setUp(self):
        TestCase.setUp(self)
        self.data = np.arange(13).astype('f')
        self.dset = self.f.create_dataset('data', chunks=(13,), maxshape=(None,), data=self.data)
        fname = self.f.filename
        self.f.close()

        self.f = h5py.File(fname, 'r', swmr=True)
        self.dset = self.f['data']

    def test_initial_swmr_mode_on(self):
        """ Verify that the file is initially in SWMR mode"""
        self.assertTrue(self.f.swmr_mode)

    def test_read_data(self):
        self.assertArrayEqual(self.dset, self.data)

    def test_refresh(self):
        self.dset.refresh()

    def test_force_swmr_mode_on_raises(self):
        """ Verify when reading a file cannot be forcibly switched to swmr mode.
        When reading with SWMR the file must be opened with swmr=True."""
        with self.assertRaises(Exception):
            self.f.swmr_mode = True
        self.assertTrue(self.f.swmr_mode)

    def test_force_swmr_mode_off_raises(self):
        """ Switching SWMR write mode off is only possible by closing the file.
        Attempts to forcibly switch off the SWMR mode should raise a ValueError.
        """
        with self.assertRaises(ValueError):
            self.f.swmr_mode = False
        self.assertTrue(self.f.swmr_mode)

class TestDatasetSwmrWrite(TestCase):
    """ Testing SWMR functions when reading a dataset.
    Skip this test if the HDF5 library does not have the SWMR features.
    """

    def setUp(self):
        """ First setup a file with a small chunked and empty dataset.
        No data written yet.
        """

        # Note that when creating the file, swmr=True is not required for
        # writing, but libver='latest' is required.
        self.f = h5py.File(self.mktemp(), 'w', libver='latest')

        self.data = np.arange(4).astype('f')
        self.dset = self.f.create_dataset('data', shape=(0,), dtype=self.data.dtype, chunks=(2,), maxshape=(None,))


    def test_initial_swmr_mode_off(self):
        """ Verify that the file is not initially in SWMR mode"""
        self.assertFalse(self.f.swmr_mode)

    def test_switch_swmr_mode_on(self):
        """ Switch to SWMR mode and verify """
        self.f.swmr_mode = True
        self.assertTrue(self.f.swmr_mode)

    def test_switch_swmr_mode_off_raises(self):
        """ Switching SWMR write mode off is only possible by closing the file.
        Attempts to forcibly switch off the SWMR mode should raise a ValueError.
        """
        self.f.swmr_mode = True
        self.assertTrue(self.f.swmr_mode)
        with self.assertRaises(ValueError):
            self.f.swmr_mode = False
        self.assertTrue(self.f.swmr_mode)

    def test_extend_dset(self):
        """ Extend and flush a SWMR dataset
        """
        self.f.swmr_mode = True
        self.assertTrue(self.f.swmr_mode)

        self.dset.resize( self.data.shape )
        self.dset[:] = self.data
        self.dset.flush()

        # Refresh and read back data for assertion
        self.dset.refresh()
        self.assertArrayEqual(self.dset, self.data)

    def test_extend_dset_multiple(self):

        self.f.swmr_mode = True
        self.assertTrue(self.f.swmr_mode)

        self.dset.resize( (4,) )
        self.dset[0:] = self.data
        self.dset.flush()

        # Refresh and read back 1st data block for assertion
        self.dset.refresh()
        self.assertArrayEqual(self.dset, self.data)

        self.dset.resize( (8,) )
        self.dset[4:] = self.data
        self.dset.flush()

        # Refresh and read back both data blocks for assertion
        self.dset.refresh()
        self.assertArrayEqual(self.dset[0:4], self.data)
        self.assertArrayEqual(self.dset[4:8], self.data)
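

# Illustrative sketch (not part of the test suite): the usual single-writer /
# multiple-reader flow that the tests above exercise piecewise.  In practice
# the reader is normally a separate process; the file path is a placeholder.
def _swmr_flow_sketch(path):
    writer = h5py.File(path, 'w', libver='latest')
    dset = writer.create_dataset('data', shape=(0,), maxshape=(None,),
                                 chunks=(1024,), dtype='f8')
    writer.swmr_mode = True                # readers may attach from now on

    reader = h5py.File(path, 'r', swmr=True)
    rdset = reader['data']

    dset.resize((4,))
    dset[:] = np.arange(4, dtype='f8')
    dset.flush()                           # publish the new data to readers

    rdset.refresh()                        # pick up the new shape and values
    assert rdset.shape == (4,)

    reader.close()
    writer.close()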
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_datatype.py0000644000175000017500000000175714045746670020663 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    File-resident datatype tests.

    Tests "committed" file-resident datatype objects.
"""

import numpy as np

from .common import ut, TestCase

from h5py import Datatype

class TestCreation(TestCase):

    """
        Feature: repr() works sensibly on datatype objects
    """

    def test_repr(self):
        """ repr() on datatype objects """
        self.f['foo'] = np.dtype('S10')
        dt = self.f['foo']
        self.assertIsInstance(repr(dt), str)
        self.f.close()
        self.assertIsInstance(repr(dt), str)


    def test_appropriate_low_level_id(self):
        " Binding a group to a non-TypeID identifier fails with ValueError "
        with self.assertRaises(ValueError):
            Datatype(self.f['/'].id)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_dimension_scales.py0000644000175000017500000001766514507227212022361 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import sys

import numpy as np

from .common import ut, TestCase
from h5py import File, Group, Dataset
import h5py


class BaseDataset(TestCase):

    """
    data is a 3-dimensional dataset with dimensions [z, y, x]

    The z dimension is labeled. It does not have any attached scales.
    The y dimension is not labeled. It has one attached scale.
    The x dimension is labeled. It has two attached scales.

    data2 is a 3-dimensional dataset with no associated dimension scales.
    """

    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.f['data'] = np.ones((4, 3, 2), 'f')
        self.f['data2'] = np.ones((4, 3, 2), 'f')
        self.f['x1'] = np.ones((2), 'f')
        h5py.h5ds.set_scale(self.f['x1'].id)
        h5py.h5ds.attach_scale(self.f['data'].id, self.f['x1'].id, 2)
        self.f['x2'] = np.ones((2), 'f')
        h5py.h5ds.set_scale(self.f['x2'].id, b'x2 name')
        h5py.h5ds.attach_scale(self.f['data'].id, self.f['x2'].id, 2)
        self.f['y1'] = np.ones((3), 'f')
        h5py.h5ds.set_scale(self.f['y1'].id, b'y1 name')
        h5py.h5ds.attach_scale(self.f['data'].id, self.f['y1'].id, 1)
        self.f['z1'] = np.ones((4), 'f')

        h5py.h5ds.set_label(self.f['data'].id, 0, b'z')
        h5py.h5ds.set_label(self.f['data'].id, 2, b'x')

    def tearDown(self):
        if self.f:
            self.f.close()

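# Illustrative sketch (not used by the tests): the same structure that
# BaseDataset.setUp builds with the low-level h5ds bindings, expressed with
# the high-level dimension-scale API.  ``f`` is assumed to be an open,
# writable h5py.File.
def _highlevel_scale_sketch(f):
    f['data'] = np.ones((4, 3, 2), 'f')
    f['x1'] = np.ones((2,), 'f')
    f['x1'].make_scale()                     # anonymous dimension scale
    f['data'].dims[2].attach_scale(f['x1'])
    f['data'].dims[2].label = 'x'            # label the dimension itself
    assert f['data'].dims[2][0] == f['x1']   # attached scales appear in dims
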

class TestH5DSBindings(BaseDataset):

    """
        Feature: Datasets can be created from existing data
    """

    def test_create_dimensionscale(self):
        """ Create a dimension scale from existing dataset """
        self.assertTrue(h5py.h5ds.is_scale(self.f['x1'].id))
        self.assertEqual(h5py.h5ds.get_scale_name(self.f['x1'].id), b'')
        self.assertEqual(self.f['x1'].attrs['CLASS'], b"DIMENSION_SCALE")
        self.assertEqual(h5py.h5ds.get_scale_name(self.f['x2'].id), b'x2 name')

    def test_attach_dimensionscale(self):
        self.assertTrue(
            h5py.h5ds.is_attached(self.f['data'].id, self.f['x1'].id, 2)
            )
        self.assertFalse(
            h5py.h5ds.is_attached(self.f['data'].id, self.f['x1'].id, 1))
        self.assertEqual(h5py.h5ds.get_num_scales(self.f['data'].id, 0), 0)
        self.assertEqual(h5py.h5ds.get_num_scales(self.f['data'].id, 1), 1)
        self.assertEqual(h5py.h5ds.get_num_scales(self.f['data'].id, 2), 2)

    def test_detach_dimensionscale(self):
        self.assertTrue(
            h5py.h5ds.is_attached(self.f['data'].id, self.f['x1'].id, 2)
            )
        h5py.h5ds.detach_scale(self.f['data'].id, self.f['x1'].id, 2)
        self.assertFalse(
            h5py.h5ds.is_attached(self.f['data'].id, self.f['x1'].id, 2)
            )
        self.assertEqual(h5py.h5ds.get_num_scales(self.f['data'].id, 2), 1)

    def test_label_dimensionscale(self):
        self.assertEqual(h5py.h5ds.get_label(self.f['data'].id, 0), b'z')
        self.assertEqual(h5py.h5ds.get_label(self.f['data'].id, 1), b'')
        self.assertEqual(h5py.h5ds.get_label(self.f['data'].id, 2), b'x')

    def test_iter_dimensionscales(self):
        def func(dsid):
            res = h5py.h5ds.get_scale_name(dsid)
            if res == b'x2 name':
                return dsid

        res = h5py.h5ds.iterate(self.f['data'].id, 2, func, 0)
        self.assertEqual(h5py.h5ds.get_scale_name(res), b'x2 name')


class TestDimensionManager(BaseDataset):

    def test_make_scale(self):
        # test recreating or renaming an existing scale:
        self.f['x1'].make_scale(b'foobar')
        self.assertEqual(self.f['data'].dims[2]['foobar'], self.f['x1'])
        # test creating entirely new scale:
        self.f['data2'].make_scale(b'foobaz')
        self.f['data'].dims[2].attach_scale(self.f['data2'])
        self.assertEqual(self.f['data'].dims[2]['foobaz'], self.f['data2'])

    def test_get_dimension(self):
        with self.assertRaises(IndexError):
            self.f['data'].dims[3]

    def test_len(self):
        self.assertEqual(len(self.f['data'].dims), 3)
        self.assertEqual(len(self.f['data2'].dims), 3)

    def test_iter(self):
        dims = self.f['data'].dims
        self.assertEqual(
            [d for d in dims],
            [dims[0], dims[1], dims[2]]
            )

    def test_repr(self):
        ds = self.f.create_dataset('x', (2,3))
        self.assertIsInstance(repr(ds.dims), str)
        self.f.close()
        self.assertIsInstance(repr(ds.dims), str)


class TestDimensionsHighLevel(BaseDataset):

    def test_len(self):
        self.assertEqual(len(self.f['data'].dims[0]), 0)
        self.assertEqual(len(self.f['data'].dims[1]), 1)
        self.assertEqual(len(self.f['data'].dims[2]), 2)
        self.assertEqual(len(self.f['data2'].dims[0]), 0)
        self.assertEqual(len(self.f['data2'].dims[1]), 0)
        self.assertEqual(len(self.f['data2'].dims[2]), 0)

    def test_get_label(self):
        self.assertEqual(self.f['data'].dims[2].label, 'x')
        self.assertEqual(self.f['data'].dims[1].label, '')
        self.assertEqual(self.f['data'].dims[0].label, 'z')
        self.assertEqual(self.f['data2'].dims[2].label, '')
        self.assertEqual(self.f['data2'].dims[1].label, '')
        self.assertEqual(self.f['data2'].dims[0].label, '')

    def test_set_label(self):
        self.f['data'].dims[0].label = 'foo'
        self.assertEqual(self.f['data'].dims[2].label, 'x')
        self.assertEqual(self.f['data'].dims[1].label, '')
        self.assertEqual(self.f['data'].dims[0].label, 'foo')

    def test_detach_scale(self):
        self.f['data'].dims[2].detach_scale(self.f['x1'])
        self.assertEqual(len(self.f['data'].dims[2]), 1)
        self.assertEqual(self.f['data'].dims[2][0], self.f['x2'])
        self.f['data'].dims[2].detach_scale(self.f['x2'])
        self.assertEqual(len(self.f['data'].dims[2]), 0)

    def test_attach_scale(self):
        self.f['x3'] = self.f['x2'][...]
        self.f['data'].dims[2].attach_scale(self.f['x3'])
        self.assertEqual(len(self.f['data'].dims[2]), 3)
        self.assertEqual(self.f['data'].dims[2][2], self.f['x3'])

    def test_get_dimension_scale(self):
        self.assertEqual(self.f['data'].dims[2][0], self.f['x1'])
        with self.assertRaises(RuntimeError):
            self.f['data2'].dims[2][0]
        self.assertEqual(self.f['data'].dims[2][''], self.f['x1'])
        self.assertEqual(self.f['data'].dims[2]['x2 name'], self.f['x2'])

    def test_get_items(self):
        self.assertEqual(
            self.f['data'].dims[2].items(),
            [('', self.f['x1']), ('x2 name', self.f['x2'])]
            )

    def test_get_keys(self):
        self.assertEqual(self.f['data'].dims[2].keys(), ['', 'x2 name'])

    def test_get_values(self):
        self.assertEqual(
            self.f['data'].dims[2].values(),
            [self.f['x1'], self.f['x2']]
            )

    def test_iter(self):
        self.assertEqual([i for i in self.f['data'].dims[2]], ['', 'x2 name'])

    def test_repr(self):
        ds = self.f["data"]
        self.assertEqual(repr(ds.dims[2])[1:16], '"x" dimension 2')
        self.f.close()
        self.assertIsInstance(repr(ds.dims), str)

    def test_attributes(self):
        self.f["data2"].attrs["DIMENSION_LIST"] = self.f["data"].attrs[
            "DIMENSION_LIST"]
        self.assertEqual(len(self.f['data2'].dims[0]), 0)
        self.assertEqual(len(self.f['data2'].dims[1]), 1)
        self.assertEqual(len(self.f['data2'].dims[2]), 2)

    def test_is_scale(self):
        """Test Dataset.is_scale property"""
        self.assertTrue(self.f['x1'].is_scale)
        self.assertTrue(self.f['x2'].is_scale)
        self.assertTrue(self.f['y1'].is_scale)
        self.assertFalse(self.f['z1'].is_scale)
        self.assertFalse(self.f['data'].is_scale)
        self.assertFalse(self.f['data2'].is_scale)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_dims_dimensionproxy.py0000644000175000017500000000113114045746670023135 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Tests the h5py.Dataset.dims.DimensionProxy class.
"""

import numpy as np
import h5py

from .common import ut, TestCase

class TestItems(TestCase):

    def test_empty(self):
        """ no dimension scales -> empty list """
        dset = self.f.create_dataset('x', (10,))
        self.assertEqual(dset.dims[0].items(), [])
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/tests/test_dtype.py0000644000175000017500000004277714675110407020174 0ustar00takluyvertakluyver"""
    Tests for converting between numpy dtypes and h5py data types
"""

from itertools import count
import platform
import numpy as np
import h5py
try:
    import tables
except ImportError:
    tables = None

from .common import ut, TestCase

UNSUPPORTED_LONG_DOUBLE = ('i386', 'i486', 'i586', 'i686', 'ppc64le')
UNSUPPORTED_LONG_DOUBLE_TYPES = ('float96', 'float128', 'complex192',
                                 'complex256')


class TestVlen(TestCase):

    """
        Check that storage of vlen strings is carried out correctly.
    """
    def assertVlenArrayEqual(self, dset, arr, message=None, precision=None):
        assert dset.shape == arr.shape, \
            "Shape mismatch (%s vs %s)%s" % (dset.shape, arr.shape, message)
        for (i, d, a) in zip(count(), dset, arr):
            self.assertArrayEqual(d, a, message, precision)

    def test_compound(self):

        fields = []
        fields.append(('field_1', h5py.string_dtype()))
        fields.append(('field_2', np.int32))
        dt = np.dtype(fields)
        self.f['mytype'] = np.dtype(dt)
        dt_out = self.f['mytype'].dtype.fields['field_1'][0]
        string_inf = h5py.check_string_dtype(dt_out)
        self.assertEqual(string_inf.encoding, 'utf-8')

    def test_compound_vlen_bool(self):
        vidt = h5py.vlen_dtype(np.uint8)
        def a(items):
            return np.array(items, dtype=np.uint8)

        f = self.f

        dt_vb = np.dtype([
            ('foo', vidt),
            ('logical', bool)])
        vb = f.create_dataset('dt_vb', shape=(4,), dtype=dt_vb)
        data = np.array([(a([1, 2, 3]), True),
                         (a([1    ]), False),
                         (a([1, 5  ]), True),
                         (a([],), False), ],
                     dtype=dt_vb)
        vb[:] = data
        actual = f['dt_vb'][:]
        self.assertVlenArrayEqual(data['foo'], actual['foo'])
        self.assertArrayEqual(data['logical'], actual['logical'])

        dt_vv = np.dtype([
            ('foo', vidt),
            ('bar', vidt)])
        f.create_dataset('dt_vv', shape=(4,), dtype=dt_vv)

        dt_vvb = np.dtype([
            ('foo', vidt),
            ('bar', vidt),
            ('logical', bool)])
        vvb = f.create_dataset('dt_vvb', shape=(2,), dtype=dt_vvb)

        dt_bvv = np.dtype([
            ('logical', bool),
            ('foo', vidt),
            ('bar', vidt)])
        bvv = f.create_dataset('dt_bvv', shape=(2,), dtype=dt_bvv)
        data = np.array([(True, a([1, 2, 3]), a([1, 2])),
                         (False, a([]), a([2, 4, 6])), ],
                         dtype=bvv)
        bvv[:] = data
        actual = bvv[:]
        self.assertVlenArrayEqual(data['foo'], actual['foo'])
        self.assertVlenArrayEqual(data['bar'], actual['bar'])
        self.assertArrayEqual(data['logical'], actual['logical'])

    def test_compound_vlen_enum(self):
        eidt = h5py.enum_dtype({'OFF': 0, 'ON': 1}, basetype=np.uint8)
        vidt = h5py.vlen_dtype(np.uint8)
        def a(items):
            return np.array(items, dtype=np.uint8)

        f = self.f

        dt_vve = np.dtype([
            ('foo', vidt),
            ('bar', vidt),
            ('switch', eidt)])
        vve = f.create_dataset('dt_vve', shape=(2,), dtype=dt_vve)
        data = np.array([(a([1, 2, 3]), a([1, 2]), 1),
                         (a([]), a([2, 4, 6]), 0), ],
                         dtype=dt_vve)
        vve[:] = data
        actual = vve[:]
        self.assertVlenArrayEqual(data['foo'], actual['foo'])
        self.assertVlenArrayEqual(data['bar'], actual['bar'])
        self.assertArrayEqual(data['switch'], actual['switch'])

    def test_vlen_enum(self):
        fname = self.mktemp()
        arr1 = [[1], [1, 2]]
        dt1 = h5py.vlen_dtype(h5py.enum_dtype(dict(foo=1, bar=2), 'i'))

        with h5py.File(fname, 'w') as f:
            df1 = f.create_dataset('test', (len(arr1),), dtype=dt1)
            df1[:] = np.array(arr1, dtype=object)

        with h5py.File(fname, 'r') as f:
            df2 = f['test']
            dt2 = df2.dtype
            arr2 = [e.tolist() for e in df2[:]]

        self.assertEqual(arr1, arr2)
        self.assertEqual(h5py.check_enum_dtype(h5py.check_vlen_dtype(dt1)),
                         h5py.check_enum_dtype(h5py.check_vlen_dtype(dt2)))


class TestEmptyVlen(TestCase):
    def test_write_empty_vlen(self):
        fname = self.mktemp()
        with h5py.File(fname, 'w') as f:
            d = np.rec.fromarrays([[], []], names='a,b', formats='|V16,O')
            dset = f.create_dataset('test', data=d, dtype=[('a', '|V16'), ('b', h5py.special_dtype(vlen=np.float64))])
            self.assertEqual(dset.size, 0)


class TestExplicitCast(TestCase):
    def test_f2_casting(self):
        fname = self.mktemp()

        np.random.seed(1)
        A = np.random.rand(1500, 20)

        # Save to HDF5 file
        with h5py.File(fname, "w") as Fid:
            Fid.create_dataset("Data", data=A, dtype='f2')

        with h5py.File(fname, "r") as Fid:
            B = Fid["Data"][:]

        # Compare
        self.assertTrue(np.all(A.astype('f2') == B))


class TestOffsets(TestCase):
    """
        Check that compound members with aligned or manual offsets are handled
        correctly.
    """

    def test_compound_vlen(self):
        vidt = h5py.vlen_dtype(np.uint8)
        eidt = h5py.enum_dtype({'OFF': 0, 'ON': 1}, basetype=np.uint8)

        for np_align in (False, True):
            dt = np.dtype([
                ('a', eidt),
                ('foo', vidt),
                ('bar', vidt),
                ('switch', eidt)], align=np_align)
            np_offsets = [dt.fields[i][1] for i in dt.names]

            for logical in (False, True):
                if logical and np_align:
                    # Vlen types have different size in the numpy struct
                    self.assertRaises(TypeError, h5py.h5t.py_create, dt,
                            logical=logical)
                else:
                    ht = h5py.h5t.py_create(dt, logical=logical)
                    offsets = [ht.get_member_offset(i)
                               for i in range(ht.get_nmembers())]
                    if np_align:
                        self.assertEqual(np_offsets, offsets)

    def test_aligned_offsets(self):
        dt = np.dtype('i4,i8,i2', align=True)
        ht = h5py.h5t.py_create(dt)
        self.assertEqual(dt.itemsize, ht.get_size())
        self.assertEqual(
            [dt.fields[i][1] for i in dt.names],
            [ht.get_member_offset(i) for i in range(ht.get_nmembers())]
        )

    def test_aligned_data(self):
        dt = np.dtype('i4,f8,i2', align=True)
        data = np.zeros(10, dtype=dt)

        data['f0'] = np.array(np.random.randint(-100, 100, size=data.size),
                              dtype='i4')
        data['f1'] = np.random.rand(data.size)
        data['f2'] = np.array(np.random.randint(-100, 100, size=data.size),
                              dtype='i2')

        fname = self.mktemp()

        with h5py.File(fname, 'w') as f:
            f['data'] = data

        with h5py.File(fname, 'r') as f:
            self.assertArrayEqual(f['data'], data)

    def test_compound_robustness(self):
        # Make an out-of-order compound type with gaps in it and a larger itemsize than the minimum.
        # The idea is to be robust to type descriptions we *could* get out of HDF5 files, from custom
        # descriptions of types, in addition to numpy's flaky history on unaligned fields with
        # non-standard or padded layouts.
        fields = [
            ('f0', np.float64, 25),
            ('f1', np.uint64, 9),
            ('f2', np.uint32, 0),
            ('f3', np.uint16, 5)
        ]
        lastfield = fields[np.argmax([ x[2] for x in fields ])]
        itemsize = lastfield[2] + np.dtype(lastfield[1]).itemsize + 6
        extract_index = lambda index, sequence: [ x[index] for x in sequence ]

        dt = np.dtype({
            'names' : extract_index(0, fields),
            'formats' : extract_index(1, fields),
            'offsets' : extract_index(2, fields),
            # 'aligned': False, - already defaults to False
            'itemsize': itemsize
        })

        self.assertTrue(dt.itemsize == itemsize)
        data = np.zeros(10, dtype=dt)

        # Don't trust numpy's struct handling: keep the fields out of band in case
        # content insertion is erroneous (yes, this has also been known to happen).
        f1 = np.array([1 + i * 4 for i in range(data.shape[0])], dtype=dt.fields['f1'][0])
        f2 = np.array([2 + i * 4 for i in range(data.shape[0])], dtype=dt.fields['f2'][0])
        f3 = np.array([3 + i * 4 for i in range(data.shape[0])], dtype=dt.fields['f3'][0])
        f0c = 3.14
        data['f0'] = f0c
        data['f3'] = f3
        data['f1'] = f1
        data['f2'] = f2

        # numpy consistency checks
        self.assertTrue(np.all(data['f0'] == f0c))
        self.assertArrayEqual(data['f3'], f3)
        self.assertArrayEqual(data['f1'], f1)
        self.assertArrayEqual(data['f2'], f2)

        fname = self.mktemp()

        with h5py.File(fname, 'w') as fd:
            fd.create_dataset('data', data=data)

        with h5py.File(fname, 'r') as fd:
            readback = fd['data']
            self.assertTrue(readback.dtype == dt)
            self.assertArrayEqual(readback, data)
            self.assertTrue(np.all(readback['f0'] == f0c))
            self.assertArrayEqual(readback['f1'], f1)
            self.assertArrayEqual(readback['f2'], f2)
            self.assertArrayEqual(readback['f3'], f3)

    def test_out_of_order_offsets(self):
        dt = np.dtype({
            'names' : ['f1', 'f2', 'f3'],
            'formats' : ['<f4', '<i4', '<f8'],
            'offsets' : [0, 16, 8]
        })
        data = np.zeros(10, dtype=dt)
        data['f1'] = np.random.rand(data.size)
        data['f2'] = np.random.randint(-10, 11, data.size)
        data['f3'] = np.random.rand(data.size) * -1

        fname = self.mktemp()

        with h5py.File(fname, 'w') as fd:
            fd.create_dataset('data', data=data)

        with h5py.File(fname, 'r') as fd:
            self.assertArrayEqual(fd['data'], data)


class TestDateTime(TestCase):

    datetime_units = ['Y', 'M', 'D', 'h', 'm', 's', 'ms', 'us', 'ns']

    def test_datetime(self):
        fname = self.mktemp()

        for dt_unit in self.datetime_units:
            for dt_order in ['<', '>']:
                dt_descr = f'{dt_order}M8[{dt_unit}]'
                dt = h5py.opaque_dtype(np.dtype(dt_descr))
                arr = np.array([0], dtype=np.int64).view(dtype=dt)

                with h5py.File(fname, 'w') as f:
                    dset = f.create_dataset("default", data=arr, dtype=dt)
                    self.assertArrayEqual(arr, dset)
                    self.assertEqual(arr.dtype, dset.dtype)

    def test_timedelta(self):
        fname = self.mktemp()

        for dt_unit in self.datetime_units:
            for dt_order in ['<', '>']:
                dt_descr = f'{dt_order}m8[{dt_unit}]'
                dt = h5py.opaque_dtype(np.dtype(dt_descr))
                arr = np.array([np.timedelta64(500, dt_unit)], dtype=dt)

                with h5py.File(fname, 'w') as f:
                    dset = f.create_dataset("default", data=arr, dtype=dt)
                    self.assertArrayEqual(arr, dset)
                    self.assertEqual(arr.dtype, dset.dtype)

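# Illustrative sketch (not a test): NumPy datetime64/timedelta64 values have
# no native HDF5 type, so the TestDateTime cases above store them as raw bytes
# through h5py.opaque_dtype() and read them back with the same dtype.
# ``f`` is assumed to be an open, writable h5py.File.
def _opaque_datetime_sketch(f):
    dt = h5py.opaque_dtype(np.dtype('<M8[s]'))
    arr = np.array([0, 86400], dtype=np.int64).view(dtype=dt)  # two timestamps
    dset = f.create_dataset('stamps', data=arr, dtype=dt)
    assert dset.dtype == dt
    return dset[...]
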
@ut.skipUnless(tables is not None, 'tables is required')
class TestBitfield(TestCase):

    """
    Test H5T_NATIVE_B8 reading
    """

    def test_b8_bool(self):
        arr1 = np.array([False, True], dtype=bool)
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.uint8
        )
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.uint8,
            cast_dtype=np.uint8
        )

    def test_b8_bool_compound(self):
        arr1 = np.array([(False,), (True,)], dtype=np.dtype([('x', '?')]))
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.dtype([('x', 'u1')])
        )
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.dtype([('x', 'u1')]),
            cast_dtype=np.dtype([('x', 'u1')])
        )

    def test_b8_bool_compound_nested(self):
        arr1 = np.array(
            [(True, (True, False)), (True, (False, True))],
            dtype=np.dtype([('x', '?'), ('y', [('a', '?'), ('b', '?')])]),
        )
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.dtype(
                [('x', 'u1'), ('y', [('a', 'u1'), ('b', 'u1')])]
            )
        )
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.dtype(
                [('x', 'u1'), ('y', [('a', 'u1'), ('b', 'u1')])]
            ),
            cast_dtype=np.dtype([('x', 'u1'), ('y', [('a', 'u1'), ('b', 'u1')])]),
        )

    def test_b8_bool_compound_mixed_types(self):
        arr1 = np.array(
            [(True, 0.5), (False, 0.2)], dtype=np.dtype([('x', '?'), ('y', '<f4')])
        )
        self._test_b8(
            arr1,
            expected_default_cast_dtype=np.dtype([('x', 'u1'), ('y', '<f4')])
        )

    @pytest.mark.skipif(h5py.version.hdf5_version_tuple > (1, 14, 3),
                        reason='Requires HDF5 <= 1.14.3')
    def test_too_small_pbs(self):
        """Page buffer size must be greater than file space page size."""
        fname = self.mktemp()
        fsp = 16 * 1024
        with File(fname, mode='w', fs_strategy='page', fs_page_size=fsp):
            pass
        with self.assertRaises(OSError):
            File(fname, mode="r", page_buf_size=fsp-1)

    @pytest.mark.skipif(h5py.version.hdf5_version_tuple < (1, 14, 4),
                        reason='Requires HDF5 >= 1.14.4')
    def test_open_nonpage_pbs(self):
        """Open non-PAGE file with page buffer set."""
        fname = self.mktemp()
        fsp = 16 * 1024
        with File(fname, mode='w'):
            pass
        with File(fname, mode='r', page_buf_size=fsp) as f:
            fapl = f.id.get_access_plist()
            assert fapl.get_page_buffer_size()[0] == 0

    @pytest.mark.skipif(h5py.version.hdf5_version_tuple < (1, 14, 4),
                    reason='Requires HDF5 >= 1.14.4')
    def test_smaller_pbs(self):
        """Adjust page buffer size automatically when smaller than file page."""
        fname = self.mktemp()
        fsp = 16 * 1024
        with File(fname, mode='w', fs_strategy='page', fs_page_size=fsp):
            pass
        with File(fname, mode='r', page_buf_size=fsp-100) as f:
            fapl = f.id.get_access_plist()
            assert fapl.get_page_buffer_size()[0] == fsp

    def test_actual_pbs(self):
        """Verify actual page buffer size."""
        fname = self.mktemp()
        fsp = 16 * 1024
        pbs = 2 * fsp
        with File(fname, mode='w', fs_strategy='page', fs_page_size=fsp):
            pass
        with File(fname, mode='r', page_buf_size=pbs-1) as f:
            fapl = f.id.get_access_plist()
            self.assertEqual(fapl.get_page_buffer_size()[0], fsp)


class TestModes(TestCase):

    """
        Feature: File mode can be retrieved via file.mode
    """

    def test_mode_attr(self):
        """ Mode equivalent can be retrieved via property """
        fname = self.mktemp()
        with File(fname, 'w') as f:
            self.assertEqual(f.mode, 'r+')
        with File(fname, 'r') as f:
            self.assertEqual(f.mode, 'r')

    def test_mode_external(self):
        """ Mode property works for files opened via external links

        Issue 190.
        """
        fname1 = self.mktemp()
        fname2 = self.mktemp()

        f1 = File(fname1, 'w')
        f1.close()

        f2 = File(fname2, 'w')
        try:
            f2['External'] = h5py.ExternalLink(fname1, '/')
            f3 = f2['External'].file
            self.assertEqual(f3.mode, 'r+')
        finally:
            f2.close()
            f3.close()

        f2 = File(fname2, 'r')
        try:
            f3 = f2['External'].file
            self.assertEqual(f3.mode, 'r')
        finally:
            f2.close()
            f3.close()


class TestDrivers(TestCase):

    """
        Feature: Files can be opened with low-level HDF5 drivers. Does not
        include MPI drivers (see bottom).
    """

    @ut.skipUnless(os.name == 'posix', "Stdio driver is supported on posix")
    def test_stdio(self):
        """ Stdio driver is supported on posix """
        fid = File(self.mktemp(), 'w', driver='stdio')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'stdio')
        fid.close()

        # Testing creation with append flag
        fid = File(self.mktemp(), 'a', driver='stdio')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'stdio')
        fid.close()

    @ut.skipUnless(direct_vfd,
                   "DIRECT driver is supported on Linux if hdf5 is "
                   "built with the appriorate flags.")
    def test_direct(self):
        """ DIRECT driver is supported on Linux"""
        fid = File(self.mktemp(), 'w', driver='direct')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'direct')
        default_fapl = fid.id.get_access_plist().get_fapl_direct()
        fid.close()

        # Testing creation with append flag
        fid = File(self.mktemp(), 'a', driver='direct')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'direct')
        fid.close()

        # 2022/02/26: hnmaarrfk
        # I'm actually not too sure of the restrictions on the valid
        # block_sizes and cbuf_sizes on different hardware platforms.
        #
        # I've learned a few things:
        #    * cbuf_size: the copy buffer size must be a multiple of block_size
        #    * alignment: on my platform (x86-64 with an NVMe SSD) it could be
        #      any integer multiple of 512
        #
        # To let HDF5 do the heavy lifting across platforms, we didn't provide
        # any arguments to the first call and obtained HDF5's default values
        # from it.

        # Testing creation with a few different property lists
        for alignment, block_size, cbuf_size in [
                default_fapl,
                (default_fapl[0], default_fapl[1], 3 * default_fapl[1]),
                (default_fapl[0] * 2, default_fapl[1], 3 * default_fapl[1]),
                (default_fapl[0], 2 * default_fapl[1], 6 * default_fapl[1]),
                ]:
            with File(self.mktemp(), 'w', driver='direct',
                      alignment=alignment,
                      block_size=block_size,
                      cbuf_size=cbuf_size) as fid:
                actual_fapl = fid.id.get_access_plist().get_fapl_direct()
                actual_alignment = actual_fapl[0]
                actual_block_size = actual_fapl[1]
                actual_cbuf_size = actual_fapl[2]
                assert actual_alignment == alignment
                assert actual_block_size == block_size
                assert actual_cbuf_size == cbuf_size

    @ut.skipUnless(os.name == 'posix', "Sec2 driver is supported on posix")
    def test_sec2(self):
        """ Sec2 driver is supported on posix """
        fid = File(self.mktemp(), 'w', driver='sec2')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'sec2')
        fid.close()

        # Testing creation with append flag
        fid = File(self.mktemp(), 'a', driver='sec2')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'sec2')
        fid.close()

    def test_core(self):
        """ Core driver is supported (no backing store) """
        fname = self.mktemp()
        fid = File(fname, 'w', driver='core', backing_store=False)
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'core')
        fid.close()
        self.assertFalse(os.path.exists(fname))

        # Testing creation with append flag
        fid = File(self.mktemp(), 'a', driver='core')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'core')
        fid.close()

    def test_backing(self):
        """ Core driver saves to file when backing store used """
        fname = self.mktemp()
        fid = File(fname, 'w', driver='core', backing_store=True)
        fid.create_group('foo')
        fid.close()
        fid = File(fname, 'r')
        assert 'foo' in fid
        fid.close()
        # keywords for other drivers are invalid when using the default driver
        with self.assertRaises(TypeError):
            File(fname, 'w', backing_store=True)

    def test_readonly(self):
        """ Core driver can be used to open existing files """
        fname = self.mktemp()
        fid = File(fname, 'w')
        fid.create_group('foo')
        fid.close()
        fid = File(fname, 'r', driver='core')
        self.assertTrue(fid)
        assert 'foo' in fid
        with self.assertRaises(ValueError):
            fid.create_group('bar')
        fid.close()

    def test_blocksize(self):
        """ Core driver supports variable block size """
        fname = self.mktemp()
        fid = File(fname, 'w', driver='core', block_size=1024,
                   backing_store=False)
        self.assertTrue(fid)
        fid.close()

    def test_split(self):
        """ Split stores metadata in a separate file """
        fname = self.mktemp()
        fid = File(fname, 'w', driver='split')
        fid.close()
        self.assertTrue(os.path.exists(fname + '-m.h5'))
        fid = File(fname, 'r', driver='split')
        self.assertTrue(fid)
        fid.close()

    def test_fileobj(self):
        """ Python file object driver is supported """
        tf = tempfile.TemporaryFile()
        fid = File(tf, 'w', driver='fileobj')
        self.assertTrue(fid)
        self.assertEqual(fid.driver, 'fileobj')
        fid.close()
        # Driver must be 'fileobj' for file-like object if specified
        with self.assertRaises(ValueError):
            File(tf, 'w', driver='core')
        tf.close()

    # TODO: family driver tests


@pytest.mark.skipif(
    h5py.version.hdf5_version_tuple[1] % 2 != 0,
    reason='Not HDF5 release version'
)
class TestNewLibver(TestCase):

    """
        Feature: File format compatibility bounds can be specified when
        opening a file.
    """

    @classmethod
    def setUpClass(cls):
        super().setUpClass()

        # Current latest library bound label
        if h5py.version.hdf5_version_tuple < (1, 11, 4):
            cls.latest = 'v110'
        elif h5py.version.hdf5_version_tuple < (1, 13, 0):
            cls.latest = 'v112'
        else:
            cls.latest = 'v114'

    def test_default(self):
        """ Opening with no libver arg """
        f = File(self.mktemp(), 'w')
        self.assertEqual(f.libver, ('earliest', self.latest))
        f.close()

    def test_single(self):
        """ Opening with single libver arg """
        f = File(self.mktemp(), 'w', libver='latest')
        self.assertEqual(f.libver, (self.latest, self.latest))
        f.close()

    def test_single_v108(self):
        """ Opening with "v108" libver arg """
        f = File(self.mktemp(), 'w', libver='v108')
        self.assertEqual(f.libver, ('v108', self.latest))
        f.close()

    def test_single_v110(self):
        """ Opening with "v110" libver arg """
        f = File(self.mktemp(), 'w', libver='v110')
        self.assertEqual(f.libver, ('v110', self.latest))
        f.close()

    @ut.skipIf(h5py.version.hdf5_version_tuple < (1, 11, 4),
               'Requires HDF5 1.11.4 or later')
    def test_single_v112(self):
        """ Opening with "v112" libver arg """
        f = File(self.mktemp(), 'w', libver='v112')
        self.assertEqual(f.libver, ('v112', self.latest))
        f.close()

    def test_multiple(self):
        """ Opening with two libver args """
        f = File(self.mktemp(), 'w', libver=('earliest', 'v108'))
        self.assertEqual(f.libver, ('earliest', 'v108'))
        f.close()

    def test_none(self):
        """ Omitting libver arg results in maximum compatibility """
        f = File(self.mktemp(), 'w')
        self.assertEqual(f.libver, ('earliest', self.latest))
        f.close()


class TestUserblock(TestCase):

    """
        Feature: Files can be created with user blocks
    """

    def test_create_blocksize(self):
        """ User blocks created with w, w-, x and properties work correctly """
        f = File(self.mktemp(), 'w-', userblock_size=512)
        try:
            self.assertEqual(f.userblock_size, 512)
        finally:
            f.close()

        f = File(self.mktemp(), 'x', userblock_size=512)
        try:
            self.assertEqual(f.userblock_size, 512)
        finally:
            f.close()

        f = File(self.mktemp(), 'w', userblock_size=512)
        try:
            self.assertEqual(f.userblock_size, 512)
        finally:
            f.close()
        # User block size must be an integer
        with self.assertRaises(ValueError):
            File(self.mktemp(), 'w', userblock_size='non')

    def test_write_only(self):
        """ User block only allowed for write """
        name = self.mktemp()
        f = File(name, 'w')
        f.close()

        with self.assertRaises(ValueError):
            f = h5py.File(name, 'r', userblock_size=512)

        with self.assertRaises(ValueError):
            f = h5py.File(name, 'r+', userblock_size=512)

    def test_match_existing(self):
        """ User block size must match that of file when opening for append """
        name = self.mktemp()
        f = File(name, 'w', userblock_size=512)
        f.close()

        with self.assertRaises(ValueError):
            f = File(name, 'a', userblock_size=1024)

        f = File(name, 'a', userblock_size=512)
        try:
            self.assertEqual(f.userblock_size, 512)
        finally:
            f.close()

    def test_power_of_two(self):
        """ User block size must be a power of 2 and at least 512 """
        name = self.mktemp()

        with self.assertRaises(ValueError):
            f = File(name, 'w', userblock_size=128)

        with self.assertRaises(ValueError):
            f = File(name, 'w', userblock_size=513)

        with self.assertRaises(ValueError):
            f = File(name, 'w', userblock_size=1023)

    def test_write_block(self):
        """ Test that writing to a user block does not destroy the file """
        name = self.mktemp()

        f = File(name, 'w', userblock_size=512)
        f.create_group("Foobar")
        f.close()

        pyfile = open(name, 'r+b')
        try:
            pyfile.write(b'X' * 512)
        finally:
            pyfile.close()

        f = h5py.File(name, 'r')
        try:
            assert "Foobar" in f
        finally:
            f.close()

        pyfile = open(name, 'rb')
        try:
            self.assertEqual(pyfile.read(512), b'X' * 512)
        finally:
            pyfile.close()


class TestContextManager(TestCase):

    """
        Feature: File objects can be used as context managers
    """

    def test_context_manager(self):
        """ File objects can be used in with statements """
        with File(self.mktemp(), 'w') as fid:
            self.assertTrue(fid)
        self.assertTrue(not fid)


@ut.skipIf(not UNICODE_FILENAMES, "Filesystem unicode support required")
class TestUnicode(TestCase):

    """
        Feature: Unicode filenames are supported
    """

    def test_unicode(self):
        """ Unicode filenames can be used, and retrieved properly via .filename
        """
        fname = self.mktemp(prefix=chr(0x201a))
        fid = File(fname, 'w')
        try:
            self.assertEqual(fid.filename, fname)
            self.assertIsInstance(fid.filename, str)
        finally:
            fid.close()

    def test_unicode_hdf5_python_consistent(self):
        """ Unicode filenames can be used, and seen correctly from python
        """
        fname = self.mktemp(prefix=chr(0x201a))
        print(h5py.version.info)
        from h5py._hl.compat import WINDOWS_ENCODING
        print("Windows file encoding in use", WINDOWS_ENCODING)
        print(f"Creating {fname!r}")
        with File(fname, 'w') as f:
            print(os.listdir(self.tempdir))
        assert os.path.exists(fname)

    def test_nonexistent_file_unicode(self):
        """
        Modes 'r' and 'r+' do not create files even when given unicode names
        """
        fname = self.mktemp(prefix=chr(0x201a))
        with self.assertRaises(OSError):
            File(fname, 'r')
        with self.assertRaises(OSError):
            File(fname, 'r+')


class TestFileProperty(TestCase):

    """
        Feature: A File object can be retrieved from any child object,
        via the .file property
    """

    def test_property(self):
        """ File object can be retrieved from subgroup """
        fname = self.mktemp()
        hfile = File(fname, 'w')
        try:
            hfile2 = hfile['/'].file
            self.assertEqual(hfile, hfile2)
        finally:
            hfile.close()

    def test_close(self):
        """ All retrieved File objects are closed at the same time """
        fname = self.mktemp()
        hfile = File(fname, 'w')
        grp = hfile.create_group('foo')
        hfile2 = grp.file
        hfile3 = hfile['/'].file
        hfile2.close()
        self.assertFalse(hfile)
        self.assertFalse(hfile2)
        self.assertFalse(hfile3)

    def test_mode(self):
        """ Retrieved File objects have a meaningful mode attribute """
        hfile = File(self.mktemp(), 'w')
        try:
            grp = hfile.create_group('foo')
            self.assertEqual(grp.file.mode, hfile.mode)
        finally:
            hfile.close()


class TestClose(TestCase):

    """
        Feature: Files can be closed
    """

    def test_close(self):
        """ Close file via .close method """
        fid = File(self.mktemp(), 'w')
        self.assertTrue(fid)
        fid.close()
        self.assertFalse(fid)

    def test_closed_file(self):
        """ Trying to modify closed file raises ValueError """
        fid = File(self.mktemp(), 'w')
        fid.close()
        with self.assertRaises(ValueError):
            fid.create_group('foo')

    def test_close_multiple_default_driver(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w')
        f.create_group("test")
        f.close()
        f.close()


class TestFlush(TestCase):

    """
        Feature: Files can be flushed
    """

    def test_flush(self):
        """ Flush via .flush method """
        fid = File(self.mktemp(), 'w')
        fid.flush()
        fid.close()


class TestRepr(TestCase):

    """
        Feature: File objects provide a helpful __repr__ string
    """

    def test_repr(self):
        """ __repr__ behaves itself when files are open and closed """
        fid = File(self.mktemp(), 'w')
        self.assertIsInstance(repr(fid), str)
        fid.close()
        self.assertIsInstance(repr(fid), str)


class TestFilename(TestCase):

    """
        Feature: The name of a File object can be retrieved via .filename
    """

    def test_filename(self):
        """ .filename behaves properly for string data """
        fname = self.mktemp()
        fid = File(fname, 'w')
        try:
            self.assertEqual(fid.filename, fname)
            self.assertIsInstance(fid.filename, str)
        finally:
            fid.close()


class TestCloseInvalidatesOpenObjectIDs(TestCase):

    """
        Ensure that closing a file invalidates object IDs, as appropriate
    """

    def test_close(self):
        """ Closing a file invalidates any of the file's open objects """
        with File(self.mktemp(), 'w') as f1:
            g1 = f1.create_group('foo')
            self.assertTrue(bool(f1.id))
            self.assertTrue(bool(g1.id))
            f1.close()
            self.assertFalse(bool(f1.id))
            self.assertFalse(bool(g1.id))
        with File(self.mktemp(), 'w') as f2:
            g2 = f2.create_group('foo')
            self.assertTrue(bool(f2.id))
            self.assertTrue(bool(g2.id))
            self.assertFalse(bool(f1.id))
            self.assertFalse(bool(g1.id))

    def test_close_one_handle(self):
        fname = self.mktemp()
        with File(fname, 'w') as f:
            f.create_group('foo')

        f1 = File(fname)
        f2 = File(fname)
        g1 = f1['foo']
        g2 = f2['foo']
        assert g1.id.valid
        assert g2.id.valid
        f1.close()
        assert not g1.id.valid
        # Closing f1 shouldn't close f2 or objects belonging to it
        assert f2.id.valid
        assert g2.id.valid

        f2.close()
        assert not f2.id.valid
        assert not g2.id.valid


class TestPathlibSupport(TestCase):

    """
        Check that h5py doesn't break on pathlib
    """
    def test_pathlib_accepted_file(self):
        """ Check that pathlib is accepted by h5py.File """
        with closed_tempfile() as f:
            path = pathlib.Path(f)
            with File(path, 'w') as f2:
                self.assertTrue(True)

    def test_pathlib_name_match(self):
        """ Check that using pathlib does not affect naming """
        with closed_tempfile() as f:
            path = pathlib.Path(f)
            with File(path, 'w') as h5f1:
                pathlib_name = h5f1.filename
            with File(f, 'w') as h5f2:
                normal_name = h5f2.filename
            self.assertEqual(pathlib_name, normal_name)


class TestPickle(TestCase):
    """Check that h5py.File can't be pickled"""
    def test_dump_error(self):
        with File(self.mktemp(), 'w') as f1:
            with self.assertRaises(TypeError):
                pickle.dumps(f1)


# unittest doesn't work with pytest fixtures (and possibly other features),
# hence no subclassing TestCase
@pytest.mark.mpi
class TestMPI:
    def test_mpio(self, mpi_file_name):
        """ MPIO driver and options """
        from mpi4py import MPI

        with File(mpi_file_name, 'w', driver='mpio', comm=MPI.COMM_WORLD) as f:
            assert f
            assert f.driver == 'mpio'

    def test_mpio_append(self, mpi_file_name):
        """ Testing creation of file with append """
        from mpi4py import MPI

        with File(mpi_file_name, 'a', driver='mpio', comm=MPI.COMM_WORLD) as f:
            assert f
            assert f.driver == 'mpio'

    def test_mpi_atomic(self, mpi_file_name):
        """ Enable atomic mode for MPIO driver """
        from mpi4py import MPI

        with File(mpi_file_name, 'w', driver='mpio', comm=MPI.COMM_WORLD) as f:
            assert not f.atomic
            f.atomic = True
            assert f.atomic

    def test_close_multiple_mpio_driver(self, mpi_file_name):
        """ MPIO driver and options """
        from mpi4py import MPI

        f = File(mpi_file_name, 'w', driver='mpio', comm=MPI.COMM_WORLD)
        f.create_group("test")
        f.close()
        f.close()


class TestSWMRMode(TestCase):

    """
        Feature: Create file that switches on SWMR mode
    """

    def test_file_mode_generalizes(self):
        fname = self.mktemp()
        fid = File(fname, 'w', libver='latest')
        g = fid.create_group('foo')
        # fid and group member file attribute should have the same mode
        assert fid.mode == g.file.mode == 'r+'
        fid.swmr_mode = True
        # fid and group member file attribute should still be 'r+'
        # even though file intent has changed
        assert fid.mode == g.file.mode == 'r+'
        fid.close()

    def test_swmr_mode_consistency(self):
        fname = self.mktemp()
        fid = File(fname, 'w', libver='latest')
        g = fid.create_group('foo')
        assert fid.swmr_mode == g.file.swmr_mode == False
        fid.swmr_mode = True
        # This setter should affect both fid and group member file attribute
        assert fid.swmr_mode == g.file.swmr_mode == True
        fid.close()


@pytest.mark.skipif(
    h5py.version.hdf5_version_tuple < (1, 12, 1) and (
    h5py.version.hdf5_version_tuple[:2] != (1, 10) or h5py.version.hdf5_version_tuple[2] < 7),
    reason="Requires HDF5 >= 1.12.1 or 1.10.x >= 1.10.7")
@pytest.mark.skipif("HDF5_USE_FILE_LOCKING" in os.environ,
                    reason="HDF5_USE_FILE_LOCKING env. var. is set")
class TestFileLocking:
    """Test h5py.File file locking option"""

    def test_reopen(self, tmp_path):
        """Test file locking when opening twice the same file"""
        fname = tmp_path / "test.h5"

        with h5py.File(fname, mode="w", locking=True) as f:
            f.flush()

            # Opening same file in same process without locking is expected to fail
            with pytest.raises(OSError):
                with h5py.File(fname, mode="r", locking=False) as h5f_read:
                    pass

            with h5py.File(fname, mode="r", locking=True) as h5f_read:
                pass

            if h5py.version.hdf5_version_tuple < (1, 14, 4):
                with h5py.File(fname, mode="r", locking='best-effort') as h5f_read:
                    pass
            else:
                with pytest.raises(OSError):
                    with h5py.File(fname, mode="r", locking='best-effort') as h5f_read:
                        pass


    def test_unsupported_locking(self, tmp_path):
        """Test with erroneous file locking value"""
        fname = tmp_path / "test.h5"
        with pytest.raises(ValueError):
            with h5py.File(fname, mode="r", locking='unsupported-value') as h5f_read:
                pass

    def test_multiprocess(self, tmp_path):
        """Test file locking option from different concurrent processes"""
        fname = tmp_path / "test.h5"

        def open_in_subprocess(filename, mode, locking):
            """Open HDF5 file in a subprocess and return True on success"""
            h5py_import_dir = str(pathlib.Path(h5py.__file__).parent.parent)

            process = subprocess.run(
                [
                    sys.executable,
                    "-c",
                    f"""
import sys
sys.path.insert(0, {h5py_import_dir!r})
import h5py
f = h5py.File({str(filename)!r}, mode={mode!r}, locking={locking})
                    """,
                ],
                capture_output=True)
            return process.returncode == 0 and not process.stderr

        # Create test file
        with h5py.File(fname, mode="w", locking=True) as f:
            f["data"] = 1

        with h5py.File(fname, mode="r", locking=False) as f:
            # Opening in write mode with locking is expected to work
            assert open_in_subprocess(fname, mode="w", locking=True)


@pytest.mark.skipif(
    h5py.version.hdf5_version_tuple < (1, 14, 4),
    reason="Requires HDF5 >= 1.14.4",
)
@pytest.mark.skipif(
    "HDF5_USE_FILE_LOCKING" in os.environ,
    reason="HDF5_USE_FILE_LOCKING env. var. is set",
)
@pytest.mark.parametrize(
    'locking_arg,file_locking_props',
    [
        (False, (0, 0)),
        (True, (1, 0)),
        ('best-effort', (1, 1)),
    ]
)
def test_file_locking_external_link(tmp_path, locking_arg, file_locking_props):
    """Test that same file locking is used for external link"""
    fname_main = tmp_path / "test_main.h5"
    fname_elink = tmp_path / "test_linked.h5"

    # Create test files
    with h5py.File(fname_elink, "w") as f:
        f["data"] = 1
    with h5py.File(fname_main, "w") as f:
        f["link"] = h5py.ExternalLink(fname_elink, "/data")

    with h5py.File(fname_main, "r", locking=locking_arg) as f:
        locking_info = f.id.get_access_plist().get_file_locking()
        assert locking_info == file_locking_props

        # Test that external link file is also opened without file locking
        elink_dataset = f["link"]
        elink_locking_info = elink_dataset.file.id.get_access_plist().get_file_locking()
        assert elink_locking_info == file_locking_props


def test_close_gc(writable_file):
    # https://github.com/h5py/h5py/issues/1852
    for i in range(100):
        writable_file[str(i)] = []

    filename = writable_file.filename
    writable_file.close()

    # Ensure that Python's garbage collection doesn't interfere with closing
    # a file. Try a few times - the problem is not 100% consistent, but
    # normally showed up on the 1st or 2nd iteration for me. -TAK, 2021
    for i in range(10):
        with h5py.File(filename, 'r') as f:
            refs = [d.id for d in f.values()]
            refs.append(refs)   # Make a reference cycle so GC is involved
            del refs  # GC is likely to fire while closing the file
h5py-3.13.0/h5py/tests/test_file2.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Tests the h5py.File object.
"""

import h5py
from h5py._hl.files import _drivers
from h5py import File

from .common import ut, TestCase

import pytest
import io
import tempfile
import os


def nfiles():
    return h5py.h5f.get_obj_count(h5py.h5f.OBJ_ALL, h5py.h5f.OBJ_FILE)

def ngroups():
    return h5py.h5f.get_obj_count(h5py.h5f.OBJ_ALL, h5py.h5f.OBJ_GROUP)


class TestDealloc(TestCase):

    """
        Behavior on object deallocation.  Note most of this behavior is
        delegated to FileID.
    """

    def test_autoclose(self):
        """ File objects close automatically when out of scope, but
        other objects remain open. """

        start_nfiles = nfiles()
        start_ngroups = ngroups()

        fname = self.mktemp()
        f = h5py.File(fname, 'w')
        g = f['/']

        self.assertEqual(nfiles(), start_nfiles+1)
        self.assertEqual(ngroups(), start_ngroups+1)

        del f

        self.assertTrue(g)
        self.assertEqual(nfiles(), start_nfiles)
        self.assertEqual(ngroups(), start_ngroups+1)

        f = g.file

        self.assertTrue(f)
        self.assertEqual(nfiles(), start_nfiles+1)
        self.assertEqual(ngroups(), start_ngroups+1)

        del g

        self.assertEqual(nfiles(), start_nfiles+1)
        self.assertEqual(ngroups(), start_ngroups)

        del f

        self.assertEqual(nfiles(), start_nfiles)
        self.assertEqual(ngroups(), start_ngroups)


class TestDriverRegistration(TestCase):
    def test_register_driver(self):
        called_with = [None]

        def set_fapl(plist, *args, **kwargs):
            called_with[0] = args, kwargs
            return _drivers['sec2'](plist)

        h5py.register_driver('new-driver', set_fapl)
        self.assertIn('new-driver', h5py.registered_drivers())

        fname = self.mktemp()
        h5py.File(fname, driver='new-driver', driver_arg_0=0, driver_arg_1=1,
                  mode='w')

        self.assertEqual(
            called_with,
            [((), {'driver_arg_0': 0, 'driver_arg_1': 1})],
        )

    def test_unregister_driver(self):
        h5py.register_driver('new-driver', lambda plist: None)
        self.assertIn('new-driver', h5py.registered_drivers())

        h5py.unregister_driver('new-driver')
        self.assertNotIn('new-driver', h5py.registered_drivers())

        with self.assertRaises(ValueError) as e:
            fname = self.mktemp()
            h5py.File(fname, driver='new-driver', mode='w')

        self.assertEqual(str(e.exception), 'Unknown driver type "new-driver"')


class TestCache(TestCase):
    def test_defaults(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w')
        self.assertEqual(list(f.id.get_access_plist().get_cache()),
                         [0, 521, 1048576, 0.75])

    def test_nbytes(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w', rdcc_nbytes=1024)
        self.assertEqual(list(f.id.get_access_plist().get_cache()),
                         [0, 521, 1024, 0.75])

    def test_nslots(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w', rdcc_nslots=125)
        self.assertEqual(list(f.id.get_access_plist().get_cache()),
                         [0, 125, 1048576, 0.75])

    def test_w0(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w', rdcc_w0=0.25)
        self.assertEqual(list(f.id.get_access_plist().get_cache()),
                         [0, 521, 1048576, 0.25])


class TestFileObj(TestCase):

    def check_write(self, fileobj):
        f = h5py.File(fileobj, 'w')
        self.assertEqual(f.driver, 'fileobj')
        self.assertEqual(f.filename, repr(fileobj))
        f.create_dataset('test', data=list(range(12)))
        self.assertEqual(list(f), ['test'])
        self.assertEqual(list(f['test'][:]), list(range(12)))
        f.close()

    def check_read(self, fileobj):
        f = h5py.File(fileobj, 'r')
        self.assertEqual(list(f), ['test'])
        self.assertEqual(list(f['test'][:]), list(range(12)))
        self.assertRaises(Exception, f.create_dataset, 'another.test', data=list(range(3)))
        f.close()

    def test_BytesIO(self):
        with io.BytesIO() as fileobj:
            self.assertEqual(len(fileobj.getvalue()), 0)
            self.check_write(fileobj)
            self.assertGreater(len(fileobj.getvalue()), 0)
            self.check_read(fileobj)

    def test_file(self):
        fname = self.mktemp()
        try:
            with open(fname, 'wb+') as fileobj:
                self.assertEqual(os.path.getsize(fname), 0)
                self.check_write(fileobj)
                self.assertGreater(os.path.getsize(fname), 0)
                self.check_read(fileobj)
            with open(fname, 'rb') as fileobj:
                self.check_read(fileobj)
        finally:
            os.remove(fname)

    @pytest.mark.filterwarnings(
        # on Windows, a resource warning may be emitted
        # when this test returns
        "ignore:unclosed file:ResourceWarning"
    )
    def test_TemporaryFile(self):
        # in this test, we check explicitly that temp file gets
        # automatically deleted upon h5py.File.close()...
        fileobj = tempfile.NamedTemporaryFile()
        fname = fileobj.name
        f = h5py.File(fileobj, 'w')
        del fileobj
        # ... but in your code feel free to simply
        # f = h5py.File(tempfile.TemporaryFile())

        f.create_dataset('test', data=list(range(12)))
        self.assertEqual(list(f), ['test'])
        self.assertEqual(list(f['test'][:]), list(range(12)))
        self.assertTrue(os.path.isfile(fname))
        f.close()
        self.assertFalse(os.path.isfile(fname))

    def test_exception_open(self):
        self.assertRaises(Exception, h5py.File, None,
                          driver='fileobj', mode='x')
        self.assertRaises(Exception, h5py.File, 'rogue',
                          driver='fileobj', mode='x')
        self.assertRaises(Exception, h5py.File, self,
                          driver='fileobj', mode='x')

    def test_exception_read(self):

        class BrokenBytesIO(io.BytesIO):
            def readinto(self, b):
                raise Exception('I am broken')

        f = h5py.File(BrokenBytesIO(), 'w')
        f.create_dataset('test', data=list(range(12)))
        self.assertRaises(Exception, list, f['test'])

    def test_exception_write(self):

        class BrokenBytesIO(io.BytesIO):
            allow_write = False
            def write(self, b):
                if self.allow_write:
                    return super().write(b)
                else:
                    raise Exception('I am broken')

        bio = BrokenBytesIO()
        f = h5py.File(bio, 'w')
        try:
            self.assertRaises(Exception, f.create_dataset, 'test',
                              data=list(range(12)))
        finally:
            # Un-break writing so we can close: errors while closing get messy.
            bio.allow_write = True
            f.close()

    @ut.skip("Incompletely closed files can cause segfaults")
    def test_exception_close(self):
        fileobj = io.BytesIO()
        f = h5py.File(fileobj, 'w')
        fileobj.close()
        self.assertRaises(Exception, f.close)

    def test_exception_writeonly(self):
        # HDF5 expects read & write access to a file it's writing;
        # check that we get the correct exception on a write-only file object.
        fileobj = open(os.path.join(self.tempdir, 'a.h5'), 'wb')
        f = h5py.File(fileobj, 'w')
        group = f.create_group("group")
        with self.assertRaises(io.UnsupportedOperation):
            group.create_dataset("data", data='foo', dtype=h5py.string_dtype())
        f.close()
        fileobj.close()


    def test_method_vanish(self):
        fileobj = io.BytesIO()
        f = h5py.File(fileobj, 'w')
        f.create_dataset('test', data=list(range(12)))
        self.assertEqual(list(f['test'][:]), list(range(12)))
        fileobj.readinto = None
        self.assertRaises(Exception, list, f['test'])


class TestTrackOrder(TestCase):
    def populate(self, f):
        for i in range(100):
            # Mix group and dataset creation.
            if i % 10 == 0:
                f.create_group(str(i))
            else:
                f[str(i)] = [i]

    def test_track_order(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w', track_order=True)  # creation order
        self.populate(f)
        self.assertEqual(list(f), [str(i) for i in range(100)])
        f.close()

        # Check order tracking after reopening the file
        f2 = h5py.File(fname)
        self.assertEqual(list(f2), [str(i) for i in range(100)])

    def test_no_track_order(self):
        fname = self.mktemp()
        f = h5py.File(fname, 'w', track_order=False)  # name alphanumeric
        self.populate(f)
        self.assertEqual(list(f),
                         sorted([str(i) for i in range(100)]))

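
# Illustrative sketch, not a test: the default for ``track_order`` can also be
# set globally through h5py's config object, in which case files created
# without the keyword track creation order. ``fname`` is a hypothetical
# writable path used only for demonstration.
def _example_global_track_order(fname):
    config = h5py.get_config()
    previous = config.track_order
    config.track_order = True
    try:
        with h5py.File(fname, 'w') as f:
            f['b'] = [0]
            f['a'] = [1]
            return list(f)  # ['b', 'a']: creation order, not alphabetical
    finally:
        config.track_order = previous
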

class TestFileMetaBlockSize(TestCase):

    """
        Feature: The meta block size can be manipulated, changing how metadata
        is aggregated and the offset of the first dataset.
    """

    def test_file_create_with_meta_block_size_4096(self):
        # Test a large meta block size of 4 kibibytes
        meta_block_size = 4096
        with File(
            self.mktemp(), 'w',
            meta_block_size=meta_block_size,
            libver="latest"
        ) as f:
            f["test"] = 5
            self.assertEqual(f.meta_block_size, meta_block_size)
            # Equality is expected for HDF5 1.10
            self.assertGreaterEqual(f["test"].id.get_offset(), meta_block_size)

    def test_file_create_with_meta_block_size_512(self):
        # Test a small meta block size of 512 bytes
        # The smallest verifiable meta_block_size is 463
        meta_block_size = 512
        libver = "latest"
        with File(
            self.mktemp(), 'w',
            meta_block_size=meta_block_size,
            libver=libver
        ) as f:
            f["test"] = 3
            self.assertEqual(f.meta_block_size, meta_block_size)
            # Equality is expected for HDF5 1.10
            self.assertGreaterEqual(f["test"].id.get_offset(), meta_block_size)
            # Default meta_block_size is 2048. This should fail if meta_block_size is not set.
            self.assertLess(f["test"].id.get_offset(), meta_block_size*2)
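

# Illustrative sketch, not a test: ``meta_block_size`` sets the minimum size of
# metadata block allocations, which is why the first dataset's raw data starts
# at or beyond that offset in the tests above. ``fname`` is a hypothetical
# writable path used only for demonstration.
def _example_meta_block_size(fname, size=2048):
    with File(fname, 'w', meta_block_size=size, libver='latest') as f:
        f['x'] = 1
        return f.meta_block_size, f['x'].id.get_offset()
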
h5py-3.13.0/h5py/tests/test_file_alignment.py
import h5py
from .common import TestCase


def is_aligned(dataset, offset=4096):
    # Return True if the dataset's offset in the file is a multiple of `offset`
    return dataset.id.get_offset() % offset == 0


def dataset_name(i):
    return f"data{i:03}"

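
# Illustrative sketch, not a test: ``alignment_threshold`` and
# ``alignment_interval`` map onto HDF5's H5Pset_alignment, so any allocation of
# at least ``threshold`` bytes is placed at a file address that is a multiple
# of ``interval``. ``fname`` is a hypothetical writable path used only for
# demonstration.
def _example_aligned_offset(fname, threshold=1000, interval=4096):
    with h5py.File(fname, 'w',
                   alignment_threshold=threshold,
                   alignment_interval=interval) as h5file:
        dataset = h5file.create_dataset('big', (threshold,), dtype='uint8')
        dataset[...] = 0  # instantiate the dataset so it gets a file offset
        return dataset.id.get_offset() % interval  # expected to be 0
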

class TestFileAlignment(TestCase):
    """
        Ensure that setting the file alignment has the desired effect
        in the internal structure.
    """
    def test_no_alignment_set(self):
        fname = self.mktemp()
        # 881 is a prime number, so hopefully this helps randomize the
        # alignment enough. A nice round number might give a pathological
        # case where the data ends up aligned even though we don't want it
        # to be.
        shape = (881,)

        with h5py.File(fname, 'w') as h5file:
            # Create up to 1000 datasets; at least one of them should be
            # misaligned. While this isn't perfect, the only case in which
            # all 1000 datasets get created is one where the data is aligned.
            # Therefore, during correct operation, this test is expected to
            # finish quickly.
            for i in range(1000):
                dataset = h5file.create_dataset(
                    dataset_name(i), shape, dtype='uint8')
                # Assign data so that the dataset is instantiated in
                # the file
                dataset[...] = i
                if not is_aligned(dataset):
                    # Break early asserting that the file is not aligned
                    break
            else:
                raise RuntimeError("Data was all found to be aligned to 4096")

    def test_alignment_set_above_threshold(self):
        # 2022/01/19 hmaarrfk
        # UnitTest (TestCase) doesn't play well with pytest parametrization.
        alignment_threshold = 1000
        alignment_interval = 4096

        for shape in [
            (1033,),  # A prime number above the threshold
            (1000,),  # Exactly equal to the threshold
            (1001,),  # one above the threshold
        ]:
            fname = self.mktemp()
            with h5py.File(fname, 'w',
                           alignment_threshold=alignment_threshold,
                           alignment_interval=alignment_interval) as h5file:
                # Create up to 1000 datasets
                # They are all expected to be aligned
                for i in range(1000):
                    dataset = h5file.create_dataset(
                        dataset_name(i), shape, dtype='uint8')
                    # Assign data so that the dataset is instantiated in
                    # the file
                    dataset[...] = (i % 256)  # Truncate to uint8
                    assert is_aligned(dataset, offset=alignment_interval)

    def test_alignment_set_below_threshold(self):
        # 2022/01/19 hmaarrfk
        # UnitTest (TestCase) doesn't play well with pytest parametrization.
        alignment_threshold = 1000
        alignment_interval = 1024

        for shape in [
            (881,),  # A prime number below the threshold
            (999,),  # Exactly one below the threshold
        ]:
            fname = self.mktemp()
            with h5py.File(fname, 'w',
                           alignment_threshold=alignment_threshold,
                           alignment_interval=alignment_interval) as h5file:
                # Create up to 1000 datasets; at least one of them should be
                # misaligned. While this isn't perfect, the only case in which
                # all 1000 datasets get created is one where the data is
                # aligned. Therefore, during correct operation, this test is
                # expected to finish quickly.
                for i in range(1000):
                    dataset = h5file.create_dataset(
                        dataset_name(i), shape, dtype='uint8')
                    # Assign data so that the dataset is instantiated in
                    # the file
                    dataset[...] = i
                    if not is_aligned(dataset, offset=alignment_interval):
                        # Break early asserting that the file is not aligned
                        break
                else:
                    raise RuntimeError(
                        "Data was all found to be aligned to "
                        f"{alignment_interval}. This is highly unlikely.")
h5py-3.13.0/h5py/tests/test_file_image.py
import numpy as np

import h5py
from h5py import h5f, h5p

from .common import ut, TestCase

class TestFileImage(TestCase):
    def test_load_from_image(self):
        from binascii import a2b_base64
        from zlib import decompress

        compressed_image = 'eJzr9HBx4+WS4mIAAQ4OBhYGAQZk8B8KKjhQ+TD5BCjNCKU7oPQKJpg4I1hOAiouCDUfXV1IkKsrSPV/NACzx4AFQnMwjIKRCDxcHQNAdASUD0ulJ5hQ1ZWkFpeAaFh69KDQXkYGNohZjDA+JCUzMkIEmKHqELQAWKkAByytOoBJViAPJM7ExATWyAE0B8RgZkyAJmlYDoEAIahukJoNU6+HMTA0UOgT6oBgP38XUI6G5UMFZrzKR8EoGAUjGMDKYVgxDSsuAHcfMK8='

        image = decompress(a2b_base64(compressed_image))

        fapl = h5p.create(h5py.h5p.FILE_ACCESS)
        fapl.set_fapl_core()
        fapl.set_file_image(image)

        fid = h5f.open(self.mktemp().encode(), h5py.h5f.ACC_RDONLY, fapl=fapl)
        f = h5py.File(fid)

        self.assertTrue('test' in f)

    def test_open_from_image(self):
        from binascii import a2b_base64
        from zlib import decompress

        compressed_image = 'eJzr9HBx4+WS4mIAAQ4OBhYGAQZk8B8KKjhQ+TD5BCjNCKU7oPQKJpg4I1hOAiouCDUfXV1IkKsrSPV/NACzx4AFQnMwjIKRCDxcHQNAdASUD0ulJ5hQ1ZWkFpeAaFh69KDQXkYGNohZjDA+JCUzMkIEmKHqELQAWKkAByytOoBJViAPJM7ExATWyAE0B8RgZkyAJmlYDoEAIahukJoNU6+HMTA0UOgT6oBgP38XUI6G5UMFZrzKR8EoGAUjGMDKYVgxDSsuAHcfMK8='

        image = decompress(a2b_base64(compressed_image))

        fid = h5f.open_file_image(image)
        f = h5py.File(fid)

        self.assertTrue('test' in f)


def test_in_memory():
    arr = np.arange(10)
    # Passing one fcpl & one fapl parameter to exercise the code splitting them:
    with h5py.File.in_memory(track_order=True, rdcc_nbytes=2_000_000) as f1:
        f1['a'] = arr
        f1.flush()
        img = f1.id.get_file_image()

        # Open while f1 is still open
        with h5py.File.in_memory(img) as f2:
            np.testing.assert_array_equal(f2['a'][:], arr)

    # Reuse image now that previous files are closed
    with h5py.File.in_memory(img) as f3:
        np.testing.assert_array_equal(f3['a'][:], arr)
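

# Illustrative sketch, not a test: a file image is just the bytes of a valid
# HDF5 file, so an image captured with get_file_image() can also be written to
# disk and reopened as an ordinary file. ``path`` is a hypothetical output
# filename used only for demonstration.
def _example_dump_file_image(path):
    with h5py.File.in_memory() as f:
        f['x'] = np.arange(3)
        f.flush()
        img = f.id.get_file_image()
    with open(path, 'wb') as out:
        out.write(img)
    with h5py.File(path, 'r') as f2:
        return f2['x'][:]
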
h5py-3.13.0/h5py/tests/test_filters.py
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Tests the h5py._hl.filters module.

"""
import os
import numpy as np
import h5py

from .common import ut, TestCase


class TestFilters(TestCase):

    def setUp(self):
        """ like TestCase.setUp but also store the file path """
        self.path = self.mktemp()
        self.f = h5py.File(self.path, 'w')

    @ut.skipUnless(h5py.h5z.filter_avail(h5py.h5z.FILTER_SZIP), 'szip filter required')
    def test_wr_szip_fletcher32_64bit(self):
        """ test combination of szip, fletcher32, and 64bit arrays

        The fletcher32 checksum must be computed after the szip
        compression is applied.

        References:
        - GitHub issue #953
        - https://lists.hdfgroup.org/pipermail/
          hdf-forum_lists.hdfgroup.org/2018-January/010753.html
        """
        self.f.create_dataset("test_data",
                              data=np.zeros(10000, dtype=np.float64),
                              fletcher32=True,
                              compression="szip",
                              )
        self.f.close()

        with h5py.File(self.path, "r") as h5:
            # Access the data which will compute the fletcher32
            # checksum and raise an OSError if something is wrong.
            h5["test_data"][0]

    def test_wr_scaleoffset_fletcher32(self):
        """ make sure that scaleoffset + fletcher32 is prevented
        """
        data = np.linspace(0, 1, 100)
        with self.assertRaises(ValueError):
            self.f.create_dataset("test_data",
                                  data=data,
                                  fletcher32=True,
                                  # retain 3 digits after the decimal point
                                  scaleoffset=3,
                                  )


@ut.skipIf('gzip' not in h5py.filters.encode, "DEFLATE is not installed")
def test_filter_ref_obj(writable_file):
    gzip8 = h5py.filters.Gzip(level=8)
    # **kwargs unpacking (compatible with earlier h5py versions)
    assert dict(**gzip8) == {
        'compression': h5py.h5z.FILTER_DEFLATE,
        'compression_opts': (8,)
    }

    # Pass object as compression argument (new in h5py 3.0)
    ds = writable_file.create_dataset(
        'x', shape=(100,), dtype=np.uint32, compression=gzip8
    )
    assert ds.compression == 'gzip'
    assert ds.compression_opts == 8


def test_filter_ref_obj_eq():
    gzip8 = h5py.filters.Gzip(level=8)

    assert gzip8 == h5py.filters.Gzip(level=8)
    assert gzip8 != h5py.filters.Gzip(level=7)


@ut.skipIf(not os.getenv('H5PY_TEST_CHECK_FILTERS'),  "H5PY_TEST_CHECK_FILTERS not set")
def test_filters_available():
    assert 'gzip' in h5py.filters.decode
    assert 'gzip' in h5py.filters.encode
    assert 'lzf' in h5py.filters.decode
    assert 'lzf' in h5py.filters.encode
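

# Illustrative sketch, not a test: the classic keyword form is equivalent to
# passing a filter object -- compression='gzip' with compression_opts selects
# the DEFLATE filter at the given level. ``f`` is assumed to be an open,
# writable h5py.File.
def _example_gzip_dataset(f):
    return f.create_dataset('compressed', shape=(1000,), dtype='f4',
                            compression='gzip', compression_opts=4)
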
h5py-3.13.0/h5py/tests/test_group.py
# -*- coding: utf-8 -*-
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Group test module.

    Tests all methods and properties of Group objects, with the following
    exceptions:

    1. Method create_dataset is tested in module test_dataset
"""

import numpy as np
import os
import os.path
import sys
from tempfile import mkdtemp

from collections.abc import MutableMapping

from .common import ut, TestCase
import h5py
from h5py import File, Group, SoftLink, HardLink, ExternalLink
from h5py import Dataset, Datatype
from h5py import h5t
from h5py._hl.compat import filename_encode

# If we can't encode unicode filenames, there's not much point running tests
# which are bound to fail
try:
    filename_encode(u"α")
except UnicodeEncodeError:
    NO_FS_UNICODE = True
else:
    NO_FS_UNICODE = False


class BaseGroup(TestCase):

    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()

class TestCreate(BaseGroup):

    """
        Feature: New groups can be created via .create_group method
    """

    def test_create(self):
        """ Simple .create_group call """
        grp = self.f.create_group('foo')
        self.assertIsInstance(grp, Group)

        grp2 = self.f.create_group(b'bar')
        self.assertIsInstance(grp2, Group)

    def test_create_intermediate(self):
        """ Intermediate groups can be created automatically """
        grp = self.f.create_group('foo/bar/baz')
        self.assertEqual(grp.name, '/foo/bar/baz')

        grp2 = self.f.create_group(b'boo/bar/baz')
        self.assertEqual(grp2.name, '/boo/bar/baz')

    def test_create_exception(self):
        """ Name conflict causes group creation to fail with ValueError """
        self.f.create_group('foo')
        with self.assertRaises(ValueError):
            self.f.create_group('foo')

    def test_unicode(self):
        """ Unicode names are correctly stored """
        name = u"/Name" + chr(0x4500)
        group = self.f.create_group(name)
        self.assertEqual(group.name, name)
        self.assertEqual(group.id.links.get_info(name.encode('utf8')).cset, h5t.CSET_UTF8)

    def test_unicode_default(self):
        """ Unicode names convertible to ASCII are stored as ASCII (issue 239)
        """
        name = u"/Hello, this is a name"
        group = self.f.create_group(name)
        self.assertEqual(group.name, name)
        self.assertEqual(group.id.links.get_info(name.encode('utf8')).cset, h5t.CSET_ASCII)

    def test_type(self):
        """ Names should be strings or bytes """
        with self.assertRaises(TypeError):
            self.f.create_group(1.)

    def test_appropriate_low_level_id(self):
        " Binding a group to a non-group identifier fails with ValueError "
        dset = self.f.create_dataset('foo', [1])
        with self.assertRaises(ValueError):
            Group(dset.id)

class TestDatasetAssignment(BaseGroup):

    """
        Feature: Datasets can be created by direct assignment of data
    """

    def test_ndarray(self):
        """ Dataset auto-creation by direct assignment """
        data = np.ones((4,4),dtype='f')
        self.f['a'] = data
        self.assertIsInstance(self.f['a'], Dataset)
        self.assertArrayEqual(self.f['a'][...], data)

    def test_name_bytes(self):
        data = np.ones((4, 4), dtype='f')
        self.f[b'b'] = data
        self.assertIsInstance(self.f[b'b'], Dataset)

class TestDtypeAssignment(BaseGroup):

    """
        Feature: Named types can be created by direct assignment of dtypes
    """

    def test_dtype(self):
        """ Named type creation """
        dtype = np.dtype('|S10')
        self.f['a'] = dtype
        self.assertIsInstance(self.f['a'], Datatype)
        self.assertEqual(self.f['a'].dtype, dtype)

    def test_name_bytes(self):
        """ Named type creation """
        dtype = np.dtype('|S10')
        self.f[b'b'] = dtype
        self.assertIsInstance(self.f[b'b'], Datatype)


class TestRequire(BaseGroup):

    """
        Feature: Groups can be auto-created, or opened via .require_group
    """

    def test_open_existing(self):
        """ Existing group is opened and returned """
        grp = self.f.create_group('foo')
        grp2 = self.f.require_group('foo')
        self.assertEqual(grp2, grp)

        grp3 = self.f.require_group(b'foo')
        self.assertEqual(grp3, grp)

    def test_create(self):
        """ Group is created if it doesn't exist """
        grp = self.f.require_group('foo')
        self.assertIsInstance(grp, Group)
        self.assertEqual(grp.name, '/foo')

    def test_require_exception(self):
        """ Opening conflicting object results in TypeError """
        self.f.create_dataset('foo', (1,), 'f')
        with self.assertRaises(TypeError):
            self.f.require_group('foo')

    def test_intermediate_create_dataset(self):
        """ Intermediate is created if it doesn't exist """
        dt = h5py.string_dtype()
        self.f.require_dataset("foo/bar/baz", (1,), dtype=dt)
        group = self.f.get('foo')
        assert isinstance(group, Group)
        group = self.f.get('foo/bar')
        assert isinstance(group, Group)

    def test_intermediate_create_group(self):
        dt = h5py.string_dtype()
        self.f.require_group("foo/bar/baz")
        group = self.f.get('foo')
        assert isinstance(group, Group)
        group = self.f.get('foo/bar')
        assert isinstance(group, Group)
        group = self.f.get('foo/bar/baz')
        assert isinstance(group, Group)

    def test_require_shape(self):
        ds = self.f.require_dataset("foo/resizable", shape=(0, 3), maxshape=(None, 3), dtype=int)
        ds.resize(20, axis=0)
        self.f.require_dataset("foo/resizable", shape=(0, 3), maxshape=(None, 3), dtype=int)
        self.f.require_dataset("foo/resizable", shape=(20, 3), dtype=int)
        with self.assertRaises(TypeError):
            self.f.require_dataset("foo/resizable", shape=(0, 0), maxshape=(3, None), dtype=int)
        with self.assertRaises(TypeError):
            self.f.require_dataset("foo/resizable", shape=(0, 0), maxshape=(None, 5), dtype=int)
        with self.assertRaises(TypeError):
            self.f.require_dataset("foo/resizable", shape=(0, 0), maxshape=(None, 5, 2), dtype=int)
        with self.assertRaises(TypeError):
            self.f.require_dataset("foo/resizable", shape=(10, 3), dtype=int)

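
# Illustrative sketch, not a test: require_dataset() returns the existing
# dataset when its shape and dtype are compatible with the request, and raises
# TypeError when they conflict (as exercised above). ``f`` is assumed to be an
# open, writable h5py.File.
def _example_require_dataset(f):
    first = f.require_dataset('cache', shape=(10,), dtype='f4')
    again = f.require_dataset('cache', shape=(10,), dtype='f4')  # reuses 'cache'
    return first.name == again.name  # True
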

class TestDelete(BaseGroup):

    """
        Feature: Objects can be unlinked via "del" operator
    """

    def test_delete(self):
        """ Object deletion via "del" """
        self.f.create_group('foo')
        self.assertIn('foo', self.f)
        del self.f['foo']
        self.assertNotIn('foo', self.f)

    def test_nonexisting(self):
        """ Deleting non-existent object raises KeyError """
        with self.assertRaises(KeyError):
            del self.f['foo']

    def test_readonly_delete_exception(self):
        """ Deleting object in readonly file raises KeyError """
        # Note: it is impossible to restore the old behavior (ValueError)
        # without breaking the above test (non-existing objects)
        fname = self.mktemp()
        hfile = File(fname, 'w')
        try:
            hfile.create_group('foo')
        finally:
            hfile.close()

        hfile = File(fname, 'r')
        try:
            with self.assertRaises(KeyError):
                del hfile['foo']
        finally:
            hfile.close()

class TestOpen(BaseGroup):

    """
        Feature: Objects can be opened via indexing syntax obj[name]
    """

    def test_open(self):
        """ Simple obj[name] opening """
        grp = self.f.create_group('foo')
        grp2 = self.f['foo']
        grp3 = self.f['/foo']
        self.assertEqual(grp, grp2)
        self.assertEqual(grp, grp3)

    def test_nonexistent(self):
        """ Opening missing objects raises KeyError """
        with self.assertRaises(KeyError):
            self.f['foo']

    def test_reference(self):
        """ Objects can be opened by HDF5 object reference """
        grp = self.f.create_group('foo')
        grp2 = self.f[grp.ref]
        self.assertEqual(grp2, grp)

    def test_reference_numpyobj(self):
        """ Object can be opened by numpy.object_ containing object ref

        Test for issue 181, issue 202.
        """
        g = self.f.create_group('test')

        dt = np.dtype([('a', 'i'),('b', h5py.ref_dtype)])
        dset = self.f.create_dataset('test_dset', (1,), dt)

        dset[0] =(42,g.ref)
        data = dset[0]
        self.assertEqual(self.f[data[1]], g)

    def test_invalid_ref(self):
        """ Invalid region references should raise an exception """

        ref = h5py.h5r.Reference()

        with self.assertRaises(ValueError):
            self.f[ref]

        self.f.create_group('x')
        ref = self.f['x'].ref
        del self.f['x']

        with self.assertRaises(Exception):
            self.f[ref]

    def test_path_type_validation(self):
        """ Access with non bytes or str types should raise an exception """
        self.f.create_group('group')

        with self.assertRaises(TypeError):
            self.f[0]

        with self.assertRaises(TypeError):
            self.f[...]

    # TODO: check that regionrefs also work with __getitem__

class TestRepr(BaseGroup):
    """Opened and closed groups provide a useful __repr__ string"""

    def test_repr(self):
        """ Opened and closed groups provide a useful __repr__ string """
        g = self.f.create_group('foo')
        self.assertIsInstance(repr(g), str)
        g.id._close()
        self.assertIsInstance(repr(g), str)
        g = self.f['foo']
        # Closing the file shouldn't break it
        self.f.close()
        self.assertIsInstance(repr(g), str)

class BaseMapping(BaseGroup):

    """
        Base class for mapping tests
    """
    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.groups = ('a', 'b', 'c', 'd')
        for x in self.groups:
            self.f.create_group(x)
        self.f['x'] = h5py.SoftLink('/mongoose')
        self.groups = self.groups + ('x',)

    def tearDown(self):
        if self.f:
            self.f.close()

class TestLen(BaseMapping):

    """
        Feature: The Python len() function returns the number of groups
    """

    def test_len(self):
        """ len() returns number of group members """
        self.assertEqual(len(self.f), len(self.groups))
        self.f.create_group('e')
        self.assertEqual(len(self.f), len(self.groups)+1)


class TestContains(BaseGroup):

    """
        Feature: The Python "in" builtin tests for membership
    """

    def test_contains(self):
        """ "in" builtin works for membership (byte and Unicode) """
        self.f.create_group('a')
        self.assertIn(b'a', self.f)
        self.assertIn('a', self.f)
        self.assertIn(b'/a', self.f)
        self.assertIn('/a', self.f)
        self.assertNotIn(b'mongoose', self.f)
        self.assertNotIn('mongoose', self.f)

    def test_exc(self):
        """ "in" on closed group returns False (see also issue 174) """
        self.f.create_group('a')
        self.f.close()
        self.assertFalse(b'a' in self.f)
        self.assertFalse('a' in self.f)

    def test_empty(self):
        """ Empty strings work properly and aren't contained """
        self.assertNotIn('', self.f)
        self.assertNotIn(b'', self.f)

    def test_dot(self):
        """ Current group "." is always contained """
        self.assertIn(b'.', self.f)
        self.assertIn('.', self.f)

    def test_root(self):
        """ Root group (by itself) is contained """
        self.assertIn(b'/', self.f)
        self.assertIn('/', self.f)

    def test_trailing_slash(self):
        """ Trailing slashes are unconditionally ignored """
        self.f.create_group('group')
        self.f['dataset'] = 42
        self.assertIn('/group/', self.f)
        self.assertIn('group/', self.f)
        self.assertIn('/dataset/', self.f)
        self.assertIn('dataset/', self.f)

    def test_softlinks(self):
        """ Broken softlinks are contained, but their members are not """
        self.f.create_group('grp')
        self.f['/grp/soft'] = h5py.SoftLink('/mongoose')
        self.f['/grp/external'] = h5py.ExternalLink('mongoose.hdf5', '/mongoose')
        self.assertIn('/grp/soft', self.f)
        self.assertNotIn('/grp/soft/something', self.f)
        self.assertIn('/grp/external', self.f)
        self.assertNotIn('/grp/external/something', self.f)

    def test_oddball_paths(self):
        """ Technically legitimate (but odd-looking) paths """
        self.f.create_group('x/y/z')
        self.f['dset'] = 42
        self.assertIn('/', self.f)
        self.assertIn('//', self.f)
        self.assertIn('///', self.f)
        self.assertIn('.///', self.f)
        self.assertIn('././/', self.f)
        grp = self.f['x']
        self.assertIn('.//x/y/z', self.f)
        self.assertNotIn('.//x/y/z', grp)
        self.assertIn('x///', self.f)
        self.assertIn('./x///', self.f)
        self.assertIn('dset///', self.f)
        self.assertIn('/dset//', self.f)

class TestIter(BaseMapping):

    """
        Feature: You can iterate over group members via "for x in y", etc.
    """

    def test_iter(self):
        """ "for x in y" iteration """
        lst = [x for x in self.f]
        self.assertSameElements(lst, self.groups)

    def test_iter_zero(self):
        """ Iteration works properly for the case with no group members """
        hfile = File(self.mktemp(), 'w')
        try:
            lst = [x for x in hfile]
            self.assertEqual(lst, [])
        finally:
            hfile.close()

class TestTrackOrder(BaseGroup):
    def populate(self, g):
        for i in range(100):
            # Mix group and dataset creation.
            if i % 10 == 0:
                g.create_group(str(i))
            else:
                g[str(i)] = [i]

    def test_track_order(self):
        g = self.f.create_group('order', track_order=True)  # creation order
        self.populate(g)

        ref = [str(i) for i in range(100)]
        self.assertEqual(list(g), ref)
        self.assertEqual(list(reversed(g)), list(reversed(ref)))

    def test_no_track_order(self):
        g = self.f.create_group('order', track_order=False)  # name alphanumeric
        self.populate(g)

        ref = sorted([str(i) for i in range(100)])
        self.assertEqual(list(g), ref)
        self.assertEqual(list(reversed(g)), list(reversed(ref)))

class TestPy3Dict(BaseMapping):

    def test_keys(self):
        """ .keys provides a key view """
        kv = getattr(self.f, 'keys')()
        ref = self.groups
        self.assertSameElements(list(kv), ref)
        self.assertSameElements(list(reversed(kv)), list(reversed(ref)))

        for x in self.groups:
            self.assertIn(x, kv)
        self.assertEqual(len(kv), len(self.groups))

    def test_values(self):
        """ .values provides a value view """
        vv = getattr(self.f, 'values')()
        ref = [self.f.get(x) for x in self.groups]
        self.assertSameElements(list(vv), ref)
        self.assertSameElements(list(reversed(vv)), list(reversed(ref)))

        self.assertEqual(len(vv), len(self.groups))
        for x in self.groups:
            self.assertIn(self.f.get(x), vv)

    def test_items(self):
        """ .items provides an item view """
        iv = getattr(self.f, 'items')()
        ref = [(x,self.f.get(x)) for x in self.groups]
        self.assertSameElements(list(iv), ref)
        self.assertSameElements(list(reversed(iv)), list(reversed(ref)))

        self.assertEqual(len(iv), len(self.groups))
        for x in self.groups:
            self.assertIn((x, self.f.get(x)), iv)

class TestAdditionalMappingFuncs(BaseMapping):
    """
    Feature: Other dict methods (pop, popitem, clear, update, setdefault) are
    available.
    """
    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        for x in ('/test/a', '/test/b', '/test/c', '/test/d'):
            self.f.create_group(x)
        self.group = self.f['test']

    def tearDown(self):
        if self.f:
            self.f.close()

    def test_pop_item(self):
        """.pop_item exists and removes item"""
        key, val = self.group.popitem()
        self.assertNotIn(key, self.group)

    def test_pop(self):
        """.pop exists and removes specified item"""
        self.group.pop('a')
        self.assertNotIn('a', self.group)

    def test_pop_default(self):
        """.pop falls back to default"""
        # e shouldn't exist as a group
        value = self.group.pop('e', None)
        self.assertEqual(value, None)

    def test_pop_raises(self):
        """.pop raises KeyError for non-existence"""
        # e shouldn't exist as a group
        with self.assertRaises(KeyError):
            key = self.group.pop('e')

    def test_clear(self):
        """.clear removes groups"""
        self.group.clear()
        self.assertEqual(len(self.group), 0)

    def test_update_dict(self):
        """.update works with dict"""
        new_items = {'e': np.array([42])}
        self.group.update(new_items)
        self.assertIn('e', self.group)

    def test_update_iter(self):
        """.update works with list"""
        new_items = [
            ('e', np.array([42])),
            ('f', np.array([42]))
        ]
        self.group.update(new_items)
        self.assertIn('e', self.group)

    def test_update_kwargs(self):
        """.update works with kwargs"""
        new_items = {'e': np.array([42])}
        self.group.update(**new_items)
        self.assertIn('e', self.group)

    def test_setdefault(self):
        """.setdefault gets group if it exists"""
        value = self.group.setdefault('a')
        self.assertEqual(value, self.group.get('a'))

    def test_setdefault_with_default(self):
        """.setdefault gets default if group doesn't exist"""
        # e shouldn't exist as a group
        # np.array([42]) is written as the default, creating a dataset rather
        # than a group
        value = self.group.setdefault('e', np.array([42]))
        self.assertEqual(value, 42)

    def test_setdefault_no_default(self):
        """
        .setdefault defaults to None if the group doesn't exist, but since None
        is not valid data for a dataset, this should raise a TypeError.
        """
        # e shouldn't exist as a group
        with self.assertRaises(TypeError):
            self.group.setdefault('e')


class TestGet(BaseGroup):

    """
        Feature: The .get method allows access to objects and metadata
    """

    def test_get_default(self):
        """ Object is returned, or default if it doesn't exist """
        default = object()
        out = self.f.get('mongoose', default)
        self.assertIs(out, default)

        grp = self.f.create_group('a')
        out = self.f.get(b'a')
        self.assertEqual(out, grp)

    def test_get_class(self):
        """ Object class is returned with getclass option """
        self.f.create_group('foo')
        out = self.f.get('foo', getclass=True)
        self.assertEqual(out, Group)

        self.f.create_dataset('bar', (4,))
        out = self.f.get('bar', getclass=True)
        self.assertEqual(out, Dataset)

        self.f['baz'] = np.dtype('|S10')
        out = self.f.get('baz', getclass=True)
        self.assertEqual(out, Datatype)

    def test_get_link_class(self):
        """ Get link classes """
        default = object()

        sl = SoftLink('/mongoose')
        el = ExternalLink('somewhere.hdf5', 'mongoose')

        self.f.create_group('hard')
        self.f['soft'] = sl
        self.f['external'] = el

        out_hl = self.f.get('hard', default, getlink=True, getclass=True)
        out_sl = self.f.get('soft', default, getlink=True, getclass=True)
        out_el = self.f.get('external', default, getlink=True, getclass=True)

        self.assertEqual(out_hl, HardLink)
        self.assertEqual(out_sl, SoftLink)
        self.assertEqual(out_el, ExternalLink)

    def test_get_link(self):
        """ Get link values """
        sl = SoftLink('/mongoose')
        el = ExternalLink('somewhere.hdf5', 'mongoose')

        self.f.create_group('hard')
        self.f['soft'] = sl
        self.f['external'] = el

        out_hl = self.f.get('hard', getlink=True)
        out_sl = self.f.get('soft', getlink=True)
        out_el = self.f.get('external', getlink=True)

        #TODO: redo with SoftLink/ExternalLink built-in equality
        self.assertIsInstance(out_hl, HardLink)
        self.assertIsInstance(out_sl, SoftLink)
        self.assertEqual(out_sl._path, sl._path)
        self.assertIsInstance(out_el, ExternalLink)
        self.assertEqual(out_el._path, el._path)
        self.assertEqual(out_el._filename, el._filename)

class TestVisit(TestCase):

    """
        Feature: The .visit and .visititems methods allow iterative access to
        group and subgroup members
    """

    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.groups = [
            'grp1', 'grp1/sg1', 'grp1/sg2', 'grp2', 'grp2/sg1', 'grp2/sg1/ssg1'
            ]
        for x in self.groups:
            self.f.create_group(x)

    def tearDown(self):
        self.f.close()

    def test_visit(self):
        """ All subgroups are visited """
        l = []
        self.f.visit(l.append)
        self.assertSameElements(l, self.groups)

    def test_visititems(self):
        """ All subgroups and contents are visited """
        l = []
        comp = [(x, self.f[x]) for x in self.groups]
        self.f.visititems(lambda x, y: l.append((x,y)))
        self.assertSameElements(comp, l)

    def test_bailout(self):
        """ Returning a non-None value immediately aborts iteration """
        x = self.f.visit(lambda x: x)
        self.assertEqual(x, self.groups[0])
        x = self.f.visititems(lambda x, y: (x,y))
        self.assertEqual(x, (self.groups[0], self.f[self.groups[0]]))

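
# Illustrative sketch, not a test: visititems() passes (name, object) pairs to
# the callable, so it can be used to collect all datasets below a group.
# ``group`` is assumed to be any open h5py.Group.
def _example_collect_datasets(group):
    found = {}

    def visitor(name, obj):
        if isinstance(obj, Dataset):
            found[name] = obj

    group.visititems(visitor)
    return found

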
class TestVisitLinks(TestCase):
    """
        Feature: The .visit_links and .visititems_links methods allow iterative access to
        links contained in the group and its subgroups.
    """

    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.groups = [
            'grp1', 'grp1/grp11', 'grp1/grp12', 'grp2', 'grp2/grp21', 'grp2/grp21/grp211'
            ]
        self.links = [
            'linkto_grp1', 'grp1/linkto_grp11', 'grp1/linkto_grp12', 'linkto_grp2', 'grp2/linkto_grp21', 'grp2/grp21/linkto_grp211'
        ]
        for g, l in zip(self.groups, self.links):
            self.f.create_group(g)
            self.f[l] = SoftLink(f'/{g}')

    def tearDown(self):
        self.f.close()

    def test_visit_links(self):
        """ All subgroups and links are visited """
        l = []
        self.f.visit_links(l.append)
        self.assertSameElements(l, self.groups + self.links)

    def test_visititems(self):
        """ All links are visited """
        l = []
        comp = [(x, type(self.f.get(x, getlink=True))) for x in self.groups + self.links]
        self.f.visititems_links(lambda x, y: l.append((x, type(y))))
        self.assertSameElements(comp, l)

    def test_bailout(self):
        """ Returning a non-None value immediately aborts iteration """
        x = self.f.visit_links(lambda x: x)
        self.assertEqual(x, self.groups[0])
        x = self.f.visititems_links(lambda x, y: (x,type(y)))
        self.assertEqual(x, (self.groups[0], type(self.f.get(self.groups[0], getlink=True))))


class TestSoftLinks(BaseGroup):

    """
        Feature: Create and manage soft links with the high-level interface
    """

    def test_spath(self):
        """ SoftLink path attribute """
        sl = SoftLink('/foo')
        self.assertEqual(sl.path, '/foo')

    def test_srepr(self):
        """ SoftLink path repr """
        sl = SoftLink('/foo')
        self.assertIsInstance(repr(sl), str)

    def test_create(self):
        """ Create new soft link by assignment """
        g = self.f.create_group('new')
        sl = SoftLink('/new')
        self.f['alias'] = sl
        g2 = self.f['alias']
        self.assertEqual(g, g2)

    def test_exc(self):
        """ Opening dangling soft link results in KeyError """
        self.f['alias'] = SoftLink('new')
        with self.assertRaises(KeyError):
            self.f['alias']

class TestExternalLinks(TestCase):

    """
        Feature: Create and manage external links
    """

    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.ename = self.mktemp()
        self.ef = File(self.ename, 'w')
        self.ef.create_group('external')
        self.ef.close()

    def tearDown(self):
        if self.f:
            self.f.close()
        if self.ef:
            self.ef.close()

    def test_epath(self):
        """ External link paths attributes """
        el = ExternalLink('foo.hdf5', '/foo')
        self.assertEqual(el.filename, 'foo.hdf5')
        self.assertEqual(el.path, '/foo')

    def test_erepr(self):
        """ External link repr """
        el = ExternalLink('foo.hdf5','/foo')
        self.assertIsInstance(repr(el), str)

    def test_create(self):
        """ Creating external links """
        self.f['ext'] = ExternalLink(self.ename, '/external')
        grp = self.f['ext']
        self.ef = grp.file
        self.assertNotEqual(self.ef, self.f)
        self.assertEqual(grp.name, '/external')

    def test_exc(self):
        """ KeyError raised when attempting to open broken link """
        self.f['ext'] = ExternalLink(self.ename, '/missing')
        with self.assertRaises(KeyError):
            self.f['ext']

    # I would prefer OSError but there's no way to fix this as the exception
    # class is determined by HDF5.
    def test_exc_missingfile(self):
        """ KeyError raised when attempting to open missing file """
        self.f['ext'] = ExternalLink('mongoose.hdf5','/foo')
        with self.assertRaises(KeyError):
            self.f['ext']

    def test_close_file(self):
        """ Files opened by accessing external links can be closed

        Issue 189.
        """
        self.f['ext'] = ExternalLink(self.ename, '/')
        grp = self.f['ext']
        f2 = grp.file
        f2.close()
        self.assertFalse(f2)

    @ut.skipIf(NO_FS_UNICODE, "No unicode filename support")
    def test_unicode_encode(self):
        """
        Check that external links encode unicode filenames properly
        Testing issue #732
        """
        ext_filename = os.path.join(mkdtemp(), u"α.hdf5")
        with File(ext_filename, "w") as ext_file:
            ext_file.create_group('external')
        self.f['ext'] = ExternalLink(ext_filename, '/external')

    @ut.skipIf(NO_FS_UNICODE, "No unicode filename support")
    def test_unicode_decode(self):
        """
        Check that external links decode unicode filenames properly
        Testing issue #732
        """
        ext_filename = os.path.join(mkdtemp(), u"α.hdf5")
        with File(ext_filename, "w") as ext_file:
            ext_file.create_group('external')
            ext_file["external"].attrs["ext_attr"] = "test"
        self.f['ext'] = ExternalLink(ext_filename, '/external')
        self.assertEqual(self.f["ext"].attrs["ext_attr"], "test")

    def test_unicode_hdf5_path(self):
        """
        Check that external links handle unicode hdf5 paths properly
        Testing issue #333
        """
        ext_filename = os.path.join(mkdtemp(), "external.hdf5")
        with File(ext_filename, "w") as ext_file:
            ext_file.create_group('α')
            ext_file["α"].attrs["ext_attr"] = "test"
        self.f['ext'] = ExternalLink(ext_filename, '/α')
        self.assertEqual(self.f["ext"].attrs["ext_attr"], "test")

class TestExtLinkBugs(TestCase):

    """
        Bugs: Specific regressions for external links
    """

    def test_issue_212(self):
        """ Issue 212

        Fails with:

        AttributeError: 'SharedConfig' object has no attribute 'lapl'
        """
        def closer(x):
            def w():
                try:
                    if x:
                        x.close()
                except OSError:
                    pass
            return w
        orig_name = self.mktemp()
        new_name = self.mktemp()
        f = File(orig_name, 'w')
        self.addCleanup(closer(f))
        f.create_group('a')
        f.close()

        g = File(new_name, 'w')
        self.addCleanup(closer(g))
        g['link'] = ExternalLink(orig_name, '/')  # note root group
        g.close()

        h = File(new_name, 'r')
        self.addCleanup(closer(h))
        self.assertIsInstance(h['link']['a'], Group)


class TestCopy(TestCase):

    def setUp(self):
        self.f1 = File(self.mktemp(), 'w')
        self.f2 = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f1:
            self.f1.close()
        if self.f2:
            self.f2.close()

    def test_copy_path_to_path(self):
        foo = self.f1.create_group('foo')
        foo['bar'] = [1,2,3]

        self.f1.copy('foo', 'baz')
        baz = self.f1['baz']
        self.assertIsInstance(baz, Group)
        self.assertArrayEqual(baz['bar'], np.array([1,2,3]))

    def test_copy_path_to_group(self):
        foo = self.f1.create_group('foo')
        foo['bar'] = [1,2,3]
        baz = self.f1.create_group('baz')

        self.f1.copy('foo', baz)
        baz = self.f1['baz']
        self.assertIsInstance(baz, Group)
        self.assertArrayEqual(baz['foo/bar'], np.array([1,2,3]))

        self.f1.copy('foo', self.f2['/'])
        self.assertIsInstance(self.f2['/foo'], Group)
        self.assertArrayEqual(self.f2['foo/bar'], np.array([1,2,3]))

    def test_copy_group_to_path(self):

        foo = self.f1.create_group('foo')
        foo['bar'] = [1,2,3]

        self.f1.copy(foo, 'baz')
        baz = self.f1['baz']
        self.assertIsInstance(baz, Group)
        self.assertArrayEqual(baz['bar'], np.array([1,2,3]))

        self.f2.copy(foo, 'foo')
        self.assertIsInstance(self.f2['/foo'], Group)
        self.assertArrayEqual(self.f2['foo/bar'], np.array([1,2,3]))

    def test_copy_group_to_group(self):

        foo = self.f1.create_group('foo')
        foo['bar'] = [1,2,3]
        baz = self.f1.create_group('baz')

        self.f1.copy(foo, baz)
        baz = self.f1['baz']
        self.assertIsInstance(baz, Group)
        self.assertArrayEqual(baz['foo/bar'], np.array([1,2,3]))

        self.f1.copy(foo, self.f2['/'])
        self.assertIsInstance(self.f2['/foo'], Group)
        self.assertArrayEqual(self.f2['foo/bar'], np.array([1,2,3]))

    def test_copy_dataset(self):
        self.f1['foo'] = [1,2,3]
        foo = self.f1['foo']
        grp = self.f1.create_group("grp")

        self.f1.copy(foo, 'bar')
        self.assertArrayEqual(self.f1['bar'], np.array([1,2,3]))

        self.f1.copy('foo', 'baz')
        self.assertArrayEqual(self.f1['baz'], np.array([1,2,3]))

        self.f1.copy(foo, grp)
        self.assertArrayEqual(self.f1['/grp/foo'], np.array([1,2,3]))

        self.f1.copy('foo', self.f2)
        self.assertArrayEqual(self.f2['foo'], np.array([1,2,3]))

        self.f2.copy(self.f1['foo'], self.f2, 'bar')
        self.assertArrayEqual(self.f2['bar'], np.array([1,2,3]))

    def test_copy_shallow(self):

        foo = self.f1.create_group('foo')
        bar = foo.create_group('bar')
        foo['qux'] = [1,2,3]
        bar['quux'] = [4,5,6]

        self.f1.copy(foo, 'baz', shallow=True)
        baz = self.f1['baz']
        self.assertIsInstance(baz, Group)
        self.assertIsInstance(baz['bar'], Group)
        self.assertEqual(len(baz['bar']), 0)
        self.assertArrayEqual(baz['qux'], np.array([1,2,3]))

        self.f2.copy(foo, 'foo', shallow=True)
        self.assertIsInstance(self.f2['/foo'], Group)
        self.assertIsInstance(self.f2['foo/bar'], Group)
        self.assertEqual(len(self.f2['foo/bar']), 0)
        self.assertArrayEqual(self.f2['foo/qux'], np.array([1,2,3]))

    def test_copy_without_attributes(self):

        self.f1['foo'] = [1,2,3]
        foo = self.f1['foo']
        foo.attrs['bar'] = [4,5,6]

        self.f1.copy(foo, 'baz', without_attrs=True)
        self.assertArrayEqual(self.f1['baz'], np.array([1,2,3]))
        assert 'bar' not in self.f1['baz'].attrs

        self.f2.copy(foo, 'baz', without_attrs=True)
        self.assertArrayEqual(self.f2['baz'], np.array([1,2,3]))
        assert 'bar' not in self.f2['baz'].attrs

    def test_copy_soft_links(self):

        self.f1['bar'] = [1, 2, 3]
        foo = self.f1.create_group('foo')
        foo['baz'] = SoftLink('/bar')

        self.f1.copy(foo, 'qux', expand_soft=True)
        self.f2.copy(foo, 'foo', expand_soft=True)
        del self.f1['bar']

        self.assertIsInstance(self.f1['qux'], Group)
        self.assertArrayEqual(self.f1['qux/baz'], np.array([1, 2, 3]))

        self.assertIsInstance(self.f2['/foo'], Group)
        self.assertArrayEqual(self.f2['foo/baz'], np.array([1, 2, 3]))

    def test_copy_external_links(self):

        filename = self.f1.filename
        self.f1['foo'] = [1,2,3]
        self.f2['bar'] = ExternalLink(filename, 'foo')
        self.f1.close()
        self.f1 = None

        self.assertArrayEqual(self.f2['bar'], np.array([1,2,3]))

        self.f2.copy('bar', 'baz', expand_external=True)
        os.unlink(filename)
        self.assertArrayEqual(self.f2['baz'], np.array([1,2,3]))

    def test_copy_refs(self):

        self.f1['foo'] = [1,2,3]
        self.f1['bar'] = [4,5,6]
        foo = self.f1['foo']
        bar = self.f1['bar']
        foo.attrs['bar'] = bar.ref

        self.f1.copy(foo, 'baz', expand_refs=True)
        self.assertArrayEqual(self.f1['baz'], np.array([1,2,3]))
        baz_bar = self.f1['baz'].attrs['bar']
        self.assertArrayEqual(self.f1[baz_bar], np.array([4,5,6]))
        # The reference points to a copy of bar, not to bar itself.
        self.assertNotEqual(self.f1[baz_bar].name, bar.name)

        self.f1.copy('foo', self.f2, 'baz', expand_refs=True)
        self.assertArrayEqual(self.f2['baz'], np.array([1,2,3]))
        baz_bar = self.f2['baz'].attrs['bar']
        self.assertArrayEqual(self.f2[baz_bar], np.array([4,5,6]))

        self.f1.copy('/', self.f2, 'root', expand_refs=True)
        self.assertArrayEqual(self.f2['root/foo'], np.array([1,2,3]))
        self.assertArrayEqual(self.f2['root/bar'], np.array([4,5,6]))
        foo_bar = self.f2['root/foo'].attrs['bar']
        self.assertArrayEqual(self.f2[foo_bar], np.array([4,5,6]))
        # There's only one copy of bar, which the reference points to.
        self.assertEqual(self.f2[foo_bar], self.f2['root/bar'])


class TestMove(BaseGroup):

    """
        Feature: Group.move moves links in a file
    """

    def test_move_hardlink(self):
        """ Moving an object """
        grp = self.f.create_group("X")
        self.f.move("X", "Y")
        self.assertEqual(self.f["Y"], grp)
        self.f.move("Y", "new/nested/path")
        self.assertEqual(self.f['new/nested/path'], grp)

    def test_move_softlink(self):
        """ Moving a soft link """
        self.f['soft'] = h5py.SoftLink("relative/path")
        self.f.move('soft', 'new_soft')
        lnk = self.f.get('new_soft', getlink=True)
        self.assertEqual(lnk.path, "relative/path")

    def test_move_conflict(self):
        """ Move conflict raises ValueError """
        self.f.create_group("X")
        self.f.create_group("Y")
        with self.assertRaises(ValueError):
            self.f.move("X", "Y")

    def test_short_circuit(self):
        ''' Test that a null-move works '''
        self.f.create_group("X")
        self.f.move("X", "X")


class TestMutableMapping(BaseGroup):
    '''Tests if the registration of Group as a MutableMapping
    behaves as expected
    '''
    def test_resolution(self):
        assert issubclass(Group, MutableMapping)
        grp = self.f.create_group("K")
        assert isinstance(grp, MutableMapping)

    def test_validity(self):
        '''
        Test that the required functions are implemented.
        '''
        Group.__getitem__
        Group.__setitem__
        Group.__delitem__
        Group.__iter__
        Group.__len__
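

# Illustrative sketch (not part of the test suite): because Group is registered
# as a collections.abc.MutableMapping, the usual dict-style idioms apply to it.
# The file name below is hypothetical.
def _example_group_mapping_usage(path="example.h5"):
    import h5py
    with h5py.File(path, "w") as f:
        grp = f.create_group("K")
        grp["x"] = [1, 2, 3]           # __setitem__ creates a dataset
        assert "x" in grp              # __contains__
        assert len(grp) == 1           # __len__
        assert list(grp) == ["x"]      # __iter__ yields member names
        del grp["x"]                   # __delitem__ removes the link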
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_h5.py0000644000175000017500000000230014045746670017345 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from h5py import h5

from .common import TestCase

def fixnames():
    cfg = h5.get_config()
    cfg.complex_names = ('r','i')

class TestH5(TestCase):

    def test_config(self):
        cfg = h5.get_config()
        self.assertIsInstance(cfg, h5.H5PYConfig)
        cfg2 = h5.get_config()
        self.assertIs(cfg, cfg2)

    def test_cnames_get(self):
        cfg = h5.get_config()
        self.assertEqual(cfg.complex_names, ('r','i'))

    def test_cnames_set(self):
        self.addCleanup(fixnames)
        cfg = h5.get_config()
        cfg.complex_names = ('q','x')
        self.assertEqual(cfg.complex_names, ('q','x'))

    def test_cnames_set_exc(self):
        self.addCleanup(fixnames)
        cfg = h5.get_config()
        with self.assertRaises(TypeError):
            cfg.complex_names = ('q','i','v')
        self.assertEqual(cfg.complex_names, ('r','i'))

    def test_repr(self):
        cfg = h5.get_config()
        repr(cfg)
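

# Illustrative sketch (not part of the test suite): h5.get_config() returns a
# process-wide singleton, so changing complex_names affects how complex numbers
# map to compound field names for everything written afterwards.  The field
# names chosen below are only an example.
def _example_complex_names():
    from h5py import h5
    cfg = h5.get_config()
    old = cfg.complex_names
    try:
        cfg.complex_names = ('re', 'im')   # store complex parts as 're'/'im'
        # ... create datasets containing complex data here ...
    finally:
        cfg.complex_names = old           # restore the previous setting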
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/h5py/tests/test_h5d_direct_chunk.py0000644000175000017500000001710314675110407022232 0ustar00takluyvertakluyverimport h5py
import numpy
import numpy.testing
import pytest

from .common import ut, TestCase


class TestWriteDirectChunk(TestCase):
    def test_write_direct_chunk(self):

        filename = self.mktemp().encode()
        with h5py.File(filename, "w") as filehandle:

            dataset = filehandle.create_dataset("data", (100, 100, 100),
                                                maxshape=(None, 100, 100),
                                                chunks=(1, 100, 100),
                                                dtype='float32')

            # writing
            array = numpy.zeros((10, 100, 100))
            for index in range(10):
                a = numpy.random.rand(100, 100).astype('float32')
                dataset.id.write_direct_chunk((index, 0, 0), a.tobytes(), filter_mask=1)
                array[index] = a


        # checking
        with h5py.File(filename, "r") as filehandle:
            for i in range(10):
                read_data = filehandle["data"][i]
                numpy.testing.assert_array_equal(array[i], read_data)
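

# Illustrative sketch (not part of the test suite): write_direct_chunk() takes a
# chunk offset in dataset coordinates (a multiple of the chunk shape) plus the
# raw bytes for that chunk; with filter_mask=0 the bytes must already be in the
# form the filter pipeline expects (here there are no filters).  Names are
# hypothetical.
def _example_write_direct_chunk(path="direct_chunk_example.h5"):
    import numpy
    import h5py
    with h5py.File(path, "w") as f:
        dset = f.create_dataset("data", shape=(4, 8), chunks=(1, 8), dtype="f4")
        chunk = numpy.arange(8, dtype="f4")
        # Write the chunk covering row 2 directly, bypassing the slicing machinery.
        dset.id.write_direct_chunk((2, 0), chunk.tobytes(), filter_mask=0)
    with h5py.File(path, "r") as f:
        assert numpy.array_equal(f["data"][2], chunk)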


@ut.skipIf('gzip' not in h5py.filters.encode, "DEFLATE is not installed")
class TestReadDirectChunk(TestCase):
    def test_read_compressed_offsets(self):

        filename = self.mktemp().encode()
        with h5py.File(filename, "w") as filehandle:

            frame = numpy.arange(16).reshape(4, 4)
            frame_dataset = filehandle.create_dataset("frame",
                                                      data=frame,
                                                      compression="gzip",
                                                      compression_opts=9)
            dataset = filehandle.create_dataset("compressed_chunked",
                                                data=[frame, frame, frame],
                                                compression="gzip",
                                                compression_opts=9,
                                                chunks=(1, ) + frame.shape)
            filter_mask, compressed_frame = frame_dataset.id.read_direct_chunk((0, 0))
            # No filters should be disabled (filter_mask == 0)
            self.assertEqual(filter_mask, 0)

            for i in range(dataset.shape[0]):
                filter_mask, data = dataset.id.read_direct_chunk((i, 0, 0))
                self.assertEqual(compressed_frame, data)
                # No filters should be disabled (filter_mask == 0)
                self.assertEqual(filter_mask, 0)

    def test_read_uncompressed_offsets(self):

        filename = self.mktemp().encode()
        frame = numpy.arange(16).reshape(4, 4)
        with h5py.File(filename, "w") as filehandle:
            dataset = filehandle.create_dataset("frame",
                                                maxshape=(1,) + frame.shape,
                                                shape=(1,) + frame.shape,
                                                compression="gzip",
                                                compression_opts=9)
            # Write uncompressed data
            DISABLE_ALL_FILTERS = 0xFFFFFFFF
            dataset.id.write_direct_chunk((0, 0, 0), frame.tobytes(), filter_mask=DISABLE_ALL_FILTERS)

        # FIXME: Here we have to close the file and load it back else
        #     a runtime error occurs:
        #     RuntimeError: Can't get storage size of chunk (chunk storage is not allocated)
        with h5py.File(filename, "r") as filehandle:
            dataset = filehandle["frame"]
            filter_mask, compressed_frame = dataset.id.read_direct_chunk((0, 0, 0))

        # At least 1 filter is supposed to be disabled
        self.assertNotEqual(filter_mask, 0)
        self.assertEqual(compressed_frame, frame.tobytes())

    def test_read_write_chunk(self):

        filename = self.mktemp().encode()
        with h5py.File(filename, "w") as filehandle:

            # create a reference
            frame = numpy.arange(16).reshape(4, 4)
            frame_dataset = filehandle.create_dataset("source",
                                                      data=frame,
                                                      compression="gzip",
                                                      compression_opts=9)
            # configure an empty dataset
            filter_mask, compressed_frame = frame_dataset.id.read_direct_chunk((0, 0))
            dataset = filehandle.create_dataset("created",
                                                shape=frame_dataset.shape,
                                                maxshape=frame_dataset.shape,
                                                chunks=frame_dataset.chunks,
                                                dtype=frame_dataset.dtype,
                                                compression="gzip",
                                                compression_opts=9)

            # copy the data
            dataset.id.write_direct_chunk((0, 0), compressed_frame, filter_mask=filter_mask)

        # checking
        with h5py.File(filename, "r") as filehandle:
            dataset = filehandle["created"][...]
            numpy.testing.assert_array_equal(dataset, frame)


class TestReadDirectChunkToOut:

    def test_uncompressed_data(self, writable_file):
        ref_data = numpy.arange(16).reshape(4, 4)
        dataset = writable_file.create_dataset(
            "uncompressed", data=ref_data, chunks=ref_data.shape)

        out = bytearray(ref_data.nbytes)
        filter_mask, chunk = dataset.id.read_direct_chunk((0, 0), out=out)

        assert numpy.array_equal(
            numpy.frombuffer(out, dtype=ref_data.dtype).reshape(ref_data.shape),
            ref_data,
        )
        assert filter_mask == 0
        assert len(chunk) == ref_data.nbytes

    @pytest.mark.skipif(
        'gzip' not in h5py.filters.encode,
        reason="DEFLATE is not installed",
    )
    def test_compressed_data(self, writable_file):
        ref_data = numpy.arange(16).reshape(4, 4)
        dataset = writable_file.create_dataset(
            "gzip",
            data=ref_data,
            chunks=ref_data.shape,
            compression="gzip",
            compression_opts=9,
        )
        chunk_info = dataset.id.get_chunk_info(0)

        out = bytearray(chunk_info.size)
        filter_mask, chunk = dataset.id.read_direct_chunk(
            chunk_info.chunk_offset,
            out=out,
        )
        assert filter_mask == chunk_info.filter_mask
        assert len(chunk) == chunk_info.size
        assert out == dataset.id.read_direct_chunk(chunk_info.chunk_offset)[1]

    def test_fail_buffer_too_small(self, writable_file):
        ref_data = numpy.arange(16).reshape(4, 4)
        dataset = writable_file.create_dataset(
            "uncompressed", data=ref_data, chunks=ref_data.shape)

        out = bytearray(ref_data.nbytes // 2)
        with pytest.raises(ValueError):
            dataset.id.read_direct_chunk((0, 0), out=out)

    def test_fail_buffer_readonly(self, writable_file):
        ref_data = numpy.arange(16).reshape(4, 4)
        dataset = writable_file.create_dataset(
            "uncompressed", data=ref_data, chunks=ref_data.shape)

        out = bytes(ref_data.nbytes)
        with pytest.raises(BufferError):
            dataset.id.read_direct_chunk((0, 0), out=out)

    def test_fail_buffer_not_contiguous(self, writable_file):
        ref_data = numpy.arange(16).reshape(4, 4)
        dataset = writable_file.create_dataset(
            "uncompressed", data=ref_data, chunks=ref_data.shape)

        array = numpy.empty(ref_data.shape + (2,), dtype=ref_data.dtype)
        out = array[:, :, ::2]  # Array is not contiguous
        with pytest.raises(ValueError):
            dataset.id.read_direct_chunk((0, 0), out=out)
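

# Illustrative sketch (not part of the test suite): reading a raw chunk into a
# caller-provided buffer avoids an extra allocation.  get_chunk_info() reports
# the stored (possibly compressed) size, which tells us how big the buffer must
# be.  It assumes `path` contains a chunked dataset named "data" with at least
# one allocated chunk; names are hypothetical.
def _example_read_chunk_into_buffer(path="some_chunked_file.h5"):
    import h5py
    with h5py.File(path, "r") as f:
        dset = f["data"]
        info = dset.id.get_chunk_info(0)    # first allocated chunk in the file
        out = bytearray(info.size)          # must be writable and large enough
        filter_mask, chunk = dset.id.read_direct_chunk(info.chunk_offset, out=out)
        assert len(chunk) == info.size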
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/test_h5f.py0000644000175000017500000000766214350630273017521 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import tempfile
import shutil
import os
import numpy as np
from h5py import File, special_dtype
from h5py._hl.files import direct_vfd

from .common import ut, TestCase, UNICODE_FILENAMES, closed_tempfile


class TestFileID(TestCase):
    def test_descriptor_core(self):
        with File('TestFileID.test_descriptor_core', driver='core',
                  backing_store=False, mode='x') as f:
            assert isinstance(f.id.get_vfd_handle(), int)

    def test_descriptor_sec2(self):
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestFileID.test_descriptor_sec2')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, driver='sec2', mode='x') as f:
                descriptor = f.id.get_vfd_handle()
                self.assertNotEqual(descriptor, 0)
                os.fsync(descriptor)
        finally:
            shutil.rmtree(dn_tmp)

    @ut.skipUnless(direct_vfd,
                   "DIRECT driver is supported on Linux if hdf5 is "
                   "built with the appriorate flags.")
    def test_descriptor_direct(self):
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestFileID.test_descriptor_direct')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, driver='direct', mode='x') as f:
                descriptor = f.id.get_vfd_handle()
                self.assertNotEqual(descriptor, 0)
                os.fsync(descriptor)
        finally:
            shutil.rmtree(dn_tmp)


class TestCacheConfig(TestCase):
    def test_simple_gets(self):
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestFileID.TestCacheConfig.test_simple_gets')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, mode='x') as f:
                hit_rate = f._id.get_mdc_hit_rate()
                mdc_size = f._id.get_mdc_size()

        finally:
            shutil.rmtree(dn_tmp)

    def test_hitrate_reset(self):
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestFileID.TestCacheConfig.test_hitrate_reset')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, mode='x') as f:
                hit_rate = f._id.get_mdc_hit_rate()
                f._id.reset_mdc_hit_rate_stats()
                hit_rate = f._id.get_mdc_hit_rate()
                assert hit_rate == 0

        finally:
            shutil.rmtree(dn_tmp)

    def test_mdc_config_get(self):
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestFileID.TestCacheConfig.test_mdc_config_get')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, mode='x') as f:
                conf = f._id.get_mdc_config()
                f._id.set_mdc_config(conf)
        finally:
            shutil.rmtree(dn_tmp)


class TestVlenData(TestCase):
    def test_vlen_strings(self):
        # Create file with dataset containing vlen arrays of vlen strings
        dn_tmp = tempfile.mkdtemp('h5py.lowtest.test_h5f.TestVlenStrings.test_vlen_strings')
        fn_h5 = os.path.join(dn_tmp, 'test.h5')
        try:
            with File(fn_h5, mode='w') as h:
                vlen_str = special_dtype(vlen=str)
                vlen_vlen_str = special_dtype(vlen=vlen_str)

                ds = h.create_dataset('/com', (2,), dtype=vlen_vlen_str)
                ds[0] = (np.array(["a", "b", "c"], dtype=vlen_vlen_str))
                ds[1] = (np.array(["d", "e", "f","g"], dtype=vlen_vlen_str))

            with File(fn_h5, "r") as h:
                ds = h["com"]
                assert ds[0].tolist() == [b'a', b'b', b'c']
                assert ds[1].tolist() == [b'd', b'e', b'f', b'g']

        finally:
            shutil.rmtree(dn_tmp)
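

# Illustrative sketch (not part of the test suite): special_dtype(vlen=...) is
# the older spelling; h5py.string_dtype() builds the same variable-length string
# type.  File and dataset names are hypothetical.
def _example_vlen_strings(path="vlen_example.h5"):
    import h5py
    str_dt = h5py.string_dtype()            # variable-length UTF-8 strings
    with h5py.File(path, "w") as f:
        ds = f.create_dataset("names", (3,), dtype=str_dt)
        ds[:] = ["alpha", "beta", "gamma"]
    with h5py.File(path, "r") as f:
        assert f["names"][0] == b"alpha"    # read back as bytes by default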
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/test_h5o.py0000644000175000017500000000077414350630273017527 0ustar00takluyvertakluyverimport pytest

from .common import TestCase
from h5py import File


class SampleException(Exception):
    pass

def throwing(name, obj):
    print(name, obj)
    raise SampleException("throwing exception")

class TestVisit(TestCase):
    def test_visit(self):
        fname = self.mktemp()
        fid = File(fname, 'w')
        fid.create_dataset('foo', (100,), dtype='uint8')
        with pytest.raises(SampleException, match='throwing exception'):
            fid.visititems(throwing)
        fid.close()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_h5p.py0000644000175000017500000001530114507227212017517 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import unittest as ut

from h5py import h5p, h5f, version

from .common import TestCase


class TestLibver(TestCase):

    """
        Feature: Setting/getting lib ver bounds
    """

    def test_libver(self):
        """ Test libver bounds set/get """
        plist = h5p.create(h5p.FILE_ACCESS)
        plist.set_libver_bounds(h5f.LIBVER_EARLIEST, h5f.LIBVER_LATEST)
        self.assertEqual((h5f.LIBVER_EARLIEST, h5f.LIBVER_LATEST),
                         plist.get_libver_bounds())

    def test_libver_v18(self):
        """ Test libver bounds set/get for H5F_LIBVER_V18"""
        plist = h5p.create(h5p.FILE_ACCESS)
        plist.set_libver_bounds(h5f.LIBVER_EARLIEST, h5f.LIBVER_V18)
        self.assertEqual((h5f.LIBVER_EARLIEST, h5f.LIBVER_V18),
                         plist.get_libver_bounds())

    def test_libver_v110(self):
        """ Test libver bounds set/get for H5F_LIBVER_V110"""
        plist = h5p.create(h5p.FILE_ACCESS)
        plist.set_libver_bounds(h5f.LIBVER_V18, h5f.LIBVER_V110)
        self.assertEqual((h5f.LIBVER_V18, h5f.LIBVER_V110),
                         plist.get_libver_bounds())

    @ut.skipIf(version.hdf5_version_tuple < (1, 11, 4),
               'Requires HDF5 1.11.4 or later')
    def test_libver_v112(self):
        """ Test libver bounds set/get for H5F_LIBVER_V112"""
        plist = h5p.create(h5p.FILE_ACCESS)
        plist.set_libver_bounds(h5f.LIBVER_V18, h5f.LIBVER_V112)
        self.assertEqual((h5f.LIBVER_V18, h5f.LIBVER_V112),
                         plist.get_libver_bounds())

class TestDA(TestCase):
    '''
    Feature: setting/getting chunk cache size on a dataset access property list
    '''
    def test_chunk_cache(self):
        '''test get/set chunk cache '''
        dalist = h5p.create(h5p.DATASET_ACCESS)
        nslots = 10000  # 40kb hash table
        nbytes = 1000000  # 1MB cache size
        w0 = .5  # even blend of eviction strategy

        dalist.set_chunk_cache(nslots, nbytes, w0)
        self.assertEqual((nslots, nbytes, w0),
                         dalist.get_chunk_cache())

    def test_efile_prefix(self):
        '''test get/set efile prefix '''
        dalist = h5p.create(h5p.DATASET_ACCESS)
        self.assertEqual(dalist.get_efile_prefix().decode(), '')

        efile_prefix = "path/to/external/dataset"
        dalist.set_efile_prefix(efile_prefix.encode('utf-8'))
        self.assertEqual(dalist.get_efile_prefix().decode(),
                         efile_prefix)

        efile_prefix = "${ORIGIN}"
        dalist.set_efile_prefix(efile_prefix.encode('utf-8'))
        self.assertEqual(dalist.get_efile_prefix().decode(),
                         efile_prefix)

    def test_virtual_prefix(self):
        '''test get/set virtual prefix '''
        dalist = h5p.create(h5p.DATASET_ACCESS)
        self.assertEqual(dalist.get_virtual_prefix().decode(), '')

        virtual_prefix = "path/to/virtual/dataset"
        dalist.set_virtual_prefix(virtual_prefix.encode('utf-8'))
        self.assertEqual(dalist.get_virtual_prefix().decode(),
                         virtual_prefix)
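

# Illustrative sketch (not part of the test suite): the chunk-cache parameters
# exercised above on a dataset access property list are also exposed at the
# high level as the rdcc_* arguments of h5py.File, which set the defaults for
# every dataset opened through that file.  Names and sizes are hypothetical.
def _example_high_level_chunk_cache(path="cache_example.h5"):
    import h5py
    with h5py.File(path, "a",
                   rdcc_nslots=10007,          # hash table slots (ideally prime)
                   rdcc_nbytes=16 * 1024**2,   # 16 MiB raw chunk cache
                   rdcc_w0=0.75) as f:         # preemption policy blend
        f.require_dataset("data", shape=(1024, 1024), dtype="f4", chunks=(64, 64))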


class TestFA(TestCase):
    '''
    Feature: setting/getting mdc config on a file access property list
    '''
    def test_mdc_config(self):
        '''test get/set mdc config '''
        falist = h5p.create(h5p.FILE_ACCESS)

        config = falist.get_mdc_config()
        falist.set_mdc_config(config)

    def test_set_alignment(self):
        '''test get/set alignment '''
        falist = h5p.create(h5p.FILE_ACCESS)
        threshold = 10 * 1024  # alignment threshold of 10 KiB
        alignment = 1024 * 1024  # alignment of 1 MiB

        falist.set_alignment(threshold, alignment)
        self.assertEqual((threshold, alignment),
                         falist.get_alignment())

    @ut.skipUnless(
        version.hdf5_version_tuple >= (1, 12, 1) or
        (version.hdf5_version_tuple[:2] == (1, 10) and version.hdf5_version_tuple[2] >= 7),
        'Requires HDF5 1.12.1 or later or 1.10.x >= 1.10.7')
    def test_set_file_locking(self):
        '''test get/set file locking'''
        falist = h5p.create(h5p.FILE_ACCESS)
        use_file_locking = False
        ignore_when_disabled = False

        falist.set_file_locking(use_file_locking, ignore_when_disabled)
        self.assertEqual((use_file_locking, ignore_when_disabled),
                         falist.get_file_locking())


class TestPL(TestCase):
    def test_obj_track_times(self):
        """
        tests the object track times set/get
        """
        # test for groups
        gcid = h5p.create(h5p.GROUP_CREATE)
        gcid.set_obj_track_times(False)
        self.assertEqual(False, gcid.get_obj_track_times())

        gcid.set_obj_track_times(True)
        self.assertEqual(True, gcid.get_obj_track_times())
        # test for datasets
        dcid = h5p.create(h5p.DATASET_CREATE)
        dcid.set_obj_track_times(False)
        self.assertEqual(False, dcid.get_obj_track_times())

        dcid.set_obj_track_times(True)
        self.assertEqual(True, dcid.get_obj_track_times())

        # test for generic objects
        ocid = h5p.create(h5p.OBJECT_CREATE)
        ocid.set_obj_track_times(False)
        self.assertEqual(False, ocid.get_obj_track_times())

        ocid.set_obj_track_times(True)
        self.assertEqual(True, ocid.get_obj_track_times())

    def test_link_creation_tracking(self):
        """
        tests the link creation order set/get
        """

        gcid = h5p.create(h5p.GROUP_CREATE)
        gcid.set_link_creation_order(0)
        self.assertEqual(0, gcid.get_link_creation_order())

        flags = h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED
        gcid.set_link_creation_order(flags)
        self.assertEqual(flags, gcid.get_link_creation_order())

        # test for file creation
        fcpl = h5p.create(h5p.FILE_CREATE)
        fcpl.set_link_creation_order(flags)
        self.assertEqual(flags, fcpl.get_link_creation_order())

    def test_attr_phase_change(self):
        """
        test the attribute phase change
        """

        cid = h5p.create(h5p.OBJECT_CREATE)
        # test default value
        ret = cid.get_attr_phase_change()
        self.assertEqual((8,6), ret)

        # max_compact must be < 65536 (64k)
        with self.assertRaises(ValueError):
            cid.set_attr_phase_change(65536, 6)

        # Use dense attribute storage to avoid the 64 KiB size limit
        # on a single attribute in compact attribute storage.
        cid.set_attr_phase_change(0, 0)
        self.assertEqual((0,0), cid.get_attr_phase_change())
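

# Illustrative sketch (not part of the test suite): the creation-order flags set
# above through property lists correspond to what the high-level API configures
# when track_order=True is passed to File or create_group.  Names hypothetical.
def _example_track_order(path="order_example.h5"):
    import h5py
    with h5py.File(path, "w", track_order=True) as f:
        grp = f.create_group("g", track_order=True)
        grp["b"] = 1
        grp["a"] = 2
        # With creation order tracked, iteration follows insertion order.
        assert list(grp) == ["b", "a"]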
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_h5pl.py0000644000175000017500000000340014507227212017670 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import pytest

from h5py import h5pl
from h5py.tests.common import insubprocess, subproc_env


@pytest.mark.mpi_skip
@insubprocess
@subproc_env({'HDF5_PLUGIN_PATH': 'h5py_plugin_test'})
def test_default(request):
    assert h5pl.size() == 1
    assert h5pl.get(0) == b'h5py_plugin_test'


@pytest.mark.mpi_skip
@insubprocess
@subproc_env({'HDF5_PLUGIN_PATH': 'h5py_plugin_test'})
def test_append(request):
    h5pl.append(b'/opt/hdf5/vendor-plugin')
    assert h5pl.size() == 2
    assert h5pl.get(0) == b'h5py_plugin_test'
    assert h5pl.get(1) == b'/opt/hdf5/vendor-plugin'


@pytest.mark.mpi_skip
@insubprocess
@subproc_env({'HDF5_PLUGIN_PATH': 'h5py_plugin_test'})
def test_prepend(request):
    h5pl.prepend(b'/opt/hdf5/vendor-plugin')
    assert h5pl.size() == 2
    assert h5pl.get(0) == b'/opt/hdf5/vendor-plugin'
    assert h5pl.get(1) == b'h5py_plugin_test'


@pytest.mark.mpi_skip
@insubprocess
@subproc_env({'HDF5_PLUGIN_PATH': 'h5py_plugin_test'})
def test_insert(request):
    h5pl.insert(b'/opt/hdf5/vendor-plugin', 0)
    assert h5pl.size() == 2
    assert h5pl.get(0) == b'/opt/hdf5/vendor-plugin'
    assert h5pl.get(1) == b'h5py_plugin_test'


@pytest.mark.mpi_skip
@insubprocess
@subproc_env({'HDF5_PLUGIN_PATH': 'h5py_plugin_test'})
def test_replace(request):
    h5pl.replace(b'/opt/hdf5/vendor-plugin', 0)
    assert h5pl.size() == 1
    assert h5pl.get(0) == b'/opt/hdf5/vendor-plugin'


@pytest.mark.mpi_skip
@insubprocess
def test_remove(request):
    h5pl.remove(0)
    assert h5pl.size() == 0
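

# Illustrative sketch (not part of the test suite): the h5pl calls above manage
# HDF5's dynamic filter-plugin search path for the whole process.  The directory
# below is hypothetical; by default the path comes from HDF5_PLUGIN_PATH or the
# library's built-in default location.
def _example_plugin_path():
    from h5py import h5pl
    h5pl.prepend(b'/opt/hdf5/plugins')      # searched before existing entries
    paths = [h5pl.get(i) for i in range(h5pl.size())]
    assert paths[0] == b'/opt/hdf5/plugins'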
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/h5py/tests/test_h5s.py0000644000175000017500000000133314755054371017533 0ustar00takluyvertakluyverimport pytest

from h5py import h5s, version
from h5py._selector import Selector

class Helper:
    def __init__(self, shape: tuple):
        self.shape = shape

    def __getitem__(self, item) -> h5s.SpaceID:
        if not isinstance(item, tuple):
            item = (item,)
        space = h5s.create_simple(self.shape)
        sel = Selector(space)
        sel.make_selection(item)
        return space


@pytest.mark.skipif(version.hdf5_version_tuple < (1, 10, 7),
                    reason='H5Sselect_shape_same not available')
def test_same_shape():
    s1 = Helper((5, 6))[:3, :4]
    s2 = Helper((5, 6))[2:, 2:]
    assert s1.select_shape_same(s2)

    s3 = Helper((5, 6))[:4, :3]
    assert not s1.select_shape_same(s3)
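

# Illustrative sketch (not part of the test suite): the Helper above goes through
# the internal Selector; the same comparison can be made with plain h5s calls
# (select_shape_same needs HDF5 >= 1.10.7, as the skip marker above notes).
def _example_select_shape_same():
    from h5py import h5s
    a = h5s.create_simple((5, 6))
    a.select_hyperslab((0, 0), (3, 4))      # a 3x4 block at the origin
    b = h5s.create_simple((5, 6))
    b.select_hyperslab((2, 2), (3, 4))      # a 3x4 block elsewhere
    assert a.select_shape_same(b)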
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_h5t.py0000644000175000017500000001466614045746670017553 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

import numpy as np

import h5py
from h5py import h5t

from .common import TestCase, ut


class TestCompound(ut.TestCase):

    """
        Feature: Compound types can be created from Python dtypes
    """

    def test_ref(self):
        """ Reference types are correctly stored in compound types (issue 144)
        """
        dt = np.dtype([('a', h5py.ref_dtype), ('b', '<f4')])

        # Implicit selection of all fields -> all fields
        out, format = sel2.read_dtypes(dt, ())
        self.assertEqual(out, format)
        self.assertEqual(out, dt)

        # Explicit selection of fields -> requested fields
        out, format = sel2.read_dtypes(dt, ('a','b'))
        self.assertEqual(out, format)
        self.assertEqual(out, np.dtype( [('a','i'), ('b','f')] ))

        # Explicit selection of exactly one field -> no fields
        out, format = sel2.read_dtypes(dt, ('a',))
        self.assertEqual(out, np.dtype('i'))
        self.assertEqual(format, np.dtype( [('a','i')] ))

        # Field does not appear in the named type
        with self.assertRaises(ValueError):
            out, format = sel2.read_dtypes(dt, ('j', 'k'))

class TestScalarSliceRules(BaseSelection):

    """
        Internal feature: selections rules for scalar datasets
    """

    def test_args(self):
        """ Permissible arguments for scalar slicing """
        shape, selection = sel2.read_selections_scalar(self.dsid, ())
        self.assertEqual(shape, None)
        self.assertEqual(selection.get_select_npoints(), 1)

        shape, selection = sel2.read_selections_scalar(self.dsid, (Ellipsis,))
        self.assertEqual(shape, ())
        self.assertEqual(selection.get_select_npoints(), 1)

        with self.assertRaises(ValueError):
            shape, selection = sel2.read_selections_scalar(self.dsid, (1,))

        dsid = self.f.create_dataset('y', (1,)).id
        with self.assertRaises(RuntimeError):
            shape, selection = sel2.read_selections_scalar(dsid, (1,))

class TestSelection(BaseSelection):

    """ High-level routes to generate a selection
    """

    def test_selection(self):
        dset = self.f.create_dataset('dset', (100,100))
        regref = dset.regionref[0:100, 0:100]

        # args is list, return a FancySelection
        st = sel.select((10,), list([1,2,3]), dset)
        self.assertIsInstance(st, sel.FancySelection)

        # args[0] is tuple, return a FancySelection
        st = sel.select((10,), ((1, 2, 3),), dset)
        self.assertIsInstance(st, sel.FancySelection)

        # args is a Boolean mask, return a PointSelection
        st1 = sel.select((5,), np.array([True,False,False,False,True]), dset)
        self.assertIsInstance(st1, sel.PointSelection)

        # args is int, return a SimpleSelection
        st2 = sel.select((10,), 1, dset)
        self.assertIsInstance(st2, sel.SimpleSelection)

        # args is str, should be rejected
        with self.assertRaises(TypeError):
            sel.select((100,), "foo", dset)

        # args is RegionReference, return a Selection instance
        st3 = sel.select((100,100), regref, dset)
        self.assertIsInstance(st3, sel.Selection)

        # args is RegionReference, but dataset is None
        with self.assertRaises(TypeError):
            sel.select((100,), regref, None)

        # args is RegionReference, but its shape doesn't match dataset shape
        with self.assertRaises(TypeError):
            sel.select((100,), regref, dset)

        # args is a single Selection instance, return the arg
        st4 = sel.select((100,100), st3, dset)
        self.assertEqual(st4,st3)

        # args is a single Selection instance, but args shape doesn't match Shape
        with self.assertRaises(TypeError):
            sel.select((100,), st3, dset)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/test_slicing.py0000644000175000017500000003306214350630273020460 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Dataset slicing test module.

    Tests all supported slicing operations, including read/write and
    broadcasting operations.  Does not test type conversion except for
    corner cases overlapping with slicing; for example, when selecting
    specific fields of a compound type.
"""

import numpy as np

from .common import ut, TestCase

import h5py
from h5py import h5s, h5t, h5d
from h5py import File, MultiBlockSlice

class BaseSlicing(TestCase):

    def setUp(self):
        self.f = File(self.mktemp(), 'w')

    def tearDown(self):
        if self.f:
            self.f.close()

class TestSingleElement(BaseSlicing):

    """
        Feature: Retrieving a single element works with NumPy semantics
    """

    def test_single_index(self):
        """ Single-element selection with [index] yields array scalar """
        dset = self.f.create_dataset('x', (1,), dtype='i1')
        out = dset[0]
        self.assertIsInstance(out, np.int8)

    def test_single_null(self):
        """ Single-element selection with [()] yields ndarray """
        dset = self.f.create_dataset('x', (1,), dtype='i1')
        out = dset[()]
        self.assertIsInstance(out, np.ndarray)
        self.assertEqual(out.shape, (1,))

    def test_scalar_index(self):
        """ Slicing with [...] yields scalar ndarray """
        dset = self.f.create_dataset('x', shape=(), dtype='f')
        out = dset[...]
        self.assertIsInstance(out, np.ndarray)
        self.assertEqual(out.shape, ())

    def test_scalar_null(self):
        """ Slicing with [()] yields array scalar """
        dset = self.f.create_dataset('x', shape=(), dtype='i1')
        out = dset[()]
        self.assertIsInstance(out, np.int8)

    def test_compound(self):
        """ Compound scalar is numpy.void, not tuple (issue 135) """
        dt = np.dtype([('a','i4'),('b','f8')])
        v = np.ones((4,), dtype=dt)
        dset = self.f.create_dataset('foo', (4,), data=v)
        self.assertEqual(dset[0], v[0])
        self.assertIsInstance(dset[0], np.void)

class TestObjectIndex(BaseSlicing):

    """
        Feature: numpy.object_ subtypes map to real Python objects
    """

    def test_reference(self):
        """ Indexing a reference dataset returns a h5py.Reference instance """
        dset = self.f.create_dataset('x', (1,), dtype=h5py.ref_dtype)
        dset[0] = self.f.ref
        self.assertEqual(type(dset[0]), h5py.Reference)

    def test_regref(self):
        """ Indexing a region reference dataset returns a h5py.RegionReference
        """
        dset1 = self.f.create_dataset('x', (10,10))
        regref = dset1.regionref[...]
        dset2 = self.f.create_dataset('y', (1,), dtype=h5py.regionref_dtype)
        dset2[0] = regref
        self.assertEqual(type(dset2[0]), h5py.RegionReference)

    def test_reference_field(self):
        """ Compound types of which a reference is an element work right """
        dt = np.dtype([('a', 'i'),('b', h5py.ref_dtype)])

        dset = self.f.create_dataset('x', (1,), dtype=dt)
        dset[0] = (42, self.f['/'].ref)

        out = dset[0]
        self.assertEqual(type(out[1]), h5py.Reference)  # isinstance does NOT work

    def test_scalar(self):
        """ Indexing returns a real Python object on scalar datasets """
        dset = self.f.create_dataset('x', (), dtype=h5py.ref_dtype)
        dset[()] = self.f.ref
        self.assertEqual(type(dset[()]), h5py.Reference)

    def test_bytestr(self):
        """ Indexing a byte string dataset returns a real python byte string
        """
        dset = self.f.create_dataset('x', (1,), dtype=h5py.string_dtype(encoding='ascii'))
        dset[0] = b"Hello there!"
        self.assertEqual(type(dset[0]), bytes)
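

# Illustrative sketch (not part of the test suite): object and region references
# read back as real h5py objects and can be dereferenced through the file.
# File and dataset names are hypothetical.
def _example_dereference(path="ref_example.h5"):
    import h5py
    with h5py.File(path, "w") as f:
        dset = f.create_dataset("x", data=list(range(10)))
        f.attrs["target"] = dset.ref              # object reference
        regref = dset.regionref[2:5]              # region reference
        assert f[f.attrs["target"]] == dset       # dereference via the file
        assert list(dset[regref]) == [2, 3, 4]    # read only the referenced region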

class TestSimpleSlicing(TestCase):

    """
        Feature: Simple NumPy-style slices (start:stop:step) are supported.
    """

    def setUp(self):
        self.f = File(self.mktemp(), 'w')
        self.arr = np.arange(10)
        self.dset = self.f.create_dataset('x', data=self.arr)

    def tearDown(self):
        if self.f:
            self.f.close()

    def test_negative_stop(self):
        """ Negative stop indexes work as they do in NumPy """
        self.assertArrayEqual(self.dset[2:-2], self.arr[2:-2])

    def test_write(self):
        """Assigning to a 1D slice of a 2D dataset
        """
        dset = self.f.create_dataset('x2', (10, 2))

        x = np.zeros((10, 1))
        dset[:, 0] = x[:, 0]
        with self.assertRaises(TypeError):
            dset[:, 1] = x

class TestArraySlicing(BaseSlicing):

    """
        Feature: Array types are handled appropriately
    """

    def test_read(self):
        """ Read arrays tack array dimensions onto end of shape tuple """
        dt = np.dtype('(3,)f8')
        dset = self.f.create_dataset('x',(10,),dtype=dt)
        self.assertEqual(dset.shape, (10,))
        self.assertEqual(dset.dtype, dt)

        # Full read
        out = dset[...]
        self.assertEqual(out.dtype, np.dtype('f8'))
        self.assertEqual(out.shape, (10,3))

        # Single element
        out = dset[0]
        self.assertEqual(out.dtype, np.dtype('f8'))
        self.assertEqual(out.shape, (3,))

        # Range
        out = dset[2:8:2]
        self.assertEqual(out.dtype, np.dtype('f8'))
        self.assertEqual(out.shape, (3,3))

    def test_write_broadcast(self):
        """ Array fill from constant is not supported (issue 211).
        """
        dt = np.dtype('(3,)i')

        dset = self.f.create_dataset('x', (10,), dtype=dt)

        with self.assertRaises(TypeError):
            dset[...] = 42

    def test_write_element(self):
        """ Write a single element to the array

        Issue 211.
        """
        dt = np.dtype('(3,)f8')
        dset = self.f.create_dataset('x', (10,), dtype=dt)

        data = np.array([1,2,3.0])
        dset[4] = data

        out = dset[4]
        self.assertTrue(np.all(out == data))

    def test_write_slices(self):
        """ Write slices to array type """
        dt = np.dtype('(3,)i')

        data1 = np.ones((2,), dtype=dt)
        data2 = np.ones((4,5), dtype=dt)

        dset = self.f.create_dataset('x', (10,9,11), dtype=dt)

        dset[0,0,2:4] = data1
        self.assertArrayEqual(dset[0,0,2:4], data1)

        dset[3, 1:5, 6:11] = data2
        self.assertArrayEqual(dset[3, 1:5, 6:11], data2)


    def test_roundtrip(self):
        """ Read the contents of an array and write them back

        Issue 211.
        """
        dt = np.dtype('(3,)f8')
        dset = self.f.create_dataset('x', (10,), dtype=dt)

        out = dset[...]
        dset[...] = out

        self.assertTrue(np.all(dset[...] == out))


class TestZeroLengthSlicing(BaseSlicing):

    """
        Slices resulting in empty arrays
    """

    def test_slice_zero_length_dimension(self):
        """ Slice a dataset with a zero in its shape vector
            along the zero-length dimension """
        for i, shape in enumerate([(0,), (0, 3), (0, 2, 1)]):
            dset = self.f.create_dataset('x%d'%i, shape, dtype=int, maxshape=(None,)*len(shape))
            self.assertEqual(dset.shape, shape)
            out = dset[...]
            self.assertIsInstance(out, np.ndarray)
            self.assertEqual(out.shape, shape)
            out = dset[:]
            self.assertIsInstance(out, np.ndarray)
            self.assertEqual(out.shape, shape)
            if len(shape) > 1:
                out = dset[:, :1]
                self.assertIsInstance(out, np.ndarray)
                self.assertEqual(out.shape[:2], (0, 1))

    def test_slice_other_dimension(self):
        """ Slice a dataset with a zero in its shape vector
            along a non-zero-length dimension """
        for i, shape in enumerate([(3, 0), (1, 2, 0), (2, 0, 1)]):
            dset = self.f.create_dataset('x%d'%i, shape, dtype=int, maxshape=(None,)*len(shape))
            self.assertEqual(dset.shape, shape)
            out = dset[:1]
            self.assertIsInstance(out, np.ndarray)
            self.assertEqual(out.shape, (1,)+shape[1:])

    def test_slice_of_length_zero(self):
        """ Get a slice of length zero from a non-empty dataset """
        for i, shape in enumerate([(3,), (2, 2,), (2,  1, 5)]):
            dset = self.f.create_dataset('x%d'%i, data=np.zeros(shape, int), maxshape=(None,)*len(shape))
            self.assertEqual(dset.shape, shape)
            out = dset[1:1]
            self.assertIsInstance(out, np.ndarray)
            self.assertEqual(out.shape, (0,)+shape[1:])

class TestFieldNames(BaseSlicing):

    """
        Field names for read & write
    """

    dt = np.dtype([('a', 'f'), ('b', 'i'), ('c', 'f4')])
    data = np.ones((100,), dtype=dt)

    def setUp(self):
        BaseSlicing.setUp(self)
        self.dset = self.f.create_dataset('x', (100,), dtype=self.dt)
        self.dset[...] = self.data

    def test_read(self):
        """ Test read with field selections """
        self.assertArrayEqual(self.dset['a'], self.data['a'])

    def test_unicode_names(self):
        """ Unicode field names for for read and write """
        self.assertArrayEqual(self.dset['a'], self.data['a'])
        self.dset['a'] = 42
        data = self.data.copy()
        data['a'] = 42
        self.assertArrayEqual(self.dset['a'], data['a'])

    def test_write(self):
        """ Test write with field selections """
        data2 = self.data.copy()
        data2['a'] *= 2
        self.dset['a'] = data2
        self.assertTrue(np.all(self.dset[...] == data2))
        data2['b'] *= 4
        self.dset['b'] = data2
        self.assertTrue(np.all(self.dset[...] == data2))
        data2['a'] *= 3
        data2['c'] *= 3
        self.dset['a','c'] = data2
        self.assertTrue(np.all(self.dset[...] == data2))

    def test_write_noncompound(self):
        """ Test write with non-compound source (single-field) """
        data2 = self.data.copy()
        data2['b'] = 1.0
        self.dset['b'] = 1.0
        self.assertTrue(np.all(self.dset[...] == data2))
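

# Illustrative sketch (not part of the test suite): reading or writing with field
# names selects only those fields of a compound dataset, much like NumPy
# structured arrays.  Names below are hypothetical.
def _example_field_selection(path="fields_example.h5"):
    import numpy as np
    import h5py
    dt = np.dtype([('a', 'f4'), ('b', 'i4')])
    with h5py.File(path, "w") as f:
        dset = f.create_dataset("table", (5,), dtype=dt)
        dset['b'] = np.arange(5, dtype='i4')   # write a single field
        col = dset['a']                        # read a single field -> plain float32 array
        assert col.dtype == np.dtype('f4')
        both = dset['a', 'b']                  # read several fields -> compound array
        assert both.dtype.names == ('a', 'b')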


class TestMultiBlockSlice(BaseSlicing):

    def setUp(self):
        super().setUp()
        self.arr = np.arange(10)
        self.dset = self.f.create_dataset('x', data=self.arr)

    def test_default(self):
        # Default selects entire dataset as one block
        mbslice = MultiBlockSlice()

        self.assertEqual(mbslice.indices(10), (0, 1, 10, 1))
        np.testing.assert_array_equal(self.dset[mbslice], self.arr)

    def test_default_explicit(self):
        mbslice = MultiBlockSlice(start=0, count=10, stride=1, block=1)

        self.assertEqual(mbslice.indices(10), (0, 1, 10, 1))
        np.testing.assert_array_equal(self.dset[mbslice], self.arr)

    def test_start(self):
        mbslice = MultiBlockSlice(start=4)

        self.assertEqual(mbslice.indices(10), (4, 1, 6, 1))
        np.testing.assert_array_equal(self.dset[mbslice], np.array([4, 5, 6, 7, 8, 9]))

    def test_count(self):
        mbslice = MultiBlockSlice(count=7)

        self.assertEqual(mbslice.indices(10), (0, 1, 7, 1))
        np.testing.assert_array_equal(
            self.dset[mbslice], np.array([0, 1, 2, 3, 4, 5, 6])
        )

    def test_count_more_than_length_error(self):
        mbslice = MultiBlockSlice(count=11)
        with self.assertRaises(ValueError):
            mbslice.indices(10)

    def test_stride(self):
        mbslice = MultiBlockSlice(stride=2)

        self.assertEqual(mbslice.indices(10), (0, 2, 5, 1))
        np.testing.assert_array_equal(self.dset[mbslice], np.array([0, 2, 4, 6, 8]))

    def test_stride_zero_error(self):
        with self.assertRaises(ValueError):
            # This would cause a ZeroDivisionError if not caught
            MultiBlockSlice(stride=0, block=0).indices(10)

    def test_stride_block_equal(self):
        mbslice = MultiBlockSlice(stride=2, block=2)

        self.assertEqual(mbslice.indices(10), (0, 2, 5, 2))
        np.testing.assert_array_equal(self.dset[mbslice], self.arr)

    def test_block_more_than_stride_error(self):
        with self.assertRaises(ValueError):
            MultiBlockSlice(block=3)

        with self.assertRaises(ValueError):
            MultiBlockSlice(stride=2, block=3)

    def test_stride_more_than_block(self):
        mbslice = MultiBlockSlice(stride=3, block=2)

        self.assertEqual(mbslice.indices(10), (0, 3, 3, 2))
        np.testing.assert_array_equal(self.dset[mbslice], np.array([0, 1, 3, 4, 6, 7]))

    def test_block_overruns_extent_error(self):
        # If fully described then must fit within extent
        mbslice = MultiBlockSlice(start=2, count=2, stride=5, block=4)
        with self.assertRaises(ValueError):
            mbslice.indices(10)

    def test_fully_described(self):
        mbslice = MultiBlockSlice(start=1, count=2, stride=5, block=4)

        self.assertEqual(mbslice.indices(10), (1, 5, 2, 4))
        np.testing.assert_array_equal(
            self.dset[mbslice], np.array([1, 2, 3, 4, 6, 7, 8, 9])
        )

    def test_count_calculated(self):
        # If not given, count should be calculated to select as many full blocks as possible
        mbslice = MultiBlockSlice(start=1, stride=3, block=2)

        self.assertEqual(mbslice.indices(10), (1, 3, 3, 2))
        np.testing.assert_array_equal(self.dset[mbslice], np.array([1, 2, 4, 5, 7, 8]))

    def test_zero_count_calculated_error(self):
        # In this case, there is no possible count to select even one block, so error
        mbslice = MultiBlockSlice(start=8, stride=4, block=3)

        with self.assertRaises(ValueError):
            mbslice.indices(10)
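

# Illustrative sketch (not part of the test suite): MultiBlockSlice maps directly
# onto an HDF5 hyperslab (start, stride, count, block): `count` blocks of `block`
# elements each, spaced `stride` apart.  The dataset name is hypothetical.
def _example_multiblockslice(path="mbs_example.h5"):
    import numpy as np
    import h5py
    with h5py.File(path, "w") as f:
        dset = f.create_dataset("x", data=np.arange(10))
        mbs = h5py.MultiBlockSlice(start=1, count=3, stride=3, block=2)
        # Selects elements 1,2  4,5  7,8 -> three 2-element blocks, 3 apart.
        assert list(dset[mbs]) == [1, 2, 4, 5, 7, 8]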
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5828822
h5py-3.13.0/h5py/tests/test_vds/0000755000175000017500000000000014755127217017256 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/tests/test_vds/__init__.py0000644000175000017500000000014714045746670021373 0ustar00takluyvertakluyver
from .test_virtual_source import *
from .test_highlevel_vds import *
from .test_lowlevel_vds import *
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/h5py/tests/test_vds/test_highlevel_vds.py0000644000175000017500000004155614350630273023515 0ustar00takluyvertakluyver'''
Unit test for the high level vds interface for eiger
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''
import numpy as np
from numpy.testing import assert_array_equal
import os
import os.path as osp
import shutil
import tempfile

import h5py as h5
from ..common import ut
from ..._hl.vds import vds_support


@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class TestEigerHighLevel(ut.TestCase):
    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ['raw_file_1.h5', 'raw_file_2.h5', 'raw_file_3.h5']
        for k, outfile in enumerate(self.fname):
            filename = osp.join(self.working_dir, outfile)
            f = h5.File(filename, 'w')
            f['data'] = np.ones((20, 200, 200)) * k
            f.close()

        f = h5.File(osp.join(self.working_dir, 'raw_file_4.h5'), 'w')
        f['data'] = np.ones((18, 200, 200)) * 3
        self.fname.append('raw_file_4.h5')
        self.fname = [osp.join(self.working_dir, ix) for ix in self.fname]
        f.close()

    def test_eiger_high_level(self):
        outfile = osp.join(self.working_dir, 'eiger.h5')
        layout = h5.VirtualLayout(shape=(78, 200, 200), dtype=float)

        M_minus_1 = 0
        # Create the virtual dataset file
        with h5.File(outfile, 'w', libver='latest') as f:
            for foo in self.fname:
                in_data = h5.File(foo, 'r')['data']
                src_shape = in_data.shape
                in_data.file.close()
                M = M_minus_1 + src_shape[0]
                vsource = h5.VirtualSource(foo, 'data', shape=src_shape)
                layout[M_minus_1:M, :, :] = vsource
                M_minus_1 = M
            f.create_virtual_dataset('data', layout, fillvalue=45)

        f = h5.File(outfile, 'r')['data']
        self.assertEqual(f[10, 100, 10], 0.0)
        self.assertEqual(f[30, 100, 100], 1.0)
        self.assertEqual(f[50, 100, 100], 2.0)
        self.assertEqual(f[70, 100, 100], 3.0)
        f.file.close()

    def tearDown(self):
        shutil.rmtree(self.working_dir)
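

# Illustrative sketch (not part of the test suite): the smallest useful virtual
# dataset, one source file mapped into a larger layout, with a fill value for
# the unmapped region.  File and dataset names are hypothetical.
def _example_minimal_vds(src="source.h5", dst="vds.h5"):
    import numpy as np
    import h5py
    with h5py.File(src, "w") as f:
        f.create_dataset("data", data=np.arange(10, dtype="i8"))
    layout = h5py.VirtualLayout(shape=(20,), dtype="i8")
    layout[:10] = h5py.VirtualSource(src, "data", shape=(10,))
    with h5py.File(dst, "w", libver="latest") as f:
        f.create_virtual_dataset("data", layout, fillvalue=-1)
        assert list(f["data"][8:12]) == [8, 9, -1, -1]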

'''
Unit test for the high level vds interface for excalibur
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''

class ExcaliburData:
    FEM_PIXELS_PER_CHIP_X = 256
    FEM_PIXELS_PER_CHIP_Y = 256
    FEM_CHIPS_PER_STRIPE_X = 8
    FEM_CHIPS_PER_STRIPE_Y = 1
    FEM_STRIPES_PER_MODULE = 2

    @property
    def sensor_module_dimensions(self):
        x_pixels = self.FEM_PIXELS_PER_CHIP_X * self.FEM_CHIPS_PER_STRIPE_X
        y_pixels = self.FEM_PIXELS_PER_CHIP_Y * self.FEM_CHIPS_PER_STRIPE_Y * self.FEM_STRIPES_PER_MODULE
        return y_pixels, x_pixels,

    @property
    def fem_stripe_dimensions(self):
        x_pixels = self.FEM_PIXELS_PER_CHIP_X * self.FEM_CHIPS_PER_STRIPE_X
        y_pixels = self.FEM_PIXELS_PER_CHIP_Y * self.FEM_CHIPS_PER_STRIPE_Y
        return y_pixels, x_pixels,

    def generate_sensor_module_image(self, value, dtype='uint16'):
        dset = np.empty(shape=self.sensor_module_dimensions, dtype=dtype)
        dset.fill(value)
        return dset

    def generate_fem_stripe_image(self, value, dtype='uint16'):
        dset = np.empty(shape=self.fem_stripe_dimensions, dtype=dtype)
        dset.fill(value)
        return dset


@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class TestExcaliburHighLevel(ut.TestCase):
    def create_excalibur_fem_stripe_datafile(self, fname, nframes, excalibur_data,scale):
        shape = (nframes,) + excalibur_data.fem_stripe_dimensions
        max_shape = shape  # (None,) + excalibur_data.fem_stripe_dimensions
        chunk = (1,) + excalibur_data.fem_stripe_dimensions
        with h5.File(fname, 'w', libver='latest') as f:
            dset = f.create_dataset('data', shape=shape, maxshape=max_shape, chunks=chunk, dtype='uint16')
            for data_value_index in np.arange(nframes):
                dset[data_value_index] = excalibur_data.generate_fem_stripe_image(data_value_index*scale)

    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ["stripe_%d.h5" % stripe for stripe in range(1,7)]
        self.fname = [osp.join(self.working_dir, f) for f in self.fname]
        nframes = 5
        self.edata = ExcaliburData()
        for k, raw_file in enumerate(self.fname):
            self.create_excalibur_fem_stripe_datafile(raw_file, nframes, self.edata,k)

    def test_excalibur_high_level(self):
        outfile = osp.join(self.working_dir, 'excalibur.h5')
        f = h5.File(outfile,'w',libver='latest') # create an output file.
        in_key = 'data'  # name of the dataset inside each input file
        in_sh = h5.File(self.fname[0],'r')[in_key].shape # get the input shape
        dtype = h5.File(self.fname[0],'r')[in_key].dtype # get the datatype

        # now generate the output shape
        vertical_gap = 10 # pixels spacing in the vertical
        nfiles = len(self.fname)
        nframes = in_sh[0]
        width = in_sh[2]
        height = (in_sh[1]*nfiles) + (vertical_gap*(nfiles-1))
        out_sh = (nframes, height, width)

        # Virtual layout is a representation of the output dataset
        layout = h5.VirtualLayout(shape=out_sh, dtype=dtype)
        offset = 0 # initial offset
        for i, filename in enumerate(self.fname):
            # A representation of the input dataset
            vsource = h5.VirtualSource(filename, in_key, shape=in_sh)
            layout[:, offset:(offset + in_sh[1]), :] = vsource # map them with indexing
            offset += in_sh[1] + vertical_gap # increment the offset

        # pass the fill value and list of maps
        f.create_virtual_dataset('data', layout, fillvalue=0x1)
        f.close()

        f = h5.File(outfile,'r')['data']
        self.assertEqual(f[3,100,0], 0.0)
        self.assertEqual(f[3,260,0], 1.0)
        self.assertEqual(f[3,350,0], 3.0)
        self.assertEqual(f[3,650,0], 6.0)
        self.assertEqual(f[3,900,0], 9.0)
        self.assertEqual(f[3,1150,0], 12.0)
        self.assertEqual(f[3,1450,0], 15.0)
        f.file.close()

    def tearDown(self):
        shutil.rmtree(self.working_dir)


'''
Unit test for the high level vds interface for percival
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''


@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class TestPercivalHighLevel(ut.TestCase):

    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ['raw_file_1.h5', 'raw_file_2.h5', 'raw_file_3.h5']
        for k, outfile in enumerate(self.fname):
            filename = osp.join(self.working_dir, outfile)
            f = h5.File(filename, 'w')
            f['data'] = np.ones((20, 200, 200)) * k
            f.close()

        f = h5.File(osp.join(self.working_dir, 'raw_file_4.h5'), 'w')
        f['data'] = np.ones((19,200,200))*3
        self.fname.append('raw_file_4.h5')
        self.fname = [osp.join(self.working_dir, ix) for ix in self.fname]
        f.close()

    def test_percival_high_level(self):
        outfile = osp.join(self.working_dir,  'percival.h5')

        # Virtual layout is a representation of the output dataset
        layout = h5.VirtualLayout(shape=(79, 200, 200), dtype=np.float64)
        for k, filename in enumerate(self.fname):
            dim1 = 19 if k == 3 else 20
            vsource = h5.VirtualSource(filename, 'data',shape=(dim1, 200, 200))
            layout[k:79:4, :, :] = vsource[:, :, :]

        # Create the virtual dataset file
        with h5.File(outfile, 'w', libver='latest') as f:
            f.create_virtual_dataset('data', layout, fillvalue=-5)

        foo = np.array(2 * list(range(4)))
        with h5.File(outfile,'r') as f:
            ds = f['data']
            line = ds[:8,100,100]
            self.assertEqual(ds.shape, (79,200,200),)
            assert_array_equal(line, foo)

    def test_percival_source_from_dataset(self):
        outfile = osp.join(self.working_dir,  'percival.h5')

        # Virtual layout is a representation of the output dataset
        layout = h5.VirtualLayout(shape=(79, 200, 200), dtype=np.float64)
        for k, filename in enumerate(self.fname):
            with h5.File(filename, 'r') as f:
                vsource = h5.VirtualSource(f['data'])
                layout[k:79:4, :, :] = vsource

        # Create the virtual dataset file
        with h5.File(outfile, 'w', libver='latest') as f:
            f.create_virtual_dataset('data', layout, fillvalue=-5)

        foo = np.array(2 * list(range(4)))
        with h5.File(outfile,'r') as f:
            ds = f['data']
            line = ds[:8,100,100]
            self.assertEqual(ds.shape, (79,200,200),)
            assert_array_equal(line, foo)

    def tearDown(self):
        shutil.rmtree(self.working_dir)

@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class SlicingTestCase(ut.TestCase):

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        # Create source files (1.h5 to 4.h5)
        for n in range(1, 5):
            with h5.File(osp.join(self.tmpdir, '{}.h5'.format(n)), 'w') as f:
                d = f.create_dataset('data', (100,), 'i4')
                d[:] = np.arange(100) + n

    def make_virtual_ds(self):
        # Assemble virtual dataset
        layout = h5.VirtualLayout((4, 100), 'i4', maxshape=(4, None))

        for n in range(1, 5):
            filename = osp.join(self.tmpdir, "{}.h5".format(n))
            vsource = h5.VirtualSource(filename, 'data', shape=(100,))
            # Fill the first half with positions 0, 2, 4... from the source
            layout[n - 1, :50] = vsource[0:100:2]
            # Fill the second half with places 1, 3, 5... from the source
            layout[n - 1, 50:] = vsource[1:100:2]

        outfile = osp.join(self.tmpdir, 'VDS.h5')

        # Add virtual dataset to output file
        with h5.File(outfile, 'w', libver='latest') as f:
            f.create_virtual_dataset('/group/data', layout, fillvalue=-5)

        return outfile

    def test_slice_source(self):
        outfile = self.make_virtual_ds()

        with h5.File(outfile, 'r') as f:
            assert_array_equal(f['/group/data'][0][:3], [1, 3, 5])
            assert_array_equal(f['/group/data'][0][50:53], [2, 4, 6])
            assert_array_equal(f['/group/data'][3][:3], [4, 6, 8])
            assert_array_equal(f['/group/data'][3][50:53], [5, 7, 9])

    def test_inspection(self):
        with h5.File(osp.join(self.tmpdir, '1.h5'), 'r') as f:
            assert not f['data'].is_virtual

        outfile = self.make_virtual_ds()

        with h5.File(outfile, 'r') as f:
            ds = f['/group/data']
            assert ds.is_virtual

            src_files = {osp.join(self.tmpdir, '{}.h5'.format(n))
                         for n in range(1, 5)}
            assert {s.file_name for s in ds.virtual_sources()} == src_files

    def test_mismatched_selections(self):
        layout = h5.VirtualLayout((4, 100), 'i4', maxshape=(4, None))

        filename = osp.join(self.tmpdir, "1.h5")
        vsource = h5.VirtualSource(filename, 'data', shape=(100,))
        with self.assertRaisesRegex(ValueError, r'different number'):
            layout[0, :49] = vsource[0:100:2]

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class IndexingTestCase(ut.TestCase):

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        # Create source file (1.h5)
        with h5.File(osp.join(self.tmpdir, '1.h5'), 'w') as f:
            d = f.create_dataset('data', (10,), 'i4')
            d[:] = np.arange(10)*10

    def test_index_layout(self):
        # Assemble virtual dataset (indexing target)
        layout = h5.VirtualLayout((100,), 'i4')

        inds = [3,6,20,25,33,47,70,75,96,98]

        filename = osp.join(self.tmpdir, "1.h5")
        vsource = h5.VirtualSource(filename, 'data', shape=(10,))
        layout[inds] = vsource

        outfile = osp.join(self.tmpdir, 'VDS.h5')

        # Assemble virtual dataset (indexing source)
        layout2 = h5.VirtualLayout((6,), 'i4')

        inds2 = [0,1,4,5,8]
        layout2[1:] = vsource[inds2]

        # Add virtual datasets to output file and close
        with h5.File(outfile, 'w', libver='latest') as f:
            f.create_virtual_dataset('/data', layout, fillvalue=-5)
            f.create_virtual_dataset(b'/data2', layout2, fillvalue=-3)

        # Read data from virtual datasets
        with h5.File(outfile, 'r') as f:
            data = f['/data'][()]
            data2 = f['/data2'][()]

        # Verify
        assert_array_equal(data[inds], np.arange(10)*10)
        assert_array_equal(data2[1:], [0,10,40,50,80])

        mask = np.zeros(100)
        mask[inds] = 1
        self.assertEqual(data[mask == 0].min(), -5)
        self.assertEqual(data[mask == 0].max(), -5)
        self.assertEqual(data2[0], -3)

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class RelativeLinkTestCase(ut.TestCase):

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        self.f1 = osp.join(self.tmpdir, 'testfile1.h5')
        self.f2 = osp.join(self.tmpdir, 'testfile2.h5')

        self.data1 = np.arange(10)
        self.data2 = np.arange(10) * -1

        with h5.File(self.f1, 'w') as f:
            # dataset
            ds = f.create_dataset('data', (10,), 'f4')
            ds[:] = self.data1

        with h5.File(self.f2, 'w') as f:
            # dataset
            ds = f.create_dataset('data', (10,), 'f4')
            ds[:] = self.data2
            self.make_vds(f)

    def make_vds(self, f):
        # virtual dataset
        layout = h5.VirtualLayout((2, 10), 'f4')
        vsource1 = h5.VirtualSource(self.f1, 'data', shape=(10,))
        vsource2 = h5.VirtualSource(self.f2, 'data', shape=(10,))
        layout[0] = vsource1
        layout[1] = vsource2
        f.create_virtual_dataset('virtual', layout)

    def test_relative_vds(self):
        with h5.File(self.f2) as f:
            data = f['virtual'][:]
            np.testing.assert_array_equal(data[0], self.data1)
            np.testing.assert_array_equal(data[1], self.data2)

        # move f2 -> f3
        f3 = osp.join(self.tmpdir, 'testfile3.h5')
        os.rename(self.f2, f3)

        with h5.File(f3) as f:
            data = f['virtual'][:]
            assert data.dtype == 'f4'
            np.testing.assert_array_equal(data[0], self.data1)
            np.testing.assert_array_equal(data[1], self.data2)

        # moving other file
        f4 = osp.join(self.tmpdir, 'testfile4.h5')
        os.rename(self.f1, f4)

        with h5.File(f3) as f:
            data = f['virtual'][:]
            assert data.dtype == 'f4'
            # unavailable data is silently converted to default value
            np.testing.assert_array_equal(data[0], 0)
            np.testing.assert_array_equal(data[1], self.data2)

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

class RelativeLinkBuildVDSTestCase(RelativeLinkTestCase):
    # Test a link to the same file with the virtual dataset created by
    # File.build_virtual_dataset()
    def make_vds(self, f):
        with f.build_virtual_dataset('virtual', (2, 10), dtype='f4') as layout:
            layout[0] = h5.VirtualSource(self.f1, 'data', shape=(10,))
            layout[1] = h5.VirtualSource(self.f2, 'data', shape=(10,))

@ut.skipUnless(vds_support,
               'VDS requires HDF5 >= 1.9.233')
class VDSUnlimitedTestCase(ut.TestCase):

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        self.path = osp.join(self.tmpdir, "resize.h5")
        with h5.File(self.path, "w") as f:
            source_dset = f.create_dataset(
                "source",
                data=np.arange(20),
                shape=(10, 2),
                maxshape=(None, 2),
                chunks=(10, 1),
                fillvalue=-1
            )
            self.layout = h5.VirtualLayout((10, 1), int, maxshape=(None, 1))
            layout_source = h5.VirtualSource(source_dset)
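            # Map the (unlimited) second column of the source onto the single
            # column of the layout; because both slices use h5.UNLIMITED, the
            # virtual dataset grows automatically when the source dataset is
            # resized (exercised in test_unlimited_axis below).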
            self.layout[:h5.UNLIMITED, 0] = layout_source[:h5.UNLIMITED, 1]

            f.create_virtual_dataset("virtual", self.layout)

    def test_unlimited_axis(self):
        comp1 = np.arange(1, 20, 2).reshape(10, 1)
        comp2 = np.vstack((
            comp1,
            np.full(shape=(10, 1), fill_value=-1)
        ))
        comp3 = np.vstack((
            comp1,
            np.full(shape=(10, 1), fill_value=0)
        ))
        with h5.File(self.path, "a") as f:
            source_dset = f['source']
            virtual_dset = f['virtual']
            np.testing.assert_array_equal(comp1, virtual_dset)
            source_dset.resize(20, axis=0)
            np.testing.assert_array_equal(comp2, virtual_dset)
            source_dset[10:, 1] = np.zeros((10,), dtype=int)
            np.testing.assert_array_equal(comp3, virtual_dset)

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

if __name__ == "__main__":
    ut.main()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_vds/test_lowlevel_vds.py0000644000175000017500000002713314507227212023371 0ustar00takluyvertakluyver'''
Unit test for the low-level VDS interface for the Eiger use case
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''


from ..common import ut
import numpy as np
import h5py as h5
import tempfile


class TestEigerLowLevel(ut.TestCase):
    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ['raw_file_1.h5', 'raw_file_2.h5', 'raw_file_3.h5']
        k = 0
        for outfile in self.fname:
            filename = self.working_dir + outfile
            f = h5.File(filename, 'w')
            f['data'] = np.ones((20, 200, 200))*k
            k += 1
            f.close()

        f = h5.File(self.working_dir+'raw_file_4.h5', 'w')
        f['data'] = np.ones((18, 200, 200))*3
        self.fname.append('raw_file_4.h5')
        self.fname = [self.working_dir+ix for ix in self.fname]
        f.close()

    def test_eiger_low_level(self):
        self.outfile = self.working_dir + 'eiger.h5'
        with h5.File(self.outfile, 'w', libver='latest') as f:
            vdset_shape = (78, 200, 200)
            vdset_max_shape = vdset_shape
            virt_dspace = h5.h5s.create_simple(vdset_shape, vdset_max_shape)
            dcpl = h5.h5p.create(h5.h5p.DATASET_CREATE)
            dcpl.set_fill_value(np.array([-1]))
            # Create the source dataset dataspace
            k = 0
            for foo in self.fname:
                in_data = h5.File(foo, 'r')['data']
                src_shape = in_data.shape
                max_src_shape = src_shape
                in_data.file.close()
                src_dspace = h5.h5s.create_simple(src_shape, max_src_shape)
                # Select the source dataset hyperslab
                src_dspace.select_hyperslab(start=(0, 0, 0),
                                            stride=(1, 1, 1),
                                            count=(1, 1, 1),
                                            block=src_shape)

                virt_dspace.select_hyperslab(start=(k, 0, 0),
                                             stride=(1, 1, 1),
                                             count=(1, 1, 1),
                                             block=src_shape)

                dcpl.set_virtual(virt_dspace, foo.encode('utf-8'),
                                 b'data', src_dspace)
                k += src_shape[0]

            # Create the virtual dataset
            h5.h5d.create(f.id, name=b"data", tid=h5.h5t.NATIVE_INT16,
                          space=virt_dspace, dcpl=dcpl)

        f = h5.File(self.outfile, 'r')['data']
        self.assertEqual(f[10, 100, 10], 0.0)
        self.assertEqual(f[30, 100, 100], 1.0)
        self.assertEqual(f[50, 100, 100], 2.0)
        self.assertEqual(f[70, 100, 100], 3.0)
        f.file.close()

    def tearDown(self):
        import os
        for f in self.fname:
            os.remove(f)
        os.remove(self.outfile)


if __name__ == "__main__":
    ut.main()
'''
Unit test for the low-level VDS interface for the Excalibur use case
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''


class ExcaliburData:
    FEM_PIXELS_PER_CHIP_X = 256
    FEM_PIXELS_PER_CHIP_Y = 256
    FEM_CHIPS_PER_STRIPE_X = 8
    FEM_CHIPS_PER_STRIPE_Y = 1
    FEM_STRIPES_PER_MODULE = 2

    @property
    def sensor_module_dimensions(self):
        x_pixels = self.FEM_PIXELS_PER_CHIP_X * self.FEM_CHIPS_PER_STRIPE_X
        y_pixels = self.FEM_PIXELS_PER_CHIP_Y * self.FEM_CHIPS_PER_STRIPE_Y * self.FEM_STRIPES_PER_MODULE
        return y_pixels, x_pixels,

    @property
    def fem_stripe_dimensions(self):
        x_pixels = self.FEM_PIXELS_PER_CHIP_X * self.FEM_CHIPS_PER_STRIPE_X
        y_pixels = self.FEM_PIXELS_PER_CHIP_Y * self.FEM_CHIPS_PER_STRIPE_Y
        return y_pixels, x_pixels,

    def generate_sensor_module_image(self, value, dtype='uint16'):
        dset = np.empty(shape=self.sensor_module_dimensions, dtype=dtype)
        dset.fill(value)
        return dset

    def generate_fem_stripe_image(self, value, dtype='uint16'):
        dset = np.empty(shape=self.fem_stripe_dimensions, dtype=dtype)
        dset.fill(value)
        return dset


class TestExcaliburLowLevel(ut.TestCase):
    def create_excalibur_fem_stripe_datafile(self, fname, nframes, excalibur_data,scale):
        shape = (nframes,) + excalibur_data.fem_stripe_dimensions
        max_shape = (nframes,) + excalibur_data.fem_stripe_dimensions
        chunk = (1,) + excalibur_data.fem_stripe_dimensions
        with h5.File(fname, 'w', libver='latest') as f:
            dset = f.create_dataset('data', shape=shape, maxshape=max_shape, chunks=chunk, dtype='uint16')
            for data_value_index in np.arange(nframes):
                dset[data_value_index] = excalibur_data.generate_fem_stripe_image(data_value_index*scale)

    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ["stripe_%d.h5" % stripe for stripe in range(1,7)]
        self.fname = [self.working_dir+ix for ix in self.fname]
        nframes = 5
        self.edata = ExcaliburData()
        k=0
        for raw_file in self.fname:
            self.create_excalibur_fem_stripe_datafile(raw_file, nframes, self.edata,k)
            k+=1

    def test_excalibur_low_level(self):

        excalibur_data = self.edata
        self.outfile = self.working_dir+'excalibur.h5'
        vdset_stripe_shape = (1,) + excalibur_data.fem_stripe_dimensions
        vdset_stripe_max_shape = (5, ) + excalibur_data.fem_stripe_dimensions
        vdset_shape = (5,
                       excalibur_data.fem_stripe_dimensions[0] * len(self.fname) + (10 * (len(self.fname)-1)),
                       excalibur_data.fem_stripe_dimensions[1])
        vdset_max_shape = (5,
                           excalibur_data.fem_stripe_dimensions[0] * len(self.fname) + (10 * (len(self.fname)-1)),
                           excalibur_data.fem_stripe_dimensions[1])
        vdset_y_offset = 0

        # Create the virtual dataset file
        with h5.File(self.outfile, 'w', libver='latest') as f:

            # Create the source dataset dataspace
            src_dspace = h5.h5s.create_simple(vdset_stripe_shape, vdset_stripe_max_shape)
            # Create the virtual dataset dataspace
            virt_dspace = h5.h5s.create_simple(vdset_shape, vdset_max_shape)

            # Create the virtual dataset property list
            dcpl = h5.h5p.create(h5.h5p.DATASET_CREATE)
            dcpl.set_fill_value(np.array([0x01]))

            # Select the source dataset hyperslab
            src_dspace.select_hyperslab(start=(0, 0, 0), count=(1, 1, 1), block=vdset_stripe_max_shape)

            for raw_file in self.fname:
                # Select the virtual dataset hyperslab (for the source dataset)
                virt_dspace.select_hyperslab(start=(0, vdset_y_offset, 0),
                                             count=(1, 1, 1),
                                             block=vdset_stripe_max_shape)
                # Set the virtual dataset hyperslab to point to the real first dataset
                dcpl.set_virtual(virt_dspace, raw_file.encode('utf-8'),
                                 b"/data", src_dspace)
                vdset_y_offset += vdset_stripe_shape[1] + 10

            # Create the virtual dataset
            dset = h5.h5d.create(f.id, name=b"data",
                                 tid=h5.h5t.NATIVE_INT16, space=virt_dspace, dcpl=dcpl)
            assert(f['data'].fillvalue == 0x01)

        f = h5.File(self.outfile,'r')['data']
        self.assertEqual(f[3,100,0], 0.0)
        self.assertEqual(f[3,260,0], 1.0)
        self.assertEqual(f[3,350,0], 3.0)
        self.assertEqual(f[3,650,0], 6.0)
        self.assertEqual(f[3,900,0], 9.0)
        self.assertEqual(f[3,1150,0], 12.0)
        self.assertEqual(f[3,1450,0], 15.0)
        f.file.close()

    def tearDown(self):
        import os
        for f in self.fname:
            os.remove(f)
        os.remove(self.outfile)

'''
Unit test for the low-level VDS interface for the Percival use case
https://support.hdfgroup.org/HDF5/docNewFeatures/VDS/HDF5-VDS-requirements-use-cases-2014-12-10.pdf
'''


class TestPercivalLowLevel(ut.TestCase):

    def setUp(self):
        self.working_dir = tempfile.mkdtemp()
        self.fname = ['raw_file_1.h5','raw_file_2.h5','raw_file_3.h5']
        k = 0
        for outfile in self.fname:
            filename = self.working_dir + outfile
            f = h5.File(filename,'w')
            f['data'] = np.ones((20,200,200))*k
            k +=1
            f.close()

        f = h5.File(self.working_dir+'raw_file_4.h5','w')
        f['data'] = np.ones((19,200,200))*3
        self.fname.append('raw_file_4.h5')
        self.fname = [self.working_dir+ix for ix in self.fname]
        f.close()

    def test_percival_low_level(self):
        self.outfile = self.working_dir + 'percival.h5'
        with h5.File(self.outfile, 'w', libver='latest') as f:
            vdset_shape = (1,200,200)
            num = h5.h5s.UNLIMITED
            vdset_max_shape = (num,)+vdset_shape[1:]
            virt_dspace = h5.h5s.create_simple(vdset_shape, vdset_max_shape)
            dcpl = h5.h5p.create(h5.h5p.DATASET_CREATE)
            dcpl.set_fill_value(np.array([-1]))
            # Create the source dataset dataspace
            k = 0
            for foo in self.fname:
                in_data = h5.File(foo, 'r')['data']
                src_shape = in_data.shape
                max_src_shape = (num,)+src_shape[1:]
                in_data.file.close()
                src_dspace = h5.h5s.create_simple(src_shape, max_src_shape)
                # Select the source dataset hyperslab
                src_dspace.select_hyperslab(start=(0, 0, 0),
                                            stride=(1,1,1),
                                            count=(num, 1, 1),
                                            block=(1,)+src_shape[1:])

                virt_dspace.select_hyperslab(start=(k, 0, 0),
                                             stride=(4,1,1),
                                             count=(num, 1, 1),
                                             block=(1,)+src_shape[1:])

                dcpl.set_virtual(virt_dspace, foo.encode('utf-8'), b'data', src_dspace)
                k+=1

            # Create the virtual dataset
            dset = h5.h5d.create(f.id, name=b"data", tid=h5.h5t.NATIVE_INT16, space=virt_dspace, dcpl=dcpl)

            f = h5.File(self.outfile,'r')
            sh = f['data'].shape
            line = f['data'][:8,100,100]
            foo = np.array(2*list(range(4)))
            f.close()
            self.assertEqual(sh,(79,200,200),)
            np.testing.assert_array_equal(line,foo)

    def tearDown(self):
        import os
        for f in self.fname:
            os.remove(f)
        os.remove(self.outfile)


def test_virtual_prefix(tmp_path):
    (tmp_path / 'a').mkdir()
    (tmp_path / 'b').mkdir()
    src_file = h5.File(tmp_path / 'a' / 'src.h5', 'w')
    src_file['data'] = np.arange(10)

    vds_file = h5.File(tmp_path / 'b' / 'vds.h5', 'w')
    layout = h5.VirtualLayout(shape=(10,), dtype=np.int64)
    layout[:] = h5.VirtualSource('src.h5', 'data', shape=(10,))
    vds_file.create_virtual_dataset('data', layout, fillvalue=-1)

    # Path doesn't resolve
    np.testing.assert_array_equal(vds_file['data'], np.full(10, fill_value=-1))

    path_a = bytes(tmp_path / 'a')
    dapl = h5.h5p.create(h5.h5p.DATASET_ACCESS)
    dapl.set_virtual_prefix(path_a)
    vds_id = h5.h5d.open(vds_file.id, b'data', dapl=dapl)
    vds = h5.Dataset(vds_id)

    # Now it should find the source file and read the data correctly
    np.testing.assert_array_equal(vds[:], np.arange(10))
    # Check that get_virtual_prefix gives back what we put in
    assert vds.id.get_access_plist().get_virtual_prefix() == path_a
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1696411274.0
h5py-3.13.0/h5py/tests/test_vds/test_virtual_source.py0000644000175000017500000001362114507227212023727 0ustar00takluyvertakluyverfrom ..common import ut
import h5py as h5
import numpy as np


class TestVirtualSource(ut.TestCase):
    def test_full_slice(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[:,:,:]
        self.assertEqual(dataset.shape,sliced.shape)

    # def test_full_slice_inverted(self):
    #     dataset = h5.VirtualSource('test','test',(20,30,30))
    #     sliced = dataset[:,:,::-1]
    #     self.assertEqual(dataset.shape,sliced.shape)
    #
    # def test_subsampled_slice_inverted(self):
    #     dataset = h5.VirtualSource('test','test',(20,30,30))
    #     sliced = dataset[:,:,::-2]
    #     self.assertEqual((20,30,15),sliced.shape)

    def test_integer_indexed(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[5,:,:]
        self.assertEqual((30,30),sliced.shape)

    def test_integer_single_indexed(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[5]
        self.assertEqual((30,30),sliced.shape)

    def test_two_integer_indexed(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[5,:,10]
        self.assertEqual((30,),sliced.shape)

    def test_single_range(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[5:10,:,:]
        self.assertEqual((5,)+dataset.shape[1:],sliced.shape)

    def test_shape_calculation_positive_step(self):
        dataset = h5.VirtualSource('test','test',(20,))
        cmp = []
        for i in range(5):
            d = dataset[2:12+i:3].shape[0]
            ref = np.arange(20)[2:12+i:3].size
            cmp.append(ref==d)
        self.assertEqual(5, sum(cmp))

    # def test_shape_calculation_positive_step_switched_start_stop(self):
    #     dataset = h5.VirtualSource('test','test',(20,))
    #     cmp = []
    #     for i in range(5):
    #         d = dataset[12+i:2:3].shape[0]
    #         ref = np.arange(20)[12+i:2:3].size
    #         cmp.append(ref==d)
    #     self.assertEqual(5, sum(cmp))
    #
    #
    # def test_shape_calculation_negative_step(self):
    #     dataset = h5.VirtualSource('test','test',(20,))
    #     cmp = []
    #     for i in range(5):
    #         d = dataset[12+i:2:-3].shape[0]
    #         ref = np.arange(20)[12+i:2:-3].size
    #         cmp.append(ref==d)
    #     self.assertEqual(5, sum(cmp))
    #
    # def test_shape_calculation_negative_step_switched_start_stop(self):
    #     dataset = h5.VirtualSource('test','test',(20,))
    #     cmp = []
    #     for i in range(5):
    #         d = dataset[2:12+i:-3].shape[0]
    #         ref = np.arange(20)[2:12+i:-3].size
    #         cmp.append(ref==d)
    #     self.assertEqual(5, sum(cmp))


    def test_double_range(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[5:10,:,20:25]
        self.assertEqual((5,30,5),sliced.shape)

    def test_double_strided_range(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[6:12:2,:,20:26:3]
        self.assertEqual((3,30,2,),sliced.shape)

    # def test_double_strided_range_inverted(self):
    #     dataset = h5.VirtualSource('test','test',(20,30,30))
    #     sliced = dataset[12:6:-2,:,26:20:-3]
    #     self.assertEqual((3,30,2),sliced.shape)

    def test_negative_start_index(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[-10:16]
        self.assertEqual((6,30,30),sliced.shape)

    def test_negative_stop_index(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[10:-4]
        self.assertEqual((6,30,30),sliced.shape)

    def test_negative_start_and_stop_index(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[-10:-4]
        self.assertEqual((6,30,30),sliced.shape)

    # def test_negative_start_and_stop_and_stride_index(self):
    #     dataset = h5.VirtualSource('test','test',(20,30,30))
    #     sliced = dataset[-4:-10:-2]
    #     self.assertEqual((3,30,30),sliced.shape)
#
    def test_ellipsis(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[...]
        self.assertEqual(dataset.shape,sliced.shape)

    def test_ellipsis_end(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[0:1,...]
        self.assertEqual((1,)+dataset.shape[1:],sliced.shape)

    def test_ellipsis_start(self):
        dataset = h5.VirtualSource('test','test',(20,30,30))
        sliced = dataset[...,0:1]
        self.assertEqual(dataset.shape[:-1]+(1,),sliced.shape)

    def test_ellipsis_sandwich(self):
        dataset = h5.VirtualSource('test','test',(20,30,30,40))
        sliced = dataset[0:1,...,5:6]
        self.assertEqual((1,)+dataset.shape[1:-1]+(1,),sliced.shape)

    def test_integer_shape(self):
        dataset = h5.VirtualSource('test','test', 20)
        self.assertEqual(dataset.shape, (20,))

    def test_integer_maxshape(self):
        dataset = h5.VirtualSource('test','test', 20, maxshape=30)
        self.assertEqual(dataset.maxshape, (30,))

    def test_extra_args(self):
        with h5.File(name='f1', driver='core',
                     backing_store=False, mode='w') as ftest:
            ftest['a'] = [1, 2, 3]
            a = ftest['a']

            with self.assertRaises(TypeError):
                h5.VirtualSource(a, 'b')
            with self.assertRaises(TypeError):
                h5.VirtualSource(a, shape=(1, ))
            with self.assertRaises(TypeError):
                h5.VirtualSource(a, maxshape=(None,))
            with self.assertRaises(TypeError):
                h5.VirtualSource(a, dtype=int)

    def test_repeated_slice(self):
        dataset = h5.VirtualSource('test', 'test', (20, 30, 30))
        sliced = dataset[5:10, :, :]
        with self.assertRaises(RuntimeError):
            sliced[:, :4]


if __name__ == "__main__":
    ut.main()
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/h5py/utils.pxd0000644000175000017500000000154614045746670016146 0ustar00takluyvertakluyver# cython: language_level=3
# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from .defs cimport *
from numpy cimport ndarray
cdef void* emalloc(size_t size) except? NULL
cdef void efree(void* ptr)

cpdef int check_numpy_read(ndarray arr, hid_t space_id=*) except -1
cpdef int check_numpy_write(ndarray arr, hid_t space_id=*) except -1

cdef int convert_tuple(object tuple, hsize_t *dims, hsize_t rank) except -1
cdef object convert_dims(hsize_t* dims, hsize_t rank)

cdef int require_tuple(object tpl, int none_allowed, int size, char* name) except -1

cdef object create_numpy_hsize(int rank, hsize_t* dims)
cdef object create_hsize_array(object arr)
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/h5py/utils.pyx0000644000175000017500000001252414755054371016167 0ustar00takluyvertakluyver# cython: language_level=3
# cython: profile=False

# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2019 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

from numpy cimport ndarray, import_array,\
                   NPY_UINT16, NPY_UINT32, NPY_UINT64,  npy_intp,\
                   PyArray_SimpleNew, PyArray_FROM_OTF,\
                   NPY_ARRAY_C_CONTIGUOUS, NPY_ARRAY_NOTSWAPPED, NPY_ARRAY_FORCECAST

# Initialization
import_array()

# === Exception-aware memory allocation =======================================

cdef inline void* emalloc(size_t size) except? NULL:
    """Wrapper for malloc(size) with the following behavior:

    1. Always returns NULL for emalloc(0)
    2. Raises MemoryError if allocation fails and returns NULL

    This function expects to be called with the GIL held

    :param size: Size of the memory (in bytes) to allocate
    :return: address of the allocated memory (NULL for size 0 or if allocation fails)
    """
    cdef void *retval = NULL

    if size == 0:
        return NULL

    retval = malloc(size)
    if retval == NULL:
        errmsg = b"Can't malloc %d bytes" % size
        PyErr_SetString(MemoryError, errmsg)
    return retval


cdef inline void efree(void* what):
    free(what)


def _test_emalloc(size_t size):
    """Stub to simplify unit tests"""
    cdef void* mem
    mem = emalloc(size)
    if size == 0:
        assert mem == NULL
    efree(mem)
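
# Illustrative usage from Python (a sketch; h5py.utils is the compiled form
# of this module):
#
#     from h5py import utils
#     utils._test_emalloc(1024)   # allocate and free 1 KiB
#     utils._test_emalloc(0)      # checks that emalloc(0) returns NULL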


# === Testing of NumPy arrays =================================================

cdef int check_numpy(ndarray arr, hid_t space_id, int write):
    # -1 if exception, NOT AUTOMATICALLY CHECKED

    cdef int required_flags
    cdef hsize_t arr_rank
    cdef hsize_t space_rank
    cdef int i

    if arr is None:
        PyErr_SetString(TypeError, b"Array is None")
        return -1

    # Validate array flags

    if write:
        if not (arr.flags["C_CONTIGUOUS"] and arr.flags["WRITEABLE"]):
            PyErr_SetString(TypeError, b"Array must be C-contiguous and writable")
            return -1
    else:
        if not arr.flags["C_CONTIGUOUS"]:
            PyErr_SetString(TypeError, b"Array must be C-contiguous")
            return -1

    return 1

cpdef int check_numpy_write(ndarray arr, hid_t space_id=-1) except -1:
    return check_numpy(arr, space_id, 1)

cpdef int check_numpy_read(ndarray arr, hid_t space_id=-1) except -1:
    return check_numpy(arr, space_id, 0)

# === Conversion between HDF5 buffers and tuples ==============================

cdef int convert_tuple(object tpl, hsize_t *dims, hsize_t rank) except -1:
    # Convert a Python tuple to an hsize_t array.  You must allocate
    # the array yourself and pass both it and the size to this function.
    # Returns 0 on success, -1 on failure and raises an exception.
    cdef int i

    if len(tpl) != rank:
        raise ValueError("Tuple length incompatible with array")

    try:
        for i in range(rank):
            dims[i] = tpl[i]
    except TypeError:
        raise TypeError("Can't convert element %d (%s) to hsize_t" % (i, tpl[i]))

    return 0
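
# Typical call pattern (an illustrative sketch; the caller owns the buffer):
#
#     cdef hsize_t* dims = <hsize_t*>emalloc(sizeof(hsize_t) * rank)
#     try:
#         convert_tuple(shape_tuple, dims, rank)
#         ...  # pass dims to an HDF5 routine expecting hsize_t*
#     finally:
#         efree(dims)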

cdef object convert_dims(hsize_t* dims, hsize_t rank):
    # Convert an hsize_t array to a Python tuple of ints.

    cdef list dims_list
    cdef int i
    dims_list = []

    for i in range(rank):
        dims_list.append(int(dims[i]))

    return tuple(dims_list)


cdef object create_numpy_hsize(int rank, hsize_t* dims):
    # Create an empty Numpy array which can hold HDF5 hsize_t entries

    cdef int typecode
    cdef npy_intp* dims_npy
    cdef ndarray arr
    cdef int i

    if sizeof(hsize_t) == 2:
        typecode = NPY_UINT16
    elif sizeof(hsize_t) == 4:
        typecode = NPY_UINT32
    elif sizeof(hsize_t) == 8:
        typecode = NPY_UINT64
    else:
        raise RuntimeError("Can't map hsize_t %d to Numpy typecode" % sizeof(hsize_t))

    dims_npy = <npy_intp*>emalloc(sizeof(npy_intp)*rank)

    try:
        for i in range(rank):
            dims_npy[i] = dims[i]
        arr = PyArray_SimpleNew(rank, dims_npy, typecode)
    finally:
        efree(dims_npy)

    return arr
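
# Illustrative example: on a typical build where sizeof(hsize_t) == 8,
# create_numpy_hsize(2, dims) returns an uninitialized uint64 ndarray of
# shape (dims[0], dims[1]).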

cdef object create_hsize_array(object arr):
    # Create a NumPy array of hsize_t uints initialized to an existing array

    cdef int typecode
    cdef ndarray outarr

    if sizeof(hsize_t) == 2:
        typecode = NPY_UINT16
    elif sizeof(hsize_t) == 4:
        typecode = NPY_UINT32
    elif sizeof(hsize_t) == 8:
        typecode = NPY_UINT64
    else:
        raise RuntimeError("Can't map hsize_t %d to Numpy typecode" % sizeof(hsize_t))

    return PyArray_FROM_OTF(arr, typecode,
        NPY_ARRAY_C_CONTIGUOUS
        | NPY_ARRAY_NOTSWAPPED
        | NPY_ARRAY_FORCECAST)


# === Argument testing ========================================================

cdef int require_tuple(object tpl, int none_allowed, int size, char* name) except -1:
    # Ensure that tpl is in fact a tuple, or None if none_allowed is nonzero.
    # If size >= 0, also ensure that the length matches.
    # Otherwise raises ValueError

    if (tpl is None and none_allowed) or \
      (isinstance(tpl, tuple) and (size < 0 or len(tpl) == size)):
        return 1

    nmsg = b"" if size < 0 else b" of size %d" % size
    smsg = b"" if not none_allowed else b" or None"

    msg = b"%s must be a tuple%s%s." % (name, smsg, nmsg)
    PyErr_SetString(ValueError, msg)
    return -1
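
# Example (illustrative): accept a 'chunks' argument that must be None or a
# 3-tuple, raising ValueError otherwise:
#
#     require_tuple(chunks, 1, 3, b"chunks")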
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739891098.0
h5py-3.13.0/h5py/version.py0000644000175000017500000000356714755120632016325 0ustar00takluyvertakluyver# This file is part of h5py, a Python interface to the HDF5 library.
#
# http://www.h5py.org
#
# Copyright 2008-2013 Andrew Collette and contributors
#
# License:  Standard 3-clause BSD; see "license.txt" for full license terms
#           and contributor agreement.

"""
    Versioning module for h5py.
"""

from collections import namedtuple
from . import h5 as _h5
import sys
import numpy

# All fields should be integers except pre (a string), since fully validating
# versions is more than our use case needs
_H5PY_VERSION_CLS = namedtuple("_H5PY_VERSION_CLS",
                               "major minor bugfix pre post dev")

hdf5_built_version_tuple = _h5.HDF5_VERSION_COMPILED_AGAINST

version_tuple = _H5PY_VERSION_CLS(3, 13, 0, None, None, None)

version = "{0.major:d}.{0.minor:d}.{0.bugfix:d}".format(version_tuple)
if version_tuple.pre is not None:
    version += version_tuple.pre
if version_tuple.post is not None:
    version += ".post{0.post:d}".format(version_tuple)
if version_tuple.dev is not None:
    version += ".dev{0.dev:d}".format(version_tuple)
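
# For illustration: (3, 13, 0, None, None, None) -> "3.13.0", while a
# hypothetical (3, 14, 0, "rc1", None, 1) -> "3.14.0rc1.dev1".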

hdf5_version_tuple = _h5.get_libversion()
hdf5_version = "%d.%d.%d" % hdf5_version_tuple

api_version_tuple = (1,8)
api_version = "%d.%d" % api_version_tuple

info = """\
Summary of the h5py configuration
---------------------------------

h5py    %(h5py)s
HDF5    %(hdf5)s
Python  %(python)s
sys.platform    %(platform)s
sys.maxsize     %(maxsize)s
numpy   %(numpy)s
cython (built with) %(cython_version)s
numpy (built against) %(numpy_build_version)s
HDF5 (built against) %(hdf5_build_version)s
""" % {
    'h5py': version,
    'hdf5': hdf5_version,
    'python': sys.version,
    'platform': sys.platform,
    'maxsize': sys.maxsize,
    'numpy': numpy.__version__,
    'cython_version': _h5.CYTHON_VERSION_COMPILED_WITH,
    'numpy_build_version': _h5.NUMPY_VERSION_COMPILED_AGAINST,
    'hdf5_build_version': "%d.%d.%d" % hdf5_built_version_tuple,
}
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.586882
h5py-3.13.0/h5py.egg-info/0000755000175000017500000000000014755127217015753 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739894414.0
h5py-3.13.0/h5py.egg-info/PKG-INFO0000644000175000017500000000464414755127216017057 0ustar00takluyvertakluyverMetadata-Version: 2.1
Name: h5py
Version: 3.13.0
Summary: Read and write HDF5 files from Python
Author-email: Andrew Collette 
Maintainer-email: Thomas Kluyver , Thomas A Caswell 
License: BSD-3-Clause
Project-URL: Homepage, https://www.h5py.org/
Project-URL: Source, https://github.com/h5py/h5py
Project-URL: Documentation, https://docs.h5py.org/en/stable/
Project-URL: Release notes, https://docs.h5py.org/en/stable/whatsnew/index.html
Project-URL: Discussion forum, https://forum.hdfgroup.org/c/hdf5/h5py
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: Unix
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Programming Language :: Cython
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Database
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: numpy>=1.19.3

The h5py package provides both a high- and low-level interface to the HDF5
library from Python. The low-level interface is intended to be a complete
wrapping of the HDF5 API, while the high-level component supports  access to
HDF5 files, datasets and groups using established Python and NumPy concepts.

A strong emphasis on automatic conversion between Python (Numpy) datatypes and
data structures and their HDF5 equivalents vastly simplifies the process of
reading and writing data from Python.

Wheels are provided for several popular platforms, with an included copy of
the HDF5 library (usually the latest version when h5py is released).

You can also `build h5py from source
`_
with any HDF5 stable release from version 1.10.4 onwards, although naturally new
HDF5 versions released after this version of h5py may not work.
Odd-numbered minor versions of HDF5 (e.g. 1.13) are experimental, and may not
be supported.
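
As a minimal sketch of the high-level interface described above (the file and
dataset names here are purely illustrative)::

    import h5py
    import numpy as np

    with h5py.File("example.h5", "w") as f:
        f.create_dataset("data", data=np.arange(100))

    with h5py.File("example.h5", "r") as f:
        print(f["data"][:10])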
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739894414.0
h5py-3.13.0/h5py.egg-info/SOURCES.txt0000644000175000017500000001150314755127216017636 0ustar00takluyvertakluyverLICENSE
MANIFEST.in
README.rst
api_gen.py
asv.conf.json
dev-install.sh
pylintrc
pyproject.toml
pytest.ini
setup.py
setup_build.py
setup_configure.py
tox.ini
benchmarks/__init__.py
benchmarks/benchmark_slicing.py
benchmarks/benchmarks.py
docs/Makefile
docs/build.rst
docs/conf.py
docs/config.rst
docs/contributing.rst
docs/faq.rst
docs/index.rst
docs/licenses.rst
docs/mpi.rst
docs/quick.rst
docs/refs.rst
docs/related_projects.rst
docs/release_guide.rst
docs/requirements-rtd.txt
docs/special.rst
docs/strings.rst
docs/swmr.rst
docs/vds.rst
docs/high/attr.rst
docs/high/dataset.rst
docs/high/dims.rst
docs/high/file.rst
docs/high/group.rst
docs/high/lowlevel.rst
docs/whatsnew/2.0.rst
docs/whatsnew/2.1.rst
docs/whatsnew/2.10.rst
docs/whatsnew/2.2.rst
docs/whatsnew/2.3.rst
docs/whatsnew/2.4.rst
docs/whatsnew/2.5.rst
docs/whatsnew/2.6.rst
docs/whatsnew/2.7.1.rst
docs/whatsnew/2.7.rst
docs/whatsnew/2.8.rst
docs/whatsnew/2.9.rst
docs/whatsnew/3.0.rst
docs/whatsnew/3.1.rst
docs/whatsnew/3.10.rst
docs/whatsnew/3.11.rst
docs/whatsnew/3.12.rst
docs/whatsnew/3.13.rst
docs/whatsnew/3.2.rst
docs/whatsnew/3.3.rst
docs/whatsnew/3.4.rst
docs/whatsnew/3.5.rst
docs/whatsnew/3.6.rst
docs/whatsnew/3.7.rst
docs/whatsnew/3.8.rst
docs/whatsnew/3.9.rst
docs/whatsnew/index.rst
docs_api/Makefile
docs_api/automod.py
docs_api/conf.py
docs_api/h5.rst
docs_api/h5a.rst
docs_api/h5ac.rst
docs_api/h5d.rst
docs_api/h5ds.rst
docs_api/h5f.rst
docs_api/h5fd.rst
docs_api/h5g.rst
docs_api/h5i.rst
docs_api/h5l.rst
docs_api/h5o.rst
docs_api/h5p.rst
docs_api/h5pl.rst
docs_api/h5r.rst
docs_api/h5s.rst
docs_api/h5t.rst
docs_api/h5z.rst
docs_api/index.rst
docs_api/objects.rst
docs_api/_static/.keep
examples/bytesio.py
examples/collective_io.py
examples/dataset_concatenation.py
examples/dual_pco_edge.py
examples/eiger_use_case.py
examples/excalibur_detector_modules.py
examples/multiblockslice_interleave.py
examples/multiprocessing_example.py
examples/percival_use_case.py
examples/store_and_retrieve_units_example.py
examples/store_datetimes.py
examples/swmr_inotify_example.py
examples/swmr_multiprocess.py
examples/threading_example.py
examples/vds_simple.py
examples/write-direct-compressed.py
h5py/__init__.py
h5py/_conv.pxd
h5py/_conv.pyx
h5py/_errors.pxd
h5py/_errors.pyx
h5py/_locks.pxi
h5py/_objects.pxd
h5py/_objects.pyx
h5py/_proxy.pxd
h5py/_proxy.pyx
h5py/_selector.pyx
h5py/api_compat.h
h5py/api_functions.txt
h5py/api_types_ext.pxd
h5py/api_types_hdf5.pxd
h5py/h5.pxd
h5py/h5.pyx
h5py/h5a.pxd
h5py/h5a.pyx
h5py/h5ac.pxd
h5py/h5ac.pyx
h5py/h5d.pxd
h5py/h5d.pyx
h5py/h5ds.pxd
h5py/h5ds.pyx
h5py/h5f.pxd
h5py/h5f.pyx
h5py/h5fd.pxd
h5py/h5fd.pyx
h5py/h5g.pxd
h5py/h5g.pyx
h5py/h5i.pxd
h5py/h5i.pyx
h5py/h5l.pxd
h5py/h5l.pyx
h5py/h5o.pxd
h5py/h5o.pyx
h5py/h5p.pxd
h5py/h5p.pyx
h5py/h5pl.pxd
h5py/h5pl.pyx
h5py/h5py_warnings.py
h5py/h5r.pxd
h5py/h5r.pyx
h5py/h5s.pxd
h5py/h5s.pyx
h5py/h5t.pxd
h5py/h5t.pyx
h5py/h5z.pxd
h5py/h5z.pyx
h5py/ipy_completer.py
h5py/utils.pxd
h5py/utils.pyx
h5py/version.py
h5py.egg-info/PKG-INFO
h5py.egg-info/SOURCES.txt
h5py.egg-info/dependency_links.txt
h5py.egg-info/requires.txt
h5py.egg-info/top_level.txt
h5py/_hl/__init__.py
h5py/_hl/attrs.py
h5py/_hl/base.py
h5py/_hl/compat.py
h5py/_hl/dataset.py
h5py/_hl/datatype.py
h5py/_hl/dims.py
h5py/_hl/files.py
h5py/_hl/filters.py
h5py/_hl/group.py
h5py/_hl/selections.py
h5py/_hl/selections2.py
h5py/_hl/vds.py
h5py/tests/__init__.py
h5py/tests/common.py
h5py/tests/conftest.py
h5py/tests/test_attribute_create.py
h5py/tests/test_attrs.py
h5py/tests/test_attrs_data.py
h5py/tests/test_base.py
h5py/tests/test_big_endian_file.py
h5py/tests/test_completions.py
h5py/tests/test_dataset.py
h5py/tests/test_dataset_getitem.py
h5py/tests/test_dataset_swmr.py
h5py/tests/test_datatype.py
h5py/tests/test_dimension_scales.py
h5py/tests/test_dims_dimensionproxy.py
h5py/tests/test_dtype.py
h5py/tests/test_errors.py
h5py/tests/test_file.py
h5py/tests/test_file2.py
h5py/tests/test_file_alignment.py
h5py/tests/test_file_image.py
h5py/tests/test_filters.py
h5py/tests/test_group.py
h5py/tests/test_h5.py
h5py/tests/test_h5d_direct_chunk.py
h5py/tests/test_h5f.py
h5py/tests/test_h5o.py
h5py/tests/test_h5p.py
h5py/tests/test_h5pl.py
h5py/tests/test_h5s.py
h5py/tests/test_h5t.py
h5py/tests/test_h5z.py
h5py/tests/test_objects.py
h5py/tests/test_ros3.py
h5py/tests/test_selections.py
h5py/tests/test_slicing.py
h5py/tests/data_files/__init__.py
h5py/tests/data_files/vlen_string_dset.h5
h5py/tests/data_files/vlen_string_dset_utc.h5
h5py/tests/data_files/vlen_string_s390x.h5
h5py/tests/test_vds/__init__.py
h5py/tests/test_vds/test_highlevel_vds.py
h5py/tests/test_vds/test_lowlevel_vds.py
h5py/tests/test_vds/test_virtual_source.py
licenses/hdf5.txt
licenses/license.txt
licenses/pytables.txt
licenses/python.txt
licenses/stdint.txt
lzf/LICENSE.txt
lzf/README.txt
lzf/example.c
lzf/lzf_filter.c
lzf/lzf_filter.h
lzf/lzf/lzf.h
lzf/lzf/lzfP.h
lzf/lzf/lzf_c.c
lzf/lzf/lzf_d.c././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739894414.0
h5py-3.13.0/h5py.egg-info/dependency_links.txt0000644000175000017500000000000114755127216022020 0ustar00takluyvertakluyver
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739894414.0
h5py-3.13.0/h5py.egg-info/requires.txt0000644000175000017500000000001614755127216020347 0ustar00takluyvertakluyvernumpy>=1.19.3
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739894414.0
h5py-3.13.0/h5py.egg-info/top_level.txt0000644000175000017500000000000514755127216020477 0ustar00takluyvertakluyverh5py
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5848823
h5py-3.13.0/licenses/0000755000175000017500000000000014755127217015201 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/licenses/hdf5.txt0000644000175000017500000000726614045746670016605 0ustar00takluyvertakluyverHDF5 (Hierarchical Data Format 5) Software Library and Utilities
Copyright 2006-2007 by The HDF Group (THG).

NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
Copyright 1998-2006 by the Board of Trustees of the University of Illinois.

All rights reserved.

Contributors: National Center for Supercomputing Applications (NCSA)
at the University of Illinois, Fortner Software, Unidata Program
Center (netCDF), The Independent JPEG Group (JPEG), Jean-loup Gailly
and Mark Adler (gzip), and Digital Equipment Corporation (DEC).

Redistribution and use in source and binary forms, with or without
modification, are permitted for any purpose (including commercial
purposes) provided that the following conditions are met:

   1. Redistributions of source code must retain the above copyright
notice, this list of conditions, and the following disclaimer.
   2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions, and the following
disclaimer in the documentation and/or materials provided with the
distribution.
   3. In addition, redistributions of modified forms of the source or
binary code must carry prominent notices stating that the original
code was changed and the date of the change.
   4. All publications or advertising materials mentioning features or
use of this software are asked, but not required, to acknowledge that
it was developed by The HDF Group and by the National Center for
Supercomputing Applications at the University of Illinois at
Urbana-Champaign and credit the contributors.
   5. Neither the name of The HDF Group, the name of the University,
nor the name of any Contributor may be used to endorse or promote
products derived from this software without specific prior written
permission from THG, the University, or the Contributor, respectively.

DISCLAIMER: THIS SOFTWARE IS PROVIDED BY THE HDF GROUP (THG) AND THE
CONTRIBUTORS "AS IS" WITH NO WARRANTY OF ANY KIND, EITHER EXPRESSED OR
IMPLIED. In no event shall THG or the Contributors be liable for any
damages suffered by the users arising out of the use of this software,
even if advised of the possibility of such damage.

Portions of HDF5 were developed with support from the University of
California, Lawrence Livermore National Laboratory (UC LLNL). The
following statement applies to those portions of the product and must
be retained in any redistribution of source code, binaries,
documentation, and/or accompanying materials:

This work was partially produced at the University of California,
Lawrence Livermore National Laboratory (UC LLNL) under contract
no. W-7405-ENG-48 (Contract 48) between the U.S. Department of Energy
(DOE) and The Regents of the University of California (University) for
the operation of UC LLNL.

DISCLAIMER: This work was prepared as an account of work sponsored by
an agency of the United States Government. Neither the United States
Government nor the University of California nor any of their
employees, makes any warranty, express or implied, or assumes any
liability or responsibility for the accuracy, completeness, or
usefulness of any information, apparatus, product, or process
disclosed, or represents that its use would not infringe privately-
owned rights. Reference herein to any specific commercial products,
process, or service by trade name, trademark, manufacturer, or
otherwise, does not necessarily constitute or imply its endorsement,
recommendation, or favoring by the United States Government or the
University of California. The views and opinions of authors expressed
herein do not necessarily state or reflect those of the United States
Government or the University of California, and shall not be used for
advertising or product endorsement purposes.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/licenses/license.txt0000644000175000017500000000331314045746670017366 0ustar00takluyvertakluyverCopyright Notice and Statement for the h5py Project
===================================================

    Copyright (c) 2008-2013 Andrew Collette and contributors
    http://www.h5py.org
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:

    a. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    b. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the
       distribution.

    c. Neither the name of the author nor the names of contributors may
       be used to endorse or promote products derived from this software
       without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/licenses/pytables.txt0000644000175000017500000000315014045746670017566 0ustar00takluyvertakluyverCopyright Notice and Statement for PyTables Software Library and Utilities:

Copyright (c) 2002, 2003, 2004  Francesc Altet
Copyright (c) 2005, 2006, 2007  Carabos Coop. V.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

a. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

b. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the
   distribution.

c. Neither the name of the Carabos Coop. V. nor the names of its
   contributors may be used to endorse or promote products derived
   from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/licenses/python.txt0000644000175000017500000000470714045746670017275 0ustar00takluyvertakluyverPython license
==============

#. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and
   the Individual or Organization ("Licensee") accessing and otherwise using Python
   Python 2.7.5 software in source or binary form and its associated documentation.

#. Subject to the terms and conditions of this License Agreement, PSF hereby
   grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
   analyze, test, perform and/or display publicly, prepare derivative works,
   distribute, and otherwise use Python Python 2.7.5 alone or in any derivative
   version, provided, however, that PSF's License Agreement and PSF's notice of
   copyright, i.e., "Copyright 2001-2013 Python Software Foundation; All Rights
   Reserved" are retained in Python Python 2.7.5 alone or in any derivative version
   prepared by Licensee.

#. In the event Licensee prepares a derivative work that is based on or
   incorporates Python Python 2.7.5 or any part thereof, and wants to make the
   derivative work available to others as provided herein, then Licensee hereby
   agrees to include in any such work a brief summary of the changes made to Python
   Python 2.7.5.

#. PSF is making Python Python 2.7.5 available to Licensee on an "AS IS" basis.
   PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED.  BY WAY OF
   EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
   WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE
   USE OF PYTHON Python 2.7.5 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

#. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON Python 2.7.5
   FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
   MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON Python 2.7.5, OR ANY DERIVATIVE
   THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

#. This License Agreement will automatically terminate upon a material breach of
   its terms and conditions.

#. Nothing in this License Agreement shall be deemed to create any relationship
   of agency, partnership, or joint venture between PSF and Licensee.  This License
   Agreement does not grant permission to use PSF trademarks or trade name in a
   trademark sense to endorse or promote products or services of Licensee, or any
   third party.

#. By copying, installing or otherwise using Python Python 2.7.5, Licensee agrees
   to be bound by the terms and conditions of this License Agreement.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/licenses/stdint.txt0000644000175000017500000000256214045746670017256 0ustar00takluyvertakluyverCopyright (c) 2006-2008 Alexander Chemeris

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice,
     this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.

  3. The name of the author may be used to endorse or promote products
     derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5858822
h5py-3.13.0/lzf/0000755000175000017500000000000014755127217014167 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/LICENSE.txt0000644000175000017500000000302414045746670016013 0ustar00takluyvertakluyverCopyright Notice and Statement for LZF filter

Copyright (c) 2008-2009 Andrew Collette
http://h5py.org
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

a. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

b. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the
   distribution.

c. Neither the name of the author nor the names of contributors may
   be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/lzf/README.txt0000644000175000017500000000637514675110407015672 0ustar00takluyvertakluyver===============================
LZF filter for HDF5, revision 3
===============================

The LZF filter provides high-speed compression with an acceptable compression
ratio: it is much faster than DEFLATE, at the cost of somewhat weaker
compression. It's appropriate for large datasets of low to moderate
complexity, for which some compression is much better than none, but for
which the speed of DEFLATE is unacceptable.

This filter has been tested against HDF5 versions 1.6.5 through 1.8.3.  It
is released under the BSD license (see LICENSE.txt for details).


Using the filter from HDF5
--------------------------

With HDF5 version 1.8.11 or later the filter can be loaded dynamically by the
HDF5 library.  The filter needs to be compiled as a plugin as described below
that is placed in the default plugin path /usr/local/hdf5/lib/plugin/.  The
plugin path can be overridden with the environment variable HDF5_PLUGIN_PATH.
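
From Python, the plugin search path can also be adjusted at runtime through
h5py's low-level h5pl wrapper, which wraps the corresponding HDF5 H5PL calls
(an illustrative sketch; the directory shown is hypothetical):

    import h5py
    h5py.h5pl.append(b"/opt/hdf5/plugins")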

With older HDF5 versions, or when statically linking the filter to your program,
the filter must be registered manually. There is exactly one new public function
declared in lzf_filter.h, with the following signature:

    int register_lzf(void)

Calling this will register the filter with the HDF5 library.  A non-negative
return value indicates success.  If the registration fails, an error is pushed
onto the current error stack and a negative value is returned.

It's strongly recommended to use the SHUFFLE filter with LZF, as it's
cheap, supported by all current versions of HDF5, and can significantly
improve the compression ratio.  An example C program ("example.c") is included
which demonstrates the proper use of the filter.
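
For comparison, the same shuffle + LZF combination through h5py's high-level
interface looks roughly like this (an illustrative sketch; file name, dataset
name and shape are made up):

    import h5py, numpy as np

    with h5py.File("compressed.h5", "w") as f:
        f.create_dataset("data", data=np.random.rand(100, 1000),
                         chunks=(1, 1000), compression="lzf", shuffle=True)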


Compiling
---------

The filter consists of a single .c file and header, along with an embedded
version of the LZF compression library.  Since the filter is stateless, it's
recommended to statically link the entire thing into your program; for
example:

    $ gcc -O2 lzf/*.c lzf_filter.c myprog.c -lhdf5 -o myprog

It can also be built as a shared library, although you will have to install
the resulting library somewhere the runtime linker can find it:

    $ gcc -O2 -fPIC -shared lzf/*.c lzf_filter.c -lhdf5 -o liblzf_filter.so

A similar procedure should be used for building C++ code.  As in these
examples, using option -O1 or higher is strongly recommended for increased
performance.

With HDF5 version 1.8.11 or later the filter can be dynamically loaded as a
plugin.  The filter is built as a shared library that is *not* linked against
the HDF5 library:

    $ gcc -O2 -fPIC -shared lzf/*.c lzf_filter.c -o liblzf_filter.so


Contact
-------

This filter is maintained as part of the HDF5 for Python (h5py) project.  The
goal of h5py is to provide access to the majority of the HDF5 C API and feature
set from Python.  h5py has included the LZF filter by default since
version 1.1.

* Downloads:  https://pypi.org/project/h5py/

* Issue tracker:  https://github.com/h5py/h5py

* Main web site and documentation:  http://h5py.org

* Discussion forum: https://forum.hdfgroup.org/c/hdf5/h5py


History of changes
------------------

Revision 3 (6/25/09)
    Fix issue with changed filter struct definition under HDF5 1.8.3.

Revision 2
    Minor speed enhancement.

Revision 1
    Initial release.
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/example.c0000644000175000017500000000525114045746670015773 0ustar00takluyvertakluyver/*
    Copyright (C) 2009 Andrew Collette
    http://h5py.org
    License: BSD (see LICENSE.txt)

    Example program demonstrating use of the LZF filter from C code.

    To compile this program:

    h5cc -DH5_USE_16_API lzf/*.c lzf_filter.c example.c -o example

    To run:

    $ ./example
    Success!
    $ h5ls -v test_lzf.hdf5
    Opened "test_lzf.hdf5" with sec2 driver.
    dset                     Dataset {100/100, 100/100, 100/100}
        Location:  0:1:0:976
        Links:     1
        Modified:  2009-02-15 16:35:11 PST
        Chunks:    {1, 100, 100} 40000 bytes
        Storage:   4000000 logical bytes, 174288 allocated bytes, 2295.05% utilization
        Filter-0:  shuffle-2 OPT {4}
        Filter-1:  lzf-32000 OPT {1, 261, 40000}
        Type:      native float
*/

#include <stdio.h>
#include "hdf5.h"
#include "lzf_filter.h"

#define SIZE 100*100*100
#define SHAPE {100,100,100}
#define CHUNKSHAPE {1,100,100}

int main(){

    static float data[SIZE];
    static float data_out[SIZE];
    const hsize_t shape[] = SHAPE;
    const hsize_t chunkshape[] = CHUNKSHAPE;
    int r, i;
    int return_code = 1;

    hid_t fid, sid, dset, plist = 0;

    for(i=0; i<SIZE; i++){
        data[i] = i;
    }

    /* ... filter registration, dataset creation and the write/read
       round-trip elided ... */

    if(dset>0)  H5Dclose(dset);
    if(sid>0)   H5Sclose(sid);
    if(plist>0) H5Pclose(plist);
    if(fid>0)   H5Fclose(fid);

    return return_code;
}
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1739894414.586882
h5py-3.13.0/lzf/lzf/0000755000175000017500000000000014755127217014762 5ustar00takluyvertakluyver././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/lzf/lzf.h0000644000175000017500000001046614045746670015737 0ustar00takluyvertakluyver/*
 * Copyright (c) 2000-2008 Marc Alexander Lehmann 
 *
 * Redistribution and use in source and binary forms, with or without modifica-
 * tion, are permitted provided that the following conditions are met:
 *
 *   1.  Redistributions of source code must retain the above copyright notice,
 *       this list of conditions and the following disclaimer.
 *
 *   2.  Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
 * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
 * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
 * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
 * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
 * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Alternatively, the contents of this file may be used under the terms of
 * the GNU General Public License ("GPL") version 2 or any later version,
 * in which case the provisions of the GPL are applicable instead of
 * the above. If you wish to allow the use of your version of this file
 * only under the terms of the GPL and not to allow others to use your
 * version of this file under the BSD license, indicate your decision
 * by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL. If you do not delete the
 * provisions above, a recipient may use your version of this file under
 * either the BSD or the GPL.
 */

#ifndef LZF_H
#define LZF_H

/***********************************************************************
**
**	lzf -- an extremely fast/free compression/decompression-method
**	http://liblzf.plan9.de/
**
**	This algorithm is believed to be patent-free.
**
***********************************************************************/

#define LZF_VERSION 0x0105 /* 1.5, API version */

/*
 * Compress in_len bytes stored at the memory block starting at
 * in_data and write the result to out_data, up to a maximum length
 * of out_len bytes.
 *
 * If the output buffer is not large enough or any error occurs return 0,
 * otherwise return the number of bytes used, which might be considerably
 * more than in_len (but less than 104% of the original size), so it
 * makes sense to always use out_len == in_len - 1, to ensure _some_
 * compression, and store the data uncompressed otherwise (with a flag, of
 * course).
 *
 * lzf_compress might use different algorithms on different systems and
 * even different runs, thus might result in different compressed strings
 * depending on the phase of the moon or similar factors. However, all
 * these strings are architecture-independent and will result in the
 * original data when decompressed using lzf_decompress.
 *
 * The buffers must not be overlapping.
 *
 * If the option LZF_STATE_ARG is enabled, an extra argument must be
 * supplied which is not reflected in this header file. Refer to lzfP.h
 * and lzf_c.c.
 *
 */
unsigned int
lzf_compress (const void *const in_data,  unsigned int in_len,
              void             *out_data, unsigned int out_len);
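
/*
 * Illustrative sketch (an addition, not part of the original liblzf
 * header): following the advice above, compress into a buffer one byte
 * smaller than the input and fall back to storing the data raw when
 * lzf_compress reports no gain.  The helper name and the "compressed"
 * flag are assumptions made for this example; it assumes in_len > 0 and
 * that out can hold at least in_len bytes.
 */
#include <string.h>

static unsigned int
pack_block (const void *in, unsigned int in_len, void *out, int *compressed)
{
  unsigned int n = lzf_compress (in, in_len, out, in_len - 1);

  if (n == 0)                 /* no gain: store the block uncompressed */
    {
      memcpy (out, in, in_len);
      *compressed = 0;
      return in_len;
    }

  *compressed = 1;            /* n < in_len bytes of compressed data   */
  return n;
}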

/*
 * Decompress data compressed with some version of the lzf_compress
 * function and stored at location in_data and length in_len. The result
 * will be stored at out_data up to a maximum of out_len characters.
 *
 * If the output buffer is not large enough to hold the decompressed
 * data, a 0 is returned and errno is set to E2BIG. Otherwise the number
 * of decompressed bytes (i.e. the original length of the data) is
 * returned.
 *
 * If an error in the compressed data is detected, a zero is returned and
 * errno is set to EINVAL.
 *
 * This function is very fast, about as fast as a copying loop.
 */
unsigned int
lzf_decompress (const void *const in_data,  unsigned int in_len,
                void             *out_data, unsigned int out_len);
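
/*
 * Illustrative sketch (an addition, not part of the original liblzf
 * header): decompressing when the original size is not known in advance.
 * Grow the output buffer and retry while lzf_decompress reports E2BIG;
 * the doubling policy and helper name are arbitrary choices made for this
 * example.
 */
#include <errno.h>
#include <stdlib.h>

static unsigned int
unpack_block (const void *in, unsigned int in_len,
              void **out_p, unsigned int *out_cap)
{
  for (;;)
    {
      unsigned int n = lzf_decompress (in, in_len, *out_p, *out_cap);

      if (n != 0)
        return n;                      /* success: n is the original length */

      if (errno != E2BIG)
        return 0;                      /* EINVAL: corrupted input           */

      {
        void *grown = realloc (*out_p, *out_cap * 2);
        if (grown == NULL)
          return 0;                    /* out of memory                     */
        *out_p = grown;
        *out_cap *= 2;
      }
    }
}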

#endif
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/lzf/lzfP.h0000644000175000017500000001250514045746670016053 0ustar00takluyvertakluyver/*
 * Copyright (c) 2000-2007 Marc Alexander Lehmann 
 *
 * Redistribution and use in source and binary forms, with or without modifica-
 * tion, are permitted provided that the following conditions are met:
 *
 *   1.  Redistributions of source code must retain the above copyright notice,
 *       this list of conditions and the following disclaimer.
 *
 *   2.  Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
 * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
 * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
 * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
 * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
 * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Alternatively, the contents of this file may be used under the terms of
 * the GNU General Public License ("GPL") version 2 or any later version,
 * in which case the provisions of the GPL are applicable instead of
 * the above. If you wish to allow the use of your version of this file
 * only under the terms of the GPL and not to allow others to use your
 * version of this file under the BSD license, indicate your decision
 * by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL. If you do not delete the
 * provisions above, a recipient may use your version of this file under
 * either the BSD or the GPL.
 */

#ifndef LZFP_h
#define LZFP_h

#define STANDALONE 1 /* at the moment, this is ok. */

#ifndef STANDALONE
# include "lzf.h"
#endif

/*
 * Size of hashtable is (1 << HLOG) * sizeof (char *)
 * decompression is independent of the hash table size
 * the difference between 15 and 14 is very small
 * for small blocks (and 14 is usually a bit faster).
 * For a low-memory/faster configuration, use HLOG == 13;
 * For best compression, use 15 or 16 (or more, up to 23).
 */
#ifndef HLOG
# define HLOG 17  /* Avoid pathological case at HLOG=16   A.C. 2/15/09 */
#endif

/*
 * Sacrifice very little compression quality in favour of compression speed.
 * This gives almost the same compression as the default code, and is
 * (very roughly) 15% faster. This is the preferred mode of operation.
 */
#ifndef VERY_FAST
# define VERY_FAST 1
#endif

/*
 * Sacrifice some more compression quality in favour of compression speed.
 * (roughly 1-2% worse compression for large blocks and
 * 9-10% for small, redundant, blocks and >>20% better speed in both cases)
 * In short: when in need for speed, enable this for binary data,
 * possibly disable this for text data.
 */
#ifndef ULTRA_FAST
# define ULTRA_FAST 1
#endif

/*
 * Unconditionally aligning does not cost very much, so do it if unsure
 */
#ifndef STRICT_ALIGN
# define STRICT_ALIGN !(defined(__i386) || defined (__amd64))
#endif

/*
 * You may choose to pre-set the hash table (might be faster on some
 * modern cpus and large (>>64k) blocks, and also makes compression
 * deterministic/repeatable when the configuration otherwise is the same).
 */
#ifndef INIT_HTAB
# define INIT_HTAB 0
#endif

/* =======================================================================
    Changing things below this line may break the HDF5 LZF filter.
    A.C. 2/15/09
   =======================================================================
*/

/*
 * Avoid assigning values to the errno variable? For some embedding purposes
 * (the Linux kernel, for example), this is necessary. NOTE: this breaks
 * the documentation in lzf.h.
 */
#ifndef AVOID_ERRNO
# define AVOID_ERRNO 0
#endif

/*
 * Whether to pass the LZF_STATE variable as argument, or allocate it
 * on the stack. For small-stack environments, define this to 1.
 * NOTE: this breaks the prototype in lzf.h.
 */
#ifndef LZF_STATE_ARG
# define LZF_STATE_ARG 0
#endif

/*
 * Whether to add extra checks for input validity in lzf_decompress
 * and return EINVAL if the input stream has been corrupted. This
 * only shields against overflowing the input buffer and will not
 * detect most corrupted streams.
 * This check is not normally noticeable on modern hardware
 * (<1% slowdown), but might slow down older cpus considerably.
 */

#ifndef CHECK_INPUT
# define CHECK_INPUT 1
#endif

/*****************************************************************************/
/* nothing should be changed below */

typedef unsigned char u8;

typedef const u8 *LZF_STATE[1 << (HLOG)];

#if !STRICT_ALIGN
/* for unaligned accesses we need a 16 bit datatype. */
# include <limits.h>
# if USHRT_MAX == 65535
    typedef unsigned short u16;
# elif UINT_MAX == 65535
    typedef unsigned int u16;
# else
#  undef STRICT_ALIGN
#  define STRICT_ALIGN 1
# endif
#endif

#if ULTRA_FAST
# if defined(VERY_FAST)
#  undef VERY_FAST
# endif
#endif

#if INIT_HTAB
# ifdef __cplusplus
#  include <cstring>
# else
#  include <string.h>
# endif
#endif

#endif
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/lzf/lzf_c.c0000644000175000017500000002145214045746670016231 0ustar00takluyvertakluyver/*
 * Copyright (c) 2000-2008 Marc Alexander Lehmann 
 *
 * Redistribution and use in source and binary forms, with or without modifica-
 * tion, are permitted provided that the following conditions are met:
 *
 *   1.  Redistributions of source code must retain the above copyright notice,
 *       this list of conditions and the following disclaimer.
 *
 *   2.  Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
 * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
 * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
 * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
 * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
 * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Alternatively, the contents of this file may be used under the terms of
 * the GNU General Public License ("GPL") version 2 or any later version,
 * in which case the provisions of the GPL are applicable instead of
 * the above. If you wish to allow the use of your version of this file
 * only under the terms of the GPL and not to allow others to use your
 * version of this file under the BSD license, indicate your decision
 * by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL. If you do not delete the
 * provisions above, a recipient may use your version of this file under
 * either the BSD or the GPL.
 */

#include "lzfP.h"

#define HSIZE (1 << (HLOG))

/*
 * don't play with this unless you benchmark!
 * decompression is not dependent on the hash function
 * the hashing function might seem strange, just believe me
 * it works ;)
 */
#ifndef FRST
# define FRST(p) (((p[0]) << 8) | p[1])
# define NEXT(v,p) (((v) << 8) | p[2])
# if ULTRA_FAST
#  define IDX(h) ((( h             >> (3*8 - HLOG)) - h  ) & (HSIZE - 1))
# elif VERY_FAST
#  define IDX(h) ((( h             >> (3*8 - HLOG)) - h*5) & (HSIZE - 1))
# else
#  define IDX(h) ((((h ^ (h << 5)) >> (3*8 - HLOG)) - h*5) & (HSIZE - 1))
# endif
#endif
/*
 * IDX works because it is very similar to a multiplicative hash, e.g.
 * ((h * 57321 >> (3*8 - HLOG)) & (HSIZE - 1))
 * the latter is also quite fast on newer CPUs, and compresses similarly.
 *
 * the next one is also quite good, albeit slow ;)
 * (int)(cos(h & 0xffffff) * 1e6)
 */

#if 0
/* original lzv-like hash function, much worse and thus slower */
# define FRST(p) (p[0] << 5) ^ p[1]
# define NEXT(v,p) ((v) << 5) ^ p[2]
# define IDX(h) ((h) & (HSIZE - 1))
#endif

#define        MAX_LIT        (1 <<  5)
#define        MAX_OFF        (1 << 13)
#define        MAX_REF        ((1 << 8) + (1 << 3))

#if __GNUC__ >= 3
# define expect(expr,value)         __builtin_expect ((expr),(value))
# define inline                     inline
#else
# define expect(expr,value)         (expr)
# define inline                     static
#endif

#define expect_false(expr) expect ((expr) != 0, 0)
#define expect_true(expr)  expect ((expr) != 0, 1)

/*
 * compressed format
 *
 * 000LLLLL     ; literal
 * LLLooooo oooooooo ; backref L
 * 111ooooo LLLLLLLL oooooooo ; backref L+7
 *
 */
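
/*
 * Illustrative sketch (an addition, not part of the original source): how a
 * decoder reads the first control byte of each token in the format above.
 * The semantics follow lzf_d.c: a literal run stores its length minus 1 in
 * the low 5 bits, a short back-reference stores the match length minus 2 in
 * the top 3 bits, and the value 7 there signals an extra length byte.
 */
static unsigned int
describe_token (const unsigned char *p,
                unsigned int *length, unsigned int *offset)
{
  unsigned int ctrl = p[0];

  if (ctrl < (1 << 5))                 /* 000LLLLL: literal run             */
    {
      *length = ctrl + 1;              /* L+1 literal bytes follow          */
      *offset = 0;
      return 1;                        /* one control byte consumed         */
    }

  if ((ctrl >> 5) != 7)                /* LLLooooo oooooooo: short backref  */
    {
      *length = (ctrl >> 5) + 2;
      *offset = (((ctrl & 0x1f) << 8) | p[1]) + 1;
      return 2;
    }

  /* 111ooooo LLLLLLLL oooooooo: long backref */
  *length = 7 + p[1] + 2;
  *offset = (((ctrl & 0x1f) << 8) | p[2]) + 1;
  return 3;
}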

unsigned int
lzf_compress (const void *const in_data, unsigned int in_len,
	      void *out_data, unsigned int out_len
#if LZF_STATE_ARG
              , LZF_STATE htab
#endif
              )
{
#if !LZF_STATE_ARG
  LZF_STATE htab;
#endif
  const u8 **hslot;
  const u8 *ip = (const u8 *)in_data;
        u8 *op = (u8 *)out_data;
  const u8 *in_end  = ip + in_len;
        u8 *out_end = op + out_len;
  const u8 *ref;

  /* off requires a type wide enough to hold a general pointer difference.
   * ISO C doesn't have that (size_t might not be enough and ptrdiff_t only
   * works for differences within a single object). We also assume that no
   * bit pattern traps. Since the only platform that is both non-POSIX
   * and fails to support both assumptions is windows 64 bit, we make a
   * special workaround for it.
   */
#if ( defined (WIN32) && defined (_M_X64) ) || defined (_WIN64)
  unsigned _int64 off; /* workaround for missing POSIX compliance */
#else
  unsigned long off;
#endif
  unsigned int hval;
  int lit;

  if (!in_len || !out_len)
    return 0;

#if INIT_HTAB
  memset (htab, 0, sizeof (htab));
# if 0
  for (hslot = htab; hslot < htab + HSIZE; hslot++)
    *hslot++ = ip;
# endif
#endif

  lit = 0; op++; /* start run */

  hval = FRST (ip);
  while (ip < in_end - 2)
    {
      hval = NEXT (hval, ip);
      hslot = htab + IDX (hval);
      ref = *hslot; *hslot = ip;

      if (1
#if INIT_HTAB
          && ref < ip /* the next test will actually take care of this, but this is faster */
#endif
          && (off = ip - ref - 1) < MAX_OFF
          && ip + 4 < in_end
          && ref > (u8 *)in_data
#if STRICT_ALIGN
          && ref[0] == ip[0]
          && ref[1] == ip[1]
          && ref[2] == ip[2]
#else
          && *(u16 *)ref == *(u16 *)ip
          && ref[2] == ip[2]
#endif
        )
        {
          /* match found at *ref++ */
          unsigned int len = 2;
          unsigned int maxlen = in_end - ip - len;
          maxlen = maxlen > MAX_REF ? MAX_REF : maxlen;

          if (expect_false (op + 3 + 1 >= out_end)) /* first a faster conservative test */
            if (op - !lit + 3 + 1 >= out_end) /* second the exact but rare test */
              return 0;

          op [- lit - 1] = lit - 1; /* stop run */
          op -= !lit; /* undo run if length is zero */

          for (;;)
            {
              if (expect_true (maxlen > 16))
                {
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;

                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;

                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;

                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                  len++; if (ref [len] != ip [len]) break;
                }

              do
                len++;
              while (len < maxlen && ref[len] == ip[len]);

              break;
            }

          len -= 2; /* len is now #octets - 1 */
          ip++;

          if (len < 7)
            {
              *op++ = (off >> 8) + (len << 5);
            }
          else
            {
              *op++ = (off >> 8) + (  7 << 5);
              *op++ = len - 7;
            }

          *op++ = off;
          lit = 0; op++; /* start run */

          ip += len + 1;

          if (expect_false (ip >= in_end - 2))
            break;

#if ULTRA_FAST || VERY_FAST
          --ip;
# if VERY_FAST && !ULTRA_FAST
          --ip;
# endif
          hval = FRST (ip);

          hval = NEXT (hval, ip);
          htab[IDX (hval)] = ip;
          ip++;

# if VERY_FAST && !ULTRA_FAST
          hval = NEXT (hval, ip);
          htab[IDX (hval)] = ip;
          ip++;
# endif
#else
          ip -= len + 1;

          do
            {
              hval = NEXT (hval, ip);
              htab[IDX (hval)] = ip;
              ip++;
            }
          while (len--);
#endif
        }
      else
        {
          /* one more literal byte we must copy */
          if (expect_false (op >= out_end))
            return 0;

          lit++; *op++ = *ip++;

          if (expect_false (lit == MAX_LIT))
            {
              op [- lit - 1] = lit - 1; /* stop run */
              lit = 0; op++; /* start run */
            }
        }
    }

  if (op + 3 > out_end) /* at most 3 bytes can be missing here */
    return 0;

  while (ip < in_end)
    {
      lit++; *op++ = *ip++;

      if (expect_false (lit == MAX_LIT))
        {
          op [- lit - 1] = lit - 1; /* stop run */
          lit = 0; op++; /* start run */
        }
    }

  op [- lit - 1] = lit - 1; /* end run */
  op -= !lit; /* undo run if length is zero */

  return op - (u8 *)out_data;
}
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/lzf/lzf_d.c0000644000175000017500000001047714045746670016237 0ustar00takluyvertakluyver/*
 * Copyright (c) 2000-2007 Marc Alexander Lehmann 
 *
 * Redistribution and use in source and binary forms, with or without modifica-
 * tion, are permitted provided that the following conditions are met:
 *
 *   1.  Redistributions of source code must retain the above copyright notice,
 *       this list of conditions and the following disclaimer.
 *
 *   2.  Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
 * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
 * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
 * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
 * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
 * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
 * OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Alternatively, the contents of this file may be used under the terms of
 * the GNU General Public License ("GPL") version 2 or any later version,
 * in which case the provisions of the GPL are applicable instead of
 * the above. If you wish to allow the use of your version of this file
 * only under the terms of the GPL and not to allow others to use your
 * version of this file under the BSD license, indicate your decision
 * by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL. If you do not delete the
 * provisions above, a recipient may use your version of this file under
 * either the BSD or the GPL.
 */

#include "lzfP.h"

#if AVOID_ERRNO
# define SET_ERRNO(n)
#else
# include <errno.h>
# define SET_ERRNO(n) errno = (n)
#endif

/* ASM is slower than C in HDF5 tests -- A.C. 2/5/09
#ifndef __STRICT_ANSI__
#ifndef H5PY_DISABLE_LZF_ASM
#if (__i386 || __amd64) && __GNUC__ >= 3
# define lzf_movsb(dst, src, len)                \
   asm ("rep movsb"                              \
        : "=D" (dst), "=S" (src), "=c" (len)     \
        :  "0" (dst),  "1" (src),  "2" (len));
#endif
#endif
#endif
*/

unsigned int
lzf_decompress (const void *const in_data,  unsigned int in_len,
                void             *out_data, unsigned int out_len)
{
  u8 const *ip = (const u8 *)in_data;
  u8       *op = (u8 *)out_data;
  u8 const *const in_end  = ip + in_len;
  u8       *const out_end = op + out_len;

  do
    {
      unsigned int ctrl = *ip++;

      if (ctrl < (1 << 5)) /* literal run */
        {
          ctrl++;

          if (op + ctrl > out_end)
            {
              SET_ERRNO (E2BIG);
              return 0;
            }

#if CHECK_INPUT
          if (ip + ctrl > in_end)
            {
              SET_ERRNO (EINVAL);
              return 0;
            }
#endif

#ifdef lzf_movsb
          lzf_movsb (op, ip, ctrl);
#else
          do
            *op++ = *ip++;
          while (--ctrl);
#endif
        }
      else /* back reference */
        {
          unsigned int len = ctrl >> 5;

          u8 *ref = op - ((ctrl & 0x1f) << 8) - 1;

#if CHECK_INPUT
          if (ip >= in_end)
            {
              SET_ERRNO (EINVAL);
              return 0;
            }
#endif
          if (len == 7)
            {
              len += *ip++;
#if CHECK_INPUT
              if (ip >= in_end)
                {
                  SET_ERRNO (EINVAL);
                  return 0;
                }
#endif
            }

          ref -= *ip++;

          if (op + len + 2 > out_end)
            {
              SET_ERRNO (E2BIG);
              return 0;
            }

          if (ref < (u8 *)out_data)
            {
              SET_ERRNO (EINVAL);
              return 0;
            }

#ifdef lzf_movsb
          len += 2;
          lzf_movsb (op, ref, len);
#else
          *op++ = *ref++;
          *op++ = *ref++;

          do
            *op++ = *ref++;
          while (--len);
#endif
        }
    }
  while (ip < in_end);

  return op - (u8 *)out_data;
}
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1671639227.0
h5py-3.13.0/lzf/lzf_filter.c0000644000175000017500000001610714350630273016470 0ustar00takluyvertakluyver/***** Preamble block *********************************************************
*
* This file is part of h5py, a low-level Python interface to the HDF5 library.
*
* Copyright (C) 2008 Andrew Collette
* http://h5py.org
* License: BSD  (See LICENSE.txt for full license)
*
* $Date$
*
****** End preamble block ****************************************************/

/*
    Implements an LZF filter module for HDF5, using the BSD-licensed library
    by Marc Alexander Lehmann (http://www.goof.com/pcg/marc/liblzf.html).

    No Python-specific code is used.  The filter behaves like the DEFLATE
    filter, in that it is called for every type and space, and returns 0
    if the data cannot be compressed.

    The only public function is (int) register_lzf(void), which passes on
    the result from H5Zregister.
*/

#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include "hdf5.h"
#include "lzf.h"
#include "lzf_filter.h"

/* Our own versions of H5Epush_sim, as it changed in 1.8 */
#if H5_VERS_MAJOR == 1 && H5_VERS_MINOR < 7

#define PUSH_ERR(func, minor, str)  H5Epush(__FILE__, func, __LINE__, H5E_PLINE, minor, str)
#define H5PY_GET_FILTER H5Pget_filter_by_id

#else

#define PUSH_ERR(func, minor, str)  H5Epush1(__FILE__, func, __LINE__, H5E_PLINE, minor, str)
#define H5PY_GET_FILTER(a,b,c,d,e,f,g) H5Pget_filter_by_id2(a,b,c,d,e,f,g,NULL)

#endif

/*  Deal with the multiple definitions for H5Z_class_t.
    Note: HDF5 versions >= 1.6 are supported.

    (1) The old class should always be used for HDF5 1.6
    (2) The new class should always be used for HDF5 1.8 < 1.8.3
    (3) The old class should be used for HDF5 >= 1.8.3 only if the
        macro H5_USE_16_API is set
*/

#if H5_VERS_MAJOR == 1 && H5_VERS_MINOR == 6
#define H5PY_H5Z_NEWCLS 0
#elif H5_VERS_MAJOR == 1 && H5_VERS_MINOR == 8 && H5_VERS_RELEASE < 3
#define H5PY_H5Z_NEWCLS 1
#elif H5_USE_16_API
#define H5PY_H5Z_NEWCLS 0
#else /* Default: use new class */
#define H5PY_H5Z_NEWCLS 1
#endif

size_t lzf_filter(unsigned flags, size_t cd_nelmts,
		    const unsigned cd_values[], size_t nbytes,
		    size_t *buf_size, void **buf);

herr_t lzf_set_local(hid_t dcpl, hid_t type, hid_t space);

#if H5PY_H5Z_NEWCLS
static const H5Z_class_t filter_class = {
    H5Z_CLASS_T_VERS,
    (H5Z_filter_t)(H5PY_FILTER_LZF),
    1, 1,
    "lzf",
    NULL,
    (H5Z_set_local_func_t)(lzf_set_local),
    (H5Z_func_t)(lzf_filter)
};
#else
static const H5Z_class_t filter_class = {
    (H5Z_filter_t)(H5PY_FILTER_LZF),
    "lzf",
    NULL,
    (H5Z_set_local_func_t)(lzf_set_local),
    (H5Z_func_t)(lzf_filter)
};
#endif

/* Support dynamic loading of the LZF filter plugin */
#if defined(H5_VERSION_GE)
#if H5_VERSION_GE(1, 8, 11)

#include "H5PLextern.h"

H5PL_type_t H5PLget_plugin_type(void){ return H5PL_TYPE_FILTER; }

const void *H5PLget_plugin_info(void){ return &filter_class; }

#endif
#endif

/* Try to register the filter, passing on the HDF5 return value */
int register_lzf(void){

    int retval;

    retval = H5Zregister(&filter_class);
    if(retval<0){
        PUSH_ERR("register_lzf", H5E_CANTREGISTER, "Can't register LZF filter");
    }
    return retval;
}

/*  Filter setup.  Records the following inside the DCPL:

    1.  If version information is not present, set slots 0 and 1 to the filter
        revision and LZF API version, respectively.

    2. Compute the chunk size in bytes and store it in slot 2.
*/
herr_t lzf_set_local(hid_t dcpl, hid_t type, hid_t space){

    int ndims;
    int i;
    herr_t r;

    unsigned int bufsize;
    hsize_t chunkdims[32];

    unsigned int flags;
    size_t nelements = 8;
    unsigned values[] = {0,0,0,0,0,0,0,0};

    r = H5PY_GET_FILTER(dcpl, H5PY_FILTER_LZF, &flags, &nelements, values, 0, NULL);
    if(r<0) return -1;

    if(nelements < 3) nelements = 3;  /* First 3 slots reserved.  If any higher
                                      slots are used, preserve the contents. */

    /* It seems the H5Z_FLAG_REVERSE flag doesn't work here, so we have to be
       careful not to clobber any existing version info */
    if(values[0]==0) values[0] = H5PY_FILTER_LZF_VERSION;
    if(values[1]==0) values[1] = LZF_VERSION;

    ndims = H5Pget_chunk(dcpl, 32, chunkdims);
    if(ndims<0) return -1;
    if(ndims>32){
        PUSH_ERR("lzf_set_local", H5E_CALLBACK, "Chunk rank exceeds limit");
        return -1;
    }

    bufsize = H5Tget_size(type);
    if(bufsize==0) return -1;

    for(i=0;i<ndims;i++){
        bufsize *= chunkdims[i];
    }

    /* ... the remainder of lzf_set_local() and the first part of
       lzf_filter() (declarations and the compression path) elided;
       the decompression path continues below ... */

        if((cd_nelmts>=3)&&(cd_values[2]!=0)){
            outbuf_size = cd_values[2];   /* Precomputed buffer guess */
        }else{
            outbuf_size = (*buf_size);
        }

#ifdef H5PY_LZF_DEBUG
        fprintf(stderr, "Decompress %d chunk w/buffer %d\n", nbytes, outbuf_size);
#endif

        while(!status){

            free(outbuf);
            outbuf = malloc(outbuf_size);

            if(outbuf == NULL){
                PUSH_ERR("lzf_filter", H5E_CALLBACK, "Can't allocate decompression buffer");
                goto failed;
            }

            status = lzf_decompress(*buf, nbytes, outbuf, outbuf_size);

            if(!status){    /* decompression failed */

                if(errno == E2BIG){
                    outbuf_size += (*buf_size);
#ifdef H5PY_LZF_DEBUG
                    fprintf(stderr, "    Too small: %d\n", outbuf_size);
#endif
                } else if(errno == EINVAL) {

                    PUSH_ERR("lzf_filter", H5E_CALLBACK, "Invalid data for LZF decompression");
                    goto failed;

                } else {
                    PUSH_ERR("lzf_filter", H5E_CALLBACK, "Unknown LZF decompression error");
                    goto failed;
                }

            } /* if !status */

        } /* while !status */

    } /* compressing vs decompressing */

    if(status != 0){

        free(*buf);
        *buf = outbuf;
        *buf_size = outbuf_size;

        return status;  /* Size of compressed/decompressed data */
    }

    failed:

    free(outbuf);
    return 0;

} /* End filter function */
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1620561336.0
h5py-3.13.0/lzf/lzf_filter.h0000644000175000017500000000153514045746670016506 0ustar00takluyvertakluyver/***** Preamble block *********************************************************
*
* This file is part of h5py, a low-level Python interface to the HDF5 library.
*
* Copyright (C) 2008 Andrew Collette
* http://h5py.org
* License: BSD  (See LICENSE.txt for full license)
*
* $Date$
*
****** End preamble block ****************************************************/


#ifndef H5PY_LZF_H
#define H5PY_LZF_H

#ifdef __cplusplus
extern "C" {
#endif

/* Filter revision number, starting at 1 */
#define H5PY_FILTER_LZF_VERSION 4

/* Filter ID registered with the HDF Group as of 2/6/09.  For maintenance
   requests, contact the filter author directly. */
#define H5PY_FILTER_LZF 32000

/* Register the filter with the library. Returns a negative value on failure,
   and a non-negative value on success.
*/
int register_lzf(void);

#ifdef __cplusplus
}
#endif

#endif
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1701418452.0
h5py-3.13.0/pylintrc0000644000175000017500000002703414532312724015161 0ustar00takluyvertakluyver[MASTER]

# Specify a configuration file.
#rcfile=

# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=

# Profiled execution.
profile=no

# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=tests

# Pickle collected data for later comparisons.
persistent=yes

# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=

# Use multiple processes to speed up Pylint.
jobs=1

# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no

# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code
extension-pkg-whitelist=numpy,h5py


[MESSAGES CONTROL]

# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED
confidence=

# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once).You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use"--disable=all --enable=classes
# --disable=W"
#
#      | Checkers                 | Broken import checks     | Other random garbage
disable=format,design,similarities,cyclic-import,import-error,broad-except,no-self-use,no-name-in-module,invalid-name,abstract-method,star-args,import-self,no-init,locally-disabled,unidiomatic-typecheck

# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time. See also the "--disable" option for examples.
#
#     | Some format checks which are OK
enable=bad-indentation,mixed-indentation,unnecessary-semicolon,superfluous-parens

[REPORTS]

# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html. You can also give a reporter class, eg
# mypackage.mymodule.MyReporterClass.
output-format=text

# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no

# Tells whether to display a full report or only the messages
reports=no

# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)

# Add a comment according to your evaluation note. This is used by the global
# evaluation report (RP0004).
comment=no

# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details
#msg-template=


[BASIC]

# Required attributes for module, separated by a comma
required-attributes=

# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter

# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,Run,_

# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata

# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=

# Include a hint for the correct naming format with invalid-name
include-naming-hint=no

# Regular expression matching correct method names
method-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for method names
method-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$

# Naming hint for module names
module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$

# Regular expression matching correct inline iteration names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$

# Naming hint for inline iteration names
inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$

# Regular expression matching correct constant names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$

# Naming hint for constant names
const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$

# Regular expression matching correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for variable names
variable-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for argument names
argument-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct class attribute names
class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$

# Naming hint for class attribute names
class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$

# Regular expression matching correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$

# Naming hint for class names
class-name-hint=[A-Z_][a-zA-Z0-9]+$

# Regular expression matching correct function names
function-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for function names
function-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct attribute names
attr-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for attribute names
attr-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=__.*__

# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1


[FORMAT]

# Maximum number of characters on a single line.
max-line-length=100

# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$

# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no

# List of optional constructs for which whitespace checking is disabled
no-space-check=trailing-comma,dict-separator

# Maximum number of lines in a module
max-module-lines=1000

# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string='    '

# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4

# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=


[LOGGING]

# Logging modules to check that the string format arguments are in logging
# function parameter format
logging-modules=logging


[MISCELLANEOUS]

# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO


[SIMILARITIES]

# Minimum lines number of a similarity.
min-similarity-lines=4

# Ignore comments when computing similarities.
ignore-comments=yes

# Ignore docstrings when computing similarities.
ignore-docstrings=yes

# Ignore imports when computing similarities.
ignore-imports=no


[SPELLING]

# Spelling dictionary name. Available dictionaries: none. To make it working
# install python-enchant package.
spelling-dict=

# List of comma separated words that should not be checked.
spelling-ignore-words=

# A path to a file that contains private dictionary; one word per line.
spelling-private-dict-file=

# Tells whether to store unknown words to indicated private dictionary in
# --spelling-private-dict-file option instead of raising a message.
spelling-store-unknown-words=no


[TYPECHECK]

# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes

# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis
ignored-modules=

# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
ignored-classes=SQLObject

# When zope mode is activated, add a predefined set of Zope acquired attributes
# to generated-members.
zope=no

# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent


[VARIABLES]

# Tells whether we should check for unused import in __init__ files.
init-import=no

# A regular expression matching the name of dummy variables (i.e. expectedly
# not used).
dummy-variables-rgx=_$|dummy

# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=

# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,_cb


[CLASSES]

# List of interface methods to ignore, separated by a comma. This is used for
# instance to not check methods defines in Zope's Interface base class.
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by

# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp

# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls

# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs

# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make


[DESIGN]

# Maximum number of arguments for function / method
max-args=5

# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*

# Maximum number of locals for function / method body
max-locals=15

# Maximum number of return / yield for function / method body
max-returns=6

# Maximum number of branch for function / method body
max-branches=12

# Maximum number of statements in function / method body
max-statements=50

# Maximum number of parents for a class (see R0901).
max-parents=7

# Maximum number of attributes for a class (see R0902).
max-attributes=7

# Minimum number of public methods for a class (see R0903).
min-public-methods=2

# Maximum number of public methods for a class (see R0904).
max-public-methods=20


[IMPORTS]

# Deprecated modules which should not be used, separated by a comma
deprecated-modules=stringprep,optparse

# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=

# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=

# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=


[EXCEPTIONS]

# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/pyproject.toml0000644000175000017500000000712714675110407016311 0ustar00takluyvertakluyver[build-system]
requires = [
    "Cython >=0.29.31,<4",
    "numpy >=2.0.0, <3",
    "pkgconfig",
    "setuptools >=61",
]
build-backend = "setuptools.build_meta"

[project]
name = "h5py"
description = "Read and write HDF5 files from Python"
authors = [
    {name = "Andrew Collette", email = "andrew.collette@gmail.com"},
]
maintainers = [
    {name = "Thomas Kluyver", email = "thomas@kluyver.me.uk"},
    {name = "Thomas A Caswell", email = "tcaswell@bnl.gov"},
]
license = {text = "BSD-3-Clause"}
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "Intended Audience :: Information Technology",
    "Intended Audience :: Science/Research",
    "License :: OSI Approved :: BSD License",
    "Operating System :: Unix",
    "Operating System :: POSIX :: Linux",
    "Operating System :: MacOS :: MacOS X",
    "Operating System :: Microsoft :: Windows",
    "Programming Language :: Cython",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: Implementation :: CPython",
    "Topic :: Scientific/Engineering",
    "Topic :: Database",
    "Topic :: Software Development :: Libraries :: Python Modules",
]
requires-python = ">=3.9"
dynamic = ["dependencies", "version"]

[project.readme]
text = """\
The h5py package provides both a high- and low-level interface to the HDF5
library from Python. The low-level interface is intended to be a complete
wrapping of the HDF5 API, while the high-level component supports  access to
HDF5 files, datasets and groups using established Python and NumPy concepts.

A strong emphasis on automatic conversion between Python (Numpy) datatypes and
data structures and their HDF5 equivalents vastly simplifies the process of
reading and writing data from Python.

Wheels are provided for several popular platforms, with an included copy of
the HDF5 library (usually the latest version when h5py is released).

You can also `build h5py from source
`_
with any HDF5 stable release from version 1.10.4 onwards, although naturally new
HDF5 versions released after this version of h5py may not work.
Odd-numbered minor versions of HDF5 (e.g. 1.13) are experimental, and may not
be supported.
"""
content-type = "text/x-rst"

[project.urls]
Homepage = "https://www.h5py.org/"
Source = "https://github.com/h5py/h5py"
Documentation = "https://docs.h5py.org/en/stable/"
"Release notes" = "https://docs.h5py.org/en/stable/whatsnew/index.html"
"Discussion forum" = "https://forum.hdfgroup.org/c/hdf5/h5py"

[tool.setuptools]
# to ignore .pxd and .pyx files in wheels
include-package-data = false
packages = [
    "h5py",
    "h5py._hl",
    "h5py.tests",
    "h5py.tests.data_files",
    "h5py.tests.test_vds",
]

[tool.cibuildwheel]
build-verbosity = 1
build-frontend = { name = "pip", args = ["--only-binary", "numpy"] }
manylinux-x86_64-image = "ghcr.io/h5py/manylinux2014_x86_64-hdf5"
manylinux-aarch64-image = "ghcr.io/h5py/manylinux2014_aarch64-hdf5"
test-requires = [
    "tox",
    "codecov",
    "coverage",
]
test-command = "bash {project}/ci/cibw_test_command.sh {project} {wheel}"

[tool.cibuildwheel.linux]
environment-pass = ["GITHUB_ACTIONS"]

[tool.cibuildwheel.linux.environment]
CFLAGS = "-g1"

[tool.cibuildwheel.macos]
# https://cibuildwheel.pypa.io/en/stable/faq/#macos-passing-dyld_library_path-to-delocate
repair-wheel-command = """\
DYLD_FALLBACK_LIBRARY_PATH=$HDF5_DIR/lib \
delocate-wheel --require-archs {delocate_archs} -w {dest_dir} -v {wheel} \
"""

[tool.cibuildwheel.macos.environment]
H5PY_ROS3 = "0"
H5PY_DIRECT_VFD = "0"
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/pytest.ini0000644000175000017500000000020014675110407015407 0ustar00takluyvertakluyver[pytest]
markers =
    nonetwork: Tests to skip when no network access
filterwarnings =
    error
required_plugins = pytest-mpi
././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739894414.5878823
h5py-3.13.0/setup.cfg0000644000175000017500000000004614755127217015215 0ustar00takluyvertakluyver[egg_info]
tag_build = 
tag_date = 0

././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739891098.0
h5py-3.13.0/setup.py0000755000175000017500000000535414755120632015112 0ustar00takluyvertakluyver#!/usr/bin/env python

"""
    This is the main setup script for h5py (http://www.h5py.org).

    Most of the functionality is provided in two separate modules:
    setup_configure, which manages compile-time/Cython-time build options
    for h5py, and setup_build, which handles the actual compilation process.
"""

from setuptools import Extension, setup
import sys
import os

# Newer packaging standards may recommend removing the current dir from the
# path; add it back if needed.
if '' not in sys.path:
    sys.path.insert(0, '')

import setup_build, setup_configure


VERSION = '3.13.0'


# these are required to use h5py
RUN_REQUIRES = [
    # We only really aim to support NumPy & Python combinations for which
    # there are wheels on PyPI (e.g. NumPy >=1.23.2 for Python 3.11).
    # But we don't want to duplicate the information in oldest-supported-numpy
    # here, and if you can build an older NumPy on a newer Python, h5py probably
    # works (assuming you build it from source too).
    # NumPy 1.19.3 is the first with wheels for Python 3.9, our minimum Python.
    "numpy >=1.19.3",
]

# Packages needed to build h5py (in addition to static list in pyproject.toml)
# For packages we link to (numpy, mpi4py), we build against the oldest
# supported version; h5py wheels should then work with newer versions of these.
# Downstream packagers - e.g. Linux distros - can safely build with newer
# versions.
# TODO: setup_requires is deprecated in setuptools.
SETUP_REQUIRES = []

if setup_configure.mpi_enabled():
    # mpi4py 3.1.1 fixed a typo in python_requires, which made older versions
    # incompatible with newer setuptools.
    RUN_REQUIRES.append('mpi4py >=3.1.1')
    SETUP_REQUIRES.append("mpi4py ==3.1.1; python_version<'3.11'")
    SETUP_REQUIRES.append("mpi4py ==3.1.4; python_version=='3.11.*'")
    SETUP_REQUIRES.append("mpi4py ==3.1.6; python_version=='3.12.*'")
    SETUP_REQUIRES.append("mpi4py ==4.0.1; python_version>='3.13'")

# Set the environment variable H5PY_SETUP_REQUIRES=0 if we need to skip
# setup_requires for any reason.
if os.environ.get('H5PY_SETUP_REQUIRES', '1') == '0':
    SETUP_REQUIRES = []

# --- Custom Distutils commands -----------------------------------------------

CMDCLASS = {'build_ext': setup_build.h5py_build_ext}


# --- Dynamic metadata for setuptools -----------------------------------------

package_data = {'h5py': [], "h5py.tests.data_files": ["*.h5"]}
if os.name == 'nt':
    package_data['h5py'].append('*.dll')

setup(
  name = 'h5py',
  version = VERSION,
  package_data = package_data,
  ext_modules = [Extension('h5py.x',['x.c'])],  # To trick build into running build_ext
  install_requires = RUN_REQUIRES,
  setup_requires = SETUP_REQUIRES,
  cmdclass = CMDCLASS,
)

# see pyproject.toml for static metadata
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739872505.0
h5py-3.13.0/setup_build.py0000644000175000017500000001447414755054371016277 0ustar00takluyvertakluyver#!/usr/bin/env python3
"""
    Implements a custom Distutils build_ext replacement, which handles the
    full extension module build process, from api_gen to C compilation and
    linking.
"""

try:
    from setuptools import Extension
except ImportError:
    from distutils.extension import Extension
from distutils.command.build_ext import build_ext
import sys
import os
import os.path as op
from pathlib import Path

import api_gen
from setup_configure import BuildConfig


def localpath(*args):
    return op.abspath(op.join(op.dirname(__file__), *args))


MODULES = ['defs', '_errors', '_objects', '_proxy', 'h5fd', 'h5z',
            'h5', 'h5i', 'h5r', 'utils', '_selector',
            '_conv', 'h5t', 'h5s',
            'h5p',
            'h5d', 'h5a', 'h5f', 'h5g',
            'h5l', 'h5o',
            'h5ds', 'h5ac',
            'h5pl']

COMPILER_SETTINGS = {
   'libraries'      : ['hdf5', 'hdf5_hl'],
   'include_dirs'   : [localpath('lzf')],
   'library_dirs'   : [],
   'define_macros'  : [('H5_USE_110_API', None),
                       # The definition should imply the one below, but CI on
                       # Ubuntu 20.04 still gets H5Rdereference1 for some reason
                       ('H5Rdereference_vers', 2),
                       ('NPY_NO_DEPRECATED_API', 0),
                      ]
}

EXTRA_SRC = {'h5z': [ localpath("lzf/lzf_filter.c") ]}

# Set the environment variable H5PY_SYSTEM_LZF=1 if we want to
# use the system lzf library
if os.environ.get('H5PY_SYSTEM_LZF', '0') == '1':
    EXTRA_LIBRARIES = {
       'h5z': [ 'lzf' ]
    }
else:
    COMPILER_SETTINGS['include_dirs'] += [localpath('lzf/lzf')]

    EXTRA_SRC['h5z'] += [localpath("lzf/lzf/lzf_c.c"),
                  localpath("lzf/lzf/lzf_d.c")]

    EXTRA_LIBRARIES = {}

if sys.platform.startswith('win'):
    COMPILER_SETTINGS['include_dirs'].append(localpath('windows'))
    COMPILER_SETTINGS['define_macros'].extend([
        ('_HDF5USEDLL_', None),
        ('H5_BUILT_AS_DYNAMIC_LIB', None)
    ])


class h5py_build_ext(build_ext):

    """
        Custom distutils command which encapsulates api_gen pre-building,
        Cython building, and C compilation.

        Also handles making the Extension modules, since we can't rely on
        NumPy being present in the main body of the setup script.
    """

    @staticmethod
    def _make_extensions(config):
        """ Produce a list of Extension instances which can be passed to
        cythonize().

        This is the point at which custom directories, MPI options, etc.
        enter the build process.
        """
        import numpy

        settings = COMPILER_SETTINGS.copy()

        settings['include_dirs'][:0] = config.hdf5_includedirs
        settings['library_dirs'][:0] = config.hdf5_libdirs
        settings['define_macros'].extend(config.hdf5_define_macros)

        if config.msmpi:
            settings['include_dirs'].extend(config.msmpi_inc_dirs)
            settings['library_dirs'].extend(config.msmpi_lib_dirs)
            settings['libraries'].append('msmpi')

        try:
            numpy_includes = numpy.get_include()
        except AttributeError:
            # if numpy is not installed get the headers from the .egg directory
            import numpy.core
            numpy_includes = os.path.join(os.path.dirname(numpy.core.__file__), 'include')

        settings['include_dirs'] += [numpy_includes]
        if config.mpi:
            import mpi4py
            settings['include_dirs'] += [mpi4py.get_include()]

        # TODO: should this only be done on UNIX?
        if os.name != 'nt':
            settings['runtime_library_dirs'] = settings['library_dirs']

        def make_extension(module):
            sources = [localpath('h5py', module + '.pyx')] + EXTRA_SRC.get(module, [])
            # Use a per-module copy of the settings so extra libraries (e.g.
            # the system lzf for h5z) aren't appended to the shared list and
            # silently linked into every later extension as well.
            module_settings = dict(settings)
            module_settings['libraries'] = settings['libraries'] + EXTRA_LIBRARIES.get(module, [])
            return Extension('h5py.' + module, sources, **module_settings)

        return [make_extension(m) for m in MODULES]

    def run(self):
        """ Distutils calls this method to run the command """

        from Cython import __version__ as cython_version
        from Cython.Build import cythonize
        import numpy

        complex256_support = hasattr(numpy, 'complex256')

        # This allows ccache to recognise the files when pip builds in a temp
        # directory. It speeds up repeatedly running tests through tox with
        # ccache configured (CC="ccache gcc"). It should have no effect if
        # ccache is not in use.
        os.environ['CCACHE_BASEDIR'] = op.dirname(op.abspath(__file__))
        os.environ['CCACHE_NOHASHDIR'] = '1'

        # Get configuration from environment variables
        config = BuildConfig.from_env()
        config.summarise()

        if config.hdf5_version < (1, 10, 6):
            raise Exception(
                f"This version of h5py requires HDF5 >= 1.10.6 (got version "
                f"{config.hdf5_version} from environment variable or library)"
            )

        config_file = localpath('h5py', 'config.pxi')

        # Refresh low-level defs if missing or stale
        print("Executing api_gen rebuild of defs")
        api_gen.run()

        # Rewrite config.pxi file if needed
        s = f"""\
# This file is automatically generated by the h5py setup script.  Don't modify.

DEF MPI = {bool(config.mpi)}
DEF ROS3 = {bool(config.ros3)}
DEF HDF5_VERSION = {config.hdf5_version}
DEF DIRECT_VFD = {bool(config.direct_vfd)}
DEF VOL_MIN_HDF5_VERSION = (1,11,5)
DEF COMPLEX256_SUPPORT = {complex256_support}
DEF NUMPY_BUILD_VERSION = '{numpy.__version__}'
DEF CYTHON_BUILD_VERSION = '{cython_version}'
"""
        write_if_changed(config_file, s)

        # Run Cython
        print("Executing cythonize()")
        self.extensions = cythonize(self._make_extensions(config),
                                    force=config.changed() or self.force,
                                    language_level=3)

        # Perform the build
        build_ext.run(self)

        # Record the configuration we built
        config.record_built()


def write_if_changed(target_path, s: str):
    """Overwrite target_path unless the contents already match s

    Avoids changing the mtime when we're just writing the same data.
    """
    p = Path(target_path)
    b = s.encode('utf-8')
    try:
        if p.read_bytes() == b:
            return
    except FileNotFoundError:
        pass

    p.write_bytes(b)
    print(f'Updated {p}')
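

# A minimal illustrative sketch (as comments) of the write_if_changed contract;
# the arguments mirror the config.pxi call in h5py_build_ext.run() above:
#
#     write_if_changed(localpath('h5py', 'config.pxi'), contents)
#     # A second call with identical contents returns early, so the file's
#     # mtime is untouched and ccache/Cython see no spurious change.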
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/setup_configure.py0000644000175000017500000002674714675110407017161 0ustar00takluyvertakluyver
"""
    Implements the build-time configuration logic for the h5py extension
    modules.

    Nothing here directly affects things like config.pxi; rather, the
    BuildConfig class provides a set of attributes that are used by the
    build_ext replacement in setup_build.py.

    Options taken from environment variables are stored between invocations
    in a JSON file (h5config.json).  This allows configuring the library once
    and e.g. calling "build" and "test" without recompiling everything or
    explicitly providing the same options every time.

    This module also contains the auto-detection logic for figuring out
    the currently installed HDF5 version.
"""

import os
import os.path as op
import platform
import re
import sys
import json


def load_stashed_config():
    """ Load settings dict from the pickle file """
    try:
        with open('h5config.json', 'r') as f:
            cfg = json.load(f)
        if not isinstance(cfg, dict):
            raise TypeError
    except Exception:
        return {}
    return cfg


def stash_config(dct):
    """Save settings dict to the pickle file."""
    with open('h5config.json', 'w') as f:
        json.dump(dct, f)


def validate_version(s):
    """Ensure that s contains an X.Y.Z format version string, or ValueError."""
    # HDF5 tags can have a patch version, which we'll ignore for now.
    m = re.match('(\d+)\.(\d+)\.(\d+)(?:\.\d+)?$', s)
    if m:
        return tuple(int(x) for x in m.groups())
    raise ValueError(f"HDF5 version string {s!r} not in X.Y.Z[.P] format")
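
# Illustrative results for validate_version (not executed at build time):
#   validate_version("1.14.3")   -> (1, 14, 3)
#   validate_version("1.14.3.1") -> (1, 14, 3)   # trailing patch component ignored
#   validate_version("1.14")     -> raises ValueError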


def mpi_enabled():
    return os.environ.get('HDF5_MPI') == "ON"


class BuildConfig:
    def __init__(self, hdf5_includedirs, hdf5_libdirs, hdf5_define_macros,
                 hdf5_version, mpi, ros3, direct_vfd):
        self.hdf5_includedirs = hdf5_includedirs
        self.hdf5_libdirs = hdf5_libdirs
        self.hdf5_define_macros = hdf5_define_macros
        self.hdf5_version = hdf5_version
        self.mpi = mpi
        self.ros3 = ros3
        self.direct_vfd = direct_vfd

        if self.mpi and os.environ.get('H5PY_MSMPI') == 'ON':
            self.msmpi = True
            self.msmpi_inc_dirs = os.environ.get('MSMPI_INC').split(';')
            bitness, _ = platform.architecture()
            if bitness == '64bit':
                mpi_lib_envvar = 'MSMPI_LIB64'
            else:
                mpi_lib_envvar = 'MSMPI_LIB32'
            self.msmpi_lib_dirs = os.environ.get(mpi_lib_envvar).split(';')
        else:
            self.msmpi = False
            self.msmpi_inc_dirs = []
            self.msmpi_lib_dirs = []

    @classmethod
    def from_env(cls):
        mpi = mpi_enabled()
        h5_inc, h5_lib, h5_macros = cls._find_hdf5_compiler_settings(mpi)

        h5_version_s = os.environ.get('HDF5_VERSION')
        h5py_ros3 = os.environ.get('H5PY_ROS3')
        h5py_direct_vfd = os.environ.get('H5PY_DIRECT_VFD')

        if h5_version_s and not mpi and h5py_ros3 and h5py_direct_vfd:
            # If the full config is already given via environment variables,
            # skip the library wrapper -- loading the library may not be possible here.
            return cls(
                h5_inc, h5_lib, h5_macros, validate_version(h5_version_s), mpi,
                h5py_ros3 == '1', h5py_direct_vfd == '1')

        h5_wrapper = HDF5LibWrapper(h5_lib)
        if h5_version_s:
            h5_version = validate_version(h5_version_s)
        else:
            h5_version = h5_wrapper.autodetect_version()
            if mpi and not h5_wrapper.has_mpi_support():
                raise RuntimeError("MPI support not detected")

        if h5py_ros3:
            ros3 = h5py_ros3 == '1'
        else:
            ros3 = h5_wrapper.has_ros3_support()

        if h5py_direct_vfd:
            direct_vfd = h5py_direct_vfd == '1'
        else:
            direct_vfd = h5_wrapper.has_direct_vfd_support()

        return cls(h5_inc, h5_lib, h5_macros, h5_version, mpi, ros3, direct_vfd)

    @staticmethod
    def _find_hdf5_compiler_settings(mpi=False):
        """Get compiler settings from environment or pkgconfig.

        Returns (include_dirs, lib_dirs, define_macros)
        """
        hdf5 = os.environ.get('HDF5_DIR')
        hdf5_includedir = os.environ.get('HDF5_INCLUDEDIR')
        hdf5_libdir = os.environ.get('HDF5_LIBDIR')
        hdf5_pkgconfig_name = os.environ.get('HDF5_PKGCONFIG_NAME')

        if sum([
            bool(hdf5_includedir or hdf5_libdir),
            bool(hdf5),
            bool(hdf5_pkgconfig_name)
        ]) > 1:
            raise ValueError(
                "Specify only one of: HDF5 lib/include dirs, HDF5 prefix dir, "
                "or HDF5 pkgconfig name"
            )

        if hdf5_includedir or hdf5_libdir:
            inc_dirs = [hdf5_includedir] if hdf5_includedir else []
            lib_dirs = [hdf5_libdir] if hdf5_libdir else []
            return (inc_dirs, lib_dirs, [])

        # Specified a prefix dir (e.g. '/usr/local')
        if hdf5:
            inc_dirs = [op.join(hdf5, 'include')]
            lib_dirs = [op.join(hdf5, 'lib')]
            if sys.platform.startswith('win'):
                lib_dirs.append(op.join(hdf5, 'bin'))
            return (inc_dirs, lib_dirs, [])

        # Specified a name to be looked up in pkgconfig
        if hdf5_pkgconfig_name:
            import pkgconfig
            if not pkgconfig.exists(hdf5_pkgconfig_name):
                raise ValueError(
                    f"No pkgconfig information for {hdf5_pkgconfig_name}"
                )
            pc = pkgconfig.parse(hdf5_pkgconfig_name)
            return (pc['include_dirs'], pc['library_dirs'], pc['define_macros'])

        # Fallback: query pkgconfig for default hdf5 names
        import pkgconfig
        pc_name = 'hdf5-openmpi' if mpi else 'hdf5'
        pc = {}
        try:
            if pkgconfig.exists(pc_name):
                pc = pkgconfig.parse(pc_name)
        except OSError:
            if os.name != 'nt':
                print(
                    "Building h5py requires pkg-config unless the HDF5 path "
                    "is explicitly specified using the environment variable HDF5_DIR. "
                    "For more information and details, "
                    "see https://docs.h5py.org/en/stable/build.html#custom-installation", file=sys.stderr
                )
                raise

        return (
            pc.get('include_dirs', []),
            pc.get('library_dirs', []),
            pc.get('define_macros', []),
        )

    def as_dict(self):
        return {
            'hdf5_includedirs': self.hdf5_includedirs,
            'hdf5_libdirs': self.hdf5_libdirs,
            'hdf5_define_macros': self.hdf5_define_macros,
            'hdf5_version': list(self.hdf5_version),  # list() so it compares equal after a JSON round-trip
            'mpi': self.mpi,
            'ros3': self.ros3,
            'direct_vfd': self.direct_vfd,
            'msmpi': self.msmpi,
            'msmpi_inc_dirs': self.msmpi_inc_dirs,
            'msmpi_lib_dirs': self.msmpi_lib_dirs,
        }

    def changed(self):
        """Has the config changed since the last build?"""
        return self.as_dict() != load_stashed_config()

    def record_built(self):
        """Record config after a successful build"""
        stash_config(self.as_dict())

    def summarise(self):
        def fmt_dirs(l):
            return '\n'.join((['['] + [f'  {d!r}' for d in l] + [']'])) if l else '[]'

        print('*' * 80)
        print(' ' * 23 + "Summary of the h5py configuration")
        print('')
        print("  HDF5 include dirs:", fmt_dirs(self.hdf5_includedirs))
        print("  HDF5 library dirs:", fmt_dirs(self.hdf5_libdirs))
        print("       HDF5 Version:", repr(self.hdf5_version))
        print("        MPI Enabled:", self.mpi)
        print("   ROS3 VFD Enabled:", self.ros3)
        print(" DIRECT VFD Enabled:", self.direct_vfd)
        print("   Rebuild Required:", self.changed())
        print("     MS-MPI Enabled:", self.msmpi)
        print("MS-MPI include dirs:", self.msmpi_inc_dirs)
        print("MS-MPI library dirs:", self.msmpi_lib_dirs)
        print('')
        print('*' * 80)


class HDF5LibWrapper:

    def __init__(self, libdirs):
        self._load_hdf5_lib(libdirs)

    def _load_hdf5_lib(self, libdirs):
        """
        Detect and load the HDF5 library.

        Raises an exception if anything goes wrong.

        libdirs: the library paths to search for the library
        """
        import ctypes

        # extra keyword args to pass to LoadLibrary
        load_kw = {}
        if sys.platform.startswith('darwin'):
            default_path = 'libhdf5.dylib'
            regexp = re.compile(r'^libhdf5.dylib')
        elif sys.platform.startswith('win'):
            if 'MSC' in sys.version:
                default_path = 'hdf5.dll'
                regexp = re.compile(r'^hdf5.dll')
            else:
                default_path = 'libhdf5-0.dll'
                regexp = re.compile(r'^libhdf5-[0-9].dll')
            # To overcome "difficulty" loading the library on windows
            # https://bugs.python.org/issue42114
            load_kw['winmode'] = 0
        elif sys.platform.startswith('cygwin'):
            default_path = 'cyghdf5-200.dll'
            regexp = re.compile(r'^cyghdf5-\d+.dll$')
        else:
            default_path = 'libhdf5.so'
            regexp = re.compile(r'^libhdf5.so')

        path = None
        for d in libdirs:
            try:
                candidates = [x for x in os.listdir(d) if regexp.match(x)]
            except Exception:
                continue   # Skip invalid entries

            if len(candidates) != 0:
                candidates.sort(key=lambda x: len(x))   # Prefer libfoo.so to libfoo.so.X.Y.Z
                path = op.abspath(op.join(d, candidates[0]))
                break

        if path is None:
            path = default_path

        print("Loading library to get build settings and version:", path)

        self._lib_path = path

        if op.isabs(path) and not op.exists(path):
            raise FileNotFoundError(f"{path} is missing")

        try:
            lib = ctypes.CDLL(path, **load_kw)
        except Exception:
            print("error: Unable to load dependency HDF5, make sure HDF5 is installed properly")
            print(f"on {sys.platform=} with {platform.machine()=}")
            print("Library dirs checked:", libdirs)
            raise

        self._lib = lib

    def autodetect_version(self):
        """
        Detect the version of the loaded HDF5 library, and return it as a
        (major, minor, release) tuple of ints.

        Raises an exception if anything goes wrong.
        """
        import ctypes
        from ctypes import byref

        major = ctypes.c_uint()
        minor = ctypes.c_uint()
        release = ctypes.c_uint()

        try:
            self._lib.H5get_libversion(byref(major), byref(minor), byref(release))
        except Exception:
            print("error: Unable to find HDF5 version")
            raise

        return int(major.value), int(minor.value), int(release.value)

    def load_function(self, func_name):
        try:
            return getattr(self._lib, func_name)
        except AttributeError:
            # No such function
            return None

    def has_functions(self, *func_names):
        for func_name in func_names:
            if self.load_function(func_name) is None:
                return False
        return True

    def has_mpi_support(self):
        return self.has_functions("H5Pget_fapl_mpio", "H5Pset_fapl_mpio")

    def has_ros3_support(self):
        return self.has_functions("H5Pget_fapl_ros3", "H5Pset_fapl_ros3")

    def has_direct_vfd_support(self):
        return self.has_functions("H5Pget_fapl_direct", "H5Pset_fapl_direct")
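

# Illustrative (commented-out) use of HDF5LibWrapper; the directory below is a
# placeholder, and none of this runs as part of the build:
#
#     wrapper = HDF5LibWrapper(['/usr/lib/x86_64-linux-gnu'])
#     print(wrapper.autodetect_version())    # e.g. (1, 10, 7)
#     print(wrapper.has_ros3_support(), wrapper.has_direct_vfd_support())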
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1727303943.0
h5py-3.13.0/tox.ini0000644000175000017500000000560014675110407014702 0ustar00takluyvertakluyver[tox]
# Ideally we would want an envlist like
# envlist = {py36,py37,pypy3}-{test}-{deps,mindeps}-{,mpi4py}-{,pre},nightly,docs,checkreadme,pre-commit
# but mpi4py and pre should be skipped by default, so the reduced envlist below omits those factors
envlist = {py39,py310,py311,py312,pypy3}-{test}-{deps,mindeps},nightly,docs,apidocs,checkreadme,pre-commit,rever
isolated_build = True
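
# Illustrative invocations (typical tox usage, not mandated by this file):
#   tox -e py311-test-deps    # run the test suite against CPython 3.11
#   tox -e docs               # build the HTML documentation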

[testenv]
deps =
    test: pytest
    test: pytest-cov
    test: pytest-mpi>=0.2

    # For --pre, 2.0.0b1 has a header size difference that has been reverted, so avoid it
    py39-deps: numpy>=1.19.3,!=2.0.0b1
    py310-deps: numpy>=1.21.3,!=2.0.0b1
    py311-deps: numpy>=1.23.2,!=2.0.0b1
    py312-deps: numpy>=1.26.0,!=2.0.0b1

    mindeps: oldest-supported-numpy

    mpi4py: mpi4py>=3.1.1

    tables-deps: tables>=3.4.4
    tables-mindeps: tables==3.4.4

# see pytest.ini for additional common options to pytest
commands =
    test: python -c "import sys; print('64 bit?', sys.maxsize > 2**32)"
    test: python {toxinidir}/ci/fix_paths.py
    test: python -c "from h5py.version import info; print(info)"
    test-!mpi4py: python -m pytest --pyargs h5py --cov=h5py -rxXs --cov-config={toxinidir}/.coveragerc {posargs}
    test-mpi4py: mpirun -n {env:MPI_N_PROCS:2} {envpython} -m pytest --pyargs h5py -rxXs --with-mpi {posargs}
changedir =
    test: {toxworkdir}
passenv =
    H5PY_SYSTEM_LZF
    H5PY_TEST_CHECK_FILTERS
    HDF5_DIR
    HDF5_VERSION
    HDF5_INCLUDEDIR
    HDF5_LIBDIR
    HDF5_PKGCONFIG_NAME
    HDF5_MPI
    MPI_N_PROCS
    OMPI_* # used to configure OpenMPI
    CC
    ZLIB_ROOT
    CIBUILDWHEEL
allowlist_externals =
    mpirun
setenv =
    COVERAGE_FILE={toxinidir}/.coverage_dir/coverage-{envname}
# needed otherwise coverage cannot find the file when reporting

pip_pre =
    pre: True

[testenv:nightly]
pip_pre = True
basepython = python3.9

[testenv:docs]
skip_install=True
# Work around https://github.com/tox-dev/tox/issues/2442
package_env = DUMMY NON-EXISTENT ENV NAME
changedir=docs
deps=
    -r docs/requirements-rtd.txt
commands=
    sphinx-build -W -b html -d {envtmpdir}/doctrees .  {envtmpdir}/html

[testenv:apidocs]
changedir=docs_api
deps=
    sphinx
commands=
    sphinx-build -W -b html -d {envtmpdir}/doctrees .  _build/html

[testenv:checkreadme]
skip_install=True
# Work around https://github.com/tox-dev/tox/issues/2442
package_env = DUMMY NON-EXISTENT ENV NAME
deps=
    build
    twine
commands=
    python -m build --sdist
    twine check --strict dist/*

[testenv:pre-commit]
skip_install=True
# Work around https://github.com/tox-dev/tox/issues/2442
package_env = DUMMY NON-EXISTENT ENV NAME
deps=pre-commit
passenv =
    HOMEPATH
    SSH_AUTH_SOCK
commands=
    pre-commit run --all-files

[testenv:rever]
skip_install=True
# Work around https://github.com/tox-dev/tox/issues/2442
package_env = DUMMY NON-EXISTENT ENV NAME
deps=
    re-ver
    xonsh
    lazyasd
    ruamel.yaml
commands=
    rever check --activities version_bump,changelog